Thursday, May 03, 2012

MASS AGO Conference

I recently had an opportunity to attend (and present at) the Massachusetts 2012 National Cyber Crime Conference, and wanted to take the time to share some of my takeaways.

As with many conferences, there were a number of excellent presentations, as well as a number of fruitful conversations outside of the classrooms.

I had an opportunity to sit next to Det. Cindy Murphy at lunch on both Monday and Tuesday, and one of the things we talked about on Tuesday was the idea that forensic analysts need to understand what their "customers" need, and that the end users of analysis (investigators, attorneys, corporate senior-level managers, etc.) need to know what is possible and available.  Now, this is nothing new, but it is clearly something that still needs to be addressed.  When I say "still", I don't mean that it hasn't been addressed, but due to the volatile nature of jobs and the community as a whole, this is something that has to be readdressed and refreshed over time.

The issue seems to be that on one side of the table we have some very technical folks (the analysts) who are performing the analysis, and on the other side of the table we have less technical folks who need to make use of the results of that analysis, whether it's an investigator or prosecutor pursuing a case, or a senior-level executive who needs to make a business decision.  Very often, a translation needs to occur, and in most cases there isn't someone specifically designated to do it.  As an extension of that, many cases involve the end customer asking the analyst to "find bad stuff", most often with little to no context, and the analyst then heading off to perform the work without asking any further questions.  End customers need to understand what can be achieved through forensic analysis, particularly if specific goals are provided.

I had another conversation shortly after a presentation that led me to understand that, as powerful as they can be, one limitation of open source tools is the lack of access to information that is only available from the operating system vendor or application developers; specifically, information regarding how certain fields within data structures are populated (created, modified, etc.).  Most times, those creating open source tools for use in digital forensic analysis have access to only a very limited amount of information to explain how a particular artifact is created or modified, or how a field is populated, and that information is obtained via observation and monitoring.  What the tool authors don't have access to is the inside information regarding all of the ways in which the various fields are populated, managed, and modified.  Examples of this include (but are not limited to) the following (a short parsing sketch follows the list):

- The algorithm used to manage the "count" value in the UserAssist subkey values
- What can populate the TypedURLs key within the Registry
- The Shim Cache data structures described in Mandiant's blog post
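
To make the first example concrete, here's a minimal sketch of how an open source tool might parse a single UserAssist value.  The ROT13 encoding of the value names is well documented, but the run count and last-execution offsets shown here are the ones observed in Windows 7 value data; the vendor's actual logic for updating the count is exactly the kind of inside information tool authors don't have, so treat the offsets as assumptions based on observation rather than documented fact:

```python
import codecs
import struct
from datetime import datetime, timedelta, timezone

def filetime_to_dt(ft):
    """Convert a Windows FILETIME (100ns ticks since 1601-01-01 UTC) to a datetime."""
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

def parse_userassist_value(name, data):
    """Decode a single UserAssist value, assuming the 72-byte Windows 7 layout.

    Value names are ROT13-encoded; the run count (DWORD at offset 4) and
    last-execution FILETIME (QWORD at offset 60) are offsets derived from
    observation and monitoring, not from vendor documentation.
    """
    decoded_name = codecs.decode(name, "rot_13")
    run_count = struct.unpack_from("<I", data, 4)[0]
    last_run = struct.unpack_from("<Q", data, 60)[0]
    return decoded_name, run_count, filetime_to_dt(last_run) if last_run else None
```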

As such, in most cases, open source tools are going to give an analyst access to information, and it is up to the analyst to collect and correlate additional information in order to build context around the data and provide a more accurate interpretation of it...yes, this is a pitch for timeline analysis.  ;-)
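
For those unfamiliar with the approach, the core idea is simply to normalize events from many different data sources into a common format and sort them by time.  Here's a minimal sketch using a five-field, pipe-delimited TLN-style event (time, source, system, user, description); the Event class and the source tags are illustrative, not taken from any particular tool:

```python
from dataclasses import dataclass

@dataclass
class Event:
    time: int      # Unix epoch time, UTC
    source: str    # data source tag, e.g. "REG", "FILE", "EVT"
    system: str    # system the event was observed on
    user: str      # user context, if any
    desc: str      # human-readable description

def build_timeline(*event_lists):
    """Merge events from multiple artifact parsers into one time-ordered list."""
    merged = [e for lst in event_lists for e in lst]
    return sorted(merged, key=lambda e: e.time)

def to_tln(event):
    """Render an event as a pipe-delimited TLN-style line."""
    return f"{event.time}|{event.source}|{event.system}|{event.user}|{event.desc}"

# Hypothetical usage: events from a Registry parser and a file system parser
# fall into a single timeline, so related activity lines up side by side.
events = build_timeline(
    [Event(1336046400, "REG", "HOST1", "user1", "UserAssist - C:\\tools\\foo.exe run")],
    [Event(1336046460, "FILE", "HOST1", "user1", "MACB - C:\\tools\\foo.exe")],
)
for e in events:
    print(to_tln(e))
```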

Overall, I think the conference went extremely well.  I spoke to a number of attendees who were very happy with how the conference was set up and organized, as well as with the content that was offered.  I'm told that the conference had more than 460 attendees.  I hope to be invited to future events.

Thanks to the folks who set up the conference, and a special thanks to Natasha and Chris for making the conference the success that it was!
