Wednesday, December 30, 2009

Lions, and tigers, and DECAF...oh, my!

Most of us are aware that DECAF is back, with a new version. Ovie had interviewed Mike and posted the podcast (I'd also posted, and others had commented), and the next day, the original DECAF was taken down. Now it's back, like Die Hard sequels, or a bad MRE...but I kid. See? Smiley face. ;-)

So, how do DECAF and tools like it affect the current state of incident response (IR) and digital forensics (DF) analysis, if at all?

In order to discuss this, let's go back to COFEE. MS released COFEE as an LE-only tool, and IMHO, that's the reason for the hoopla surrounding its release and subsequent "leak"...that it was LE-only. The fact of the matter is that while COFEE was released by MS, and includes the use of several native Windows apps and some from MS/SysInternals, it's really just a glorified batch file with some add-ons to make its use and deployment easier...which, I think, is the key to understanding the nature of COFEE.

While COFEE was released by MS, it wasn't developed by MS in the way that Windows 7 or MSWord were...instead, it was a small group within MS that led the charge on developing and releasing COFEE to law enforcement.

Now, please understand...I am not disparaging the efforts of this group at all. In fact, I applaud their efforts in stepping forward and taking the first step to do something. I mean, there have been plenty of similar frameworks out there for folks to choose from, and some of them very good...but for some reason, COFEE was very well received. However, I will suggest that perhaps making something too easy to use isn't always the best solution. Sometimes, when it really comes down to it, it may be better...albeit not necessarily easier... to educate the user than it is to make a tool easier to use and deploy.

Given that, my own personal, overall assessment of COFEE is that it's a tool produced by folks who don't do a great deal of IR or "live response" work, for folks who don't do a great deal of IR or "live response" work. Don't make this statement out to be something that it isn't...I'm not disparaging anyone when I say this. All I'm saying is that in the corporate arena, many of us have taken the time and effort to educate ourselves on the issues and nuances of live response, whereas LE may not have had that sort of time...after all, when I'm reading or testing stuff, I'm sure most LE are diligently working to put bad guys in jail.

Then there was DECAF. I remember that there was discussion about "LE-only" this and "vulnerabilities" that. However, listening to the CyberSpeak podcast interview, the "vulnerability" that Mike and friends found in COFEE was never specifically stated, pretty much leaving that particular issue up to speculation on the part of the community as a whole.

So where does that leave us? I'd suggest that the release of DECAFv2 really doesn't change anything at all. Here's why...

First, I don't necessarily see a vast deployment of this sort of tool. Think about it...how much encryption or steganography is actually being used for nefarious purposes? From what I've seen over the past 10 or so years, as well as from talking to others, the use of steganography outside of the academic or lab environment seems to be one of those "urban legends" of digital forensics. Even when looking at some pretty sophisticated or large-scale intrusions, I (and others) haven't seen a great deal of what's generally referred to as "anti-forensics". I hate to say it, but it really isn't necessary.

Second, I don't see this being deployed on servers to any great extent. I mean, really...think about it. Your company gets compromised and a great deal of money is spent to get a consulting team on-site to assist. Do you want to be the one to explain to the CEO why you installed DECAF on a system, when the efforts of the responders were hampered, or worse, the system BSoD'd?

Third, DECAF is signature-based, and does ship with some default signatures (although I have no idea what "FTK Imager Port" is from the Press Release page...I don't use any tool named that, so I guess I'm safe). Beyond that, how many users are going to go about crafting and distributing custom signatures? I mean...really. Better yet, who's going to write the DECAF signature for DECAF?

Fourth, let's say that you do encounter a system with DECAF installed...it's likely going to be an uber-admin's desktop or laptop...and there are other ways to collect data. Like pull the hard drive out and image it. Sure, you may not be able to get volatile data, but you may not need it for what you're doing, or you may have other data available to you.

Finally, I have no doubt that someone's going to come up with a DECAF detection tool. There are tools available that will detect the presence of whole disk or volume encryption on live systems, and detecting the presence of DECAF running...and doing so using a tool with random signatures (remember RootkitRevealer??) makes use of capabilities that have already been employed. In a way, this reminds me of the whole escalation stuff that happens with combat weapons. Tanks have armor, so someone makes a TOW missile. Then someone puts reactive armor on the tank. Then someone else puts a probe on the end of the TOW missile. And on. And on. DECAF comes out...someone else will ultimately produce a DECAF detection tool.

Circling back around to the beginning of the post, what we, and in particular LE, don't necessarily need is easier-to-use tools. Sure, there's a certain level of usability in GUI-based tools over CLI-based tools, but what's needed is education, first and foremost. After all, a knowledgeable responder is better prepared. Knowing is half the battle! Or something like that...

Monday, December 28, 2009

Investigating Breaches, pt. II

As a follow-on (or one of them, anyway) to my previous post on investigating breaches, I wanted to perhaps scratch the surface a bit more (as opposed to "digging deeper") regarding the subject.

Take for instance a Windows XP laptop, which is a pretty typical corporate configuration. Let's say that some suspicious activity was seen originating from this system by network admins, when the system was connected to the corporate network. Management feels that the employee is loyal and not the issue, and they suspect that the system may have been compromised while it was connected to another network, such as the employee's home or a hotel network. As such, management would like you to examine the system and determine if it had been compromised, and if so, how and when.

Pretty easy, right? Maybe, maybe not. Like I said, often it's relatively easy to find secondary indicators of a compromise...tools that the intruder may have loaded onto the system following the initial compromise. However, it may not be so easy to find the primary indicators of the initial intrusion.

So, when starting such an investigation, what would you want to look for? Well, I'd start by getting an idea of what services and applications are installed on the system. For example, was there a web server (hey, I've seen it!) installed? FTP server? Anything used for remote access? PCAnywhere? VNC? What about applications...any P2P? Also, be sure to check the Windows firewall configuration...I've seen indicators of malware there.
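To give you an idea of what one of those quick checks might look like in code, here's a minimal sketch (NOT a RegRipper plugin, just an example using the Parse::Win32Registry module) that lists the XP firewall's application exceptions from a SYSTEM hive extracted from the image. The key path is what I'd expect on an XP-era system, so verify it against your own test systems:

#! c:\perl\bin\perl.exe
# Sketch: list XP firewall application exceptions from a SYSTEM hive
use strict;
use Parse::Win32Registry;

my $hive = shift || die "SYSTEM hive path required\n";
my $reg  = Parse::Win32Registry->new($hive) || die "Unable to open $hive\n";
my $root = $reg->get_root_key;

# Figure out which ControlSet is current via the Select key
my $curr = $root->get_subkey("Select")->get_value("Current")->get_data();
my $path = "ControlSet00".$curr."\\Services\\SharedAccess\\Parameters".
           "\\FirewallPolicy\\StandardProfile\\AuthorizedApplications\\List";

if (my $key = $root->get_subkey($path)) {
    print "LastWrite: ".scalar(gmtime($key->get_timestamp()))." UTC\n";
    foreach my $val ($key->get_list_of_values()) {
        print "  ".$val->get_name()."\n";
        print "    -> ".$val->get_data()."\n";
    }
}
else {
    print "AuthorizedApplications\\List key not found.\n";
}

Anything in that list that looks out of place...say, an executable running out of a temp directory...is worth a closer look.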

Another aspect to look at is, what is the user's default browser (yes, there is a RegRipper plugin for that!)? IE? Well, what files are in their cache (browser drive-bys or "browse-bys" are a big issue)? Any documents? PDF files? JavaScript? Anything that can download or carry executable code? Email? What about Outlook, and any files downloaded as Outlook attachments?

While we're on the subject of IE, index.dat files, and browsing history, don't forget about checking the Default User profile for indications of web browsing history...another quick check but hey, I've found some pretty amazing things here.

An alternative means of analysis is to mount the image and scan it for malware, being sure to see what AV was installed on the system (if any...be sure to check for MRT) and then use something else (PCTools, AVG, Avira...be sure to check licenses), looking for malware, keyloggers, etc. Other quick checks for a better view include checking the hosts file for modifications, the firewall configuration (mentioned above), running tools like wfpcheck, etc. All of this can be done rather quickly, and provide a much more comprehensive analysis than just running AV. From this, you might hope to find some secondary indicators that might at least provide a point-in-time reference, the reasoning being that a secondary indicator would occur after a primary one.

Other areas of analysis include Event Logs, Registry autostart analysis (more of a follow-on to looking into running services), and even analyzing any available crash dump logs for indications of unusual processes.

What I'd really like to do is find a forensic challenge available online that consists of analyzing a compromised Windows system, and provide a walk-through of the methods and procedures used in the analysis. While there are a number of images online for download as part of challenges, the ones I have found involve malicious activity on the system, conducted by the user...I have yet to find one that involves a system being compromised or hacked remotely.

Investigating Breaches

I recently received an email from someone who said that he wanted to learn more about "network intrusion investigations". Seeing this, I asked him to clarify, and he said that he was interested in learning what to look for when someone breaks into a system from the network.

This got me to thinking...how would one go about teaching this subject? What's a better way to see if you really understand something than to sit down and try to communicate a how-to to another person? So I started thinking about this, and it brought up another conversation I had with some other folks...actually, a series of conversations I had over about 18 months. Specifically, I had conversations about intrusion investigations with some guys who went about discovering vulnerabilities and writing exploits. My thinking was that in order for these guys to be able to report a successful exploit using a vulnerability they found, they would have to have a test system or application, and a condition to define success. I won't go into the details of the exchange...what matters here is that at one point, one of them said, "you aren't looking for artifacts of the initial intrusion...you're looking for artifacts of what the bad guy does after the exploit succeeds."

Well, I have to say, I disagreed with that at that time, but for the purposes of investigating data breaches, one of the primary things you need to determine is, was the system in question, in fact, breached? One way to answer that question is to look for indications of "suspicious" activity on the system. Was malware or some means of access or persistence installed? AV scans are a good place to start, but I'd suggest that analyzing the acquired image for indications of AV tools already installed should be the first step. Why is that? How many analysts mount an acquired image and scan it with an updated commercial AV scanning tool? Okay, you can put your hands down. Now, how many check the system or the image first to see that the tool they use wasn't the one installed on the system? See my point? If the AV scanner missed something once, what's to say it won't miss it again?
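As a rough illustration of that first step, here's a minimal sketch (again, just an example using Parse::Win32Registry, not a polished tool) that lists installed applications from the SOFTWARE hive so you can eyeball the output for AV products:

#! c:\perl\bin\perl.exe
# Sketch: list installed apps from a SOFTWARE hive; look for AV products
use strict;
use Parse::Win32Registry;

my $hive = shift || die "SOFTWARE hive path required\n";
my $reg  = Parse::Win32Registry->new($hive) || die "Unable to open $hive\n";
my $root = $reg->get_root_key;

my $key = $root->get_subkey("Microsoft\\Windows\\CurrentVersion\\Uninstall")
    || die "Uninstall key not found\n";

foreach my $k ($key->get_list_of_subkeys()) {
    my $dn   = $k->get_value("DisplayName");
    my $name = $dn ? $dn->get_data() : $k->get_name();
    # Key LastWrite time gives a rough install/update reference point
    print scalar(gmtime($k->get_timestamp()))." UTC  ".$name."\n";
}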

Anyway, I've already talked about malware detection, so I won't belabor the point here.

Peter Silberman of Mandiant once mentioned that malware is often "the least frequency of occurrence" on a system, and in many instances, the same may apply to an intrusion. As such, the analyst will not be looking for sudden increases in activity on a system but instead looking for very specific data points within a discrete time window. I've created several timelines in which the predominance of file system activity was the result of system or application updates, as well as AV scans run by the administrators. Often, the necessary data points or time window may be established via other means of analysis.

Overall, I don't believe that you can teach one specific means for investigating breaches, and consider that sufficient. Rather, what must be done instead is to develop an overall understanding of data sources, and what a breach "looks like", and then conduct specific analysis from there.

For example, by what means could a system be breached? Well, let's narrow the field a bit and say that we're looking at a Windows XP laptop...what possibilities are there? There are those services that may be running and listening for connections (check the firewall configuration as it may not be the default), or there may be indications of a breach as a result of user activity, such as downloads via the browser (intentional or otherwise), email, P2P applications, etc. What we may end up doing is examining the system for secondary indications of a breach (i.e., creation of persistence mechanisms, such as user accounts, etc.), and working from there to, at the very least, establish a timeline or at least an initial reference point.

Another point to remember about investigations in general is that the more data points that you have to support your findings, the better. This not only helps you build a more solid picture (and eliminate speculation) of what happened on the system (and when) but it also allows you to build that picture when some data points do not exist.

Let's look at an example...let's say that you suspect that the system you're examining may have been accessed via Terminal Services, using compromised credentials. You examine the Windows Services found in the Registry and determine that Terminal Services was set to start when the system started, and other data points to the fact that remote connections were allowed. So your next step might be to look at Security Event Log entries that would show signs of logins...but you find through Registry and Event Log analysis that auditing of login events wasn't enabled. What then? Well, you might think to look for the time that specific files had last been accessed...but if the system you're examining is a Vista system, well, by default, the updating of last access times is disabled. So what do you do?
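Going back to those first Terminal Services checks for a second, here's a minimal sketch of what that Registry analysis might look like (assuming Parse::Win32Registry and a SYSTEM hive extracted from the image; check the value names and meanings against your own test systems):

#! c:\perl\bin\perl.exe
# Sketch: quick Terminal Services checks against a SYSTEM hive
use strict;
use Parse::Win32Registry;

my $hive = shift || die "SYSTEM hive path required\n";
my $reg  = Parse::Win32Registry->new($hive) || die "Unable to open $hive\n";
my $root = $reg->get_root_key;
my $ccs  = "ControlSet00".$root->get_subkey("Select")->get_value("Current")->get_data();

# TermService Start value: 2 = Automatic, 3 = Manual, 4 = Disabled
if (my $svc = $root->get_subkey($ccs."\\Services\\TermService")) {
    print "TermService Start value : ".$svc->get_value("Start")->get_data()."\n";
}

# fDenyTSConnections = 0 means remote connections are allowed
if (my $ts = $root->get_subkey($ccs."\\Control\\Terminal Server")) {
    if (my $deny = $ts->get_value("fDenyTSConnections")) {
        print "fDenyTSConnections      : ".$deny->get_data()."\n";
    }
    print "Key LastWrite           : ".scalar(gmtime($ts->get_timestamp()))." UTC\n";
}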

Again...the more data points you have to support a finding, the better off you'll be in your analysis. Where one data point is good, six may be better. A knowledgeable analyst will know that while some modicum of work will be needed to establish those other five data points, much of it can be automated and made repeatable, increasing efficiency and reducing analysis time.

Perhaps a way to illustrate the overall theme of this post is to look at examples, and that's what we'll be doing in the future. In the meantime, questions and comments are always welcome.

Wednesday, December 23, 2009

Links and Stuff

Practicals
I love stuff that is practical...stuff you can look at, follow along on, and by the end of the hour/day, have a new skill. I received an email this morning from my Twitter account telling me that Jaime Blasco was following me...I checked out his profile and found a link to something he'd written up on analyzing malicious PDF documents, using pdfid.py from Didier Stevens. This is really good stuff, as it can help an analyst narrow down what is perhaps the most difficult question to answer...how was a system infected or compromised? Many times, it's relatively simple to tell that a system had been infected or compromised, but how (and when) that happened can often remain a mystery. In some cases, knowing what had infected the system can lead the analyst in investigative directions, but most often it seems to lead to speculation. Practicals like what Jaime presented demonstrate how an analyst can narrow down the infection/compromise vector, particularly since we're seeing a great deal of malware that gets on a system as a secondary or tertiary download.

Training
Speaking of practicals and training, Julia King has some great tips on budget cuts and training in a recent ComputerWorld article. Over the years, I've seen folks invest in training that really wasn't useful...immediately or otherwise...to them. This is not to say that the instruction was bad...not at all, there are some excellent training providers out there. However, I've seen people go to courses that teach them how to perform digital forensics using Linux platforms, on Linux systems...and their infrastructure is all Windows. This is simply a matter of bad corporate planning and decision making, and perhaps a lack of availability of the appropriate training when training budgets are available. Training should be specific to meet your needs, and the skills should be used immediately upon the team member's return.

Addendum: For the record, the above statement (i.e., "I've seen people go to courses that teach them how to perform digital forensics using Linux platforms, on Linux systems...") is simply an example that harkens back to an HTCIA presentation I was giving, and in the next classroom was a course similar to the one described. The "people" referred to, in this case, were attendees of the course who commented that the training was very good, but they mostly acquired and analyzed Windows systems. Nothing disparaging was said about anyone, nor about any training provider. Thanks.

IR Planning
With respect to incident response planning, here's a great quote from Julia King's article mentioned above (the quote is from a CISO):
Someday, something bad is going to happen to your company. A laptop may get stolen or data gets stolen or a virus gets inserted into your network. Before you ever get to that point...

See what he says? He's aware that it's not a matter of if an incident occurs, but when. Further, the quoted CISO advocates taking steps to be prepared for when an incident does occur.

Reporting
Keith Ferrell over at DarkReading has picked up on the RAM scraper thing previously mentioned in The Register by Dan Goodin. Keith's article has a list of eight things to do to improve the situation specific to RAM scrapers, as well as concerning malware, in general. The biggest thing that's missing, however, is a change in corporate culture. Perhaps the reason why security appears so lax on these systems is because there's no real corporate direction regarding information security.

I mean, think about it. Have you ever analyzed an image acquired from a server...one of the ones, say, in a rack in the data center...and found web browser history and maybe even email files?

Tools
Speaking of servers in data centers, have you seen F-Response TACTICAL? Be sure to check out the Forensic4Cast review of TACTICAL.

Geoff Chappell has a really good site with a lot of great information available. He's done a great deal of detailed research, and has provided some of that free of charge. When I first ran across his site, I found an excellent explanation of the bootstat.dat file found on Vista and above systems, and recently completed a parser for the file. Here's an example of the output:

C:\Perl\forensics\timeline>bsparse.pl -f d:\cases\vista\bootstat.dat
Timestamp  : 41995 sec
Entry Size : 64 bytes
Sev Code   : 1
Version    : 2
Event ID   : 0x1
System Time: Fri Jun 19 11:39:41 2009 UTC

Timestamp  : 41995 sec
Entry Size : 120 bytes
Sev Code   : 1
Version    : 2
Event ID   : 0x11
Type       : 0x0
Path       : \Windows\system32\winload.exe

And yes, it also does TLN output:

C:\Perl\forensics\timeline>bsparse.pl -f d:\cases\vista\bootstat.dat -t -s MYSYSTEM
1245411581|BOOTSTAT|MYSYSTEM||Bootstat.dat log file init event

Okay, at this point, you're probably wondering...beyond an academic exercise in binary parsing using Perl, how is this helpful? Well, according to Geoff's site, the bootstat.dat file is the BootManager log file. Right there, that should grab you...anything that's a "log file" might be useful in forensic analysis. A log file initialization event has a time stamp associated with it (as illustrated above), which can be used to correlate additional events, such as from Windows Event Logs (.evtx files), file system metadata, etc. For example, the LastShutdownTime from the Registry would ideally be followed by a bootstat.dat log file initialization event...right?
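Just to illustrate, here's a minimal sketch of pulling that LastShutdownTime from a SYSTEM hive for the correlation (assuming Parse::Win32Registry and the usual FILETIME-to-Unix-epoch conversion; verify the key path against your own systems):

#! c:\perl\bin\perl.exe
# Sketch: pull LastShutdownTime from a SYSTEM hive for correlation
use strict;
use Parse::Win32Registry;

my $hive = shift || die "SYSTEM hive path required\n";
my $reg  = Parse::Win32Registry->new($hive) || die "Unable to open $hive\n";
my $root = $reg->get_root_key;
my $ccs  = "ControlSet00".$root->get_subkey("Select")->get_value("Current")->get_data();

my $win  = $root->get_subkey($ccs."\\Control\\Windows")
    || die "Control\\Windows key not found\n";
my $data = $win->get_value("ShutdownTime")->get_data();

# ShutdownTime is an 8-byte FILETIME; convert it to a Unix epoch value
my ($lo, $hi) = unpack("VV", $data);
my $epoch = int(($hi * 4294967296 + $lo) / 10000000 - 11644473600);

print "LastShutdownTime: ".scalar(gmtime($epoch))." UTC ($epoch)\n";

Drop that value into your timeline, and the next bootstat.dat log init event should follow right along behind it.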

This is another example of data that is available to a knowledgeable analyst...data that can be used to build a more complete picture of what's going on with respect to the system, and the goals of the analysis. And we're not talking hours and hours of work here...we're talking about a few simple tools and a documented process. In fact, collecting data and constructing a timeline from that data can be part of an intake procedure.

Other Stuff
There's another CyberSpeak podcast up and available! Thanks to Ovie and Bret for turning out these podcasts!

Sunday, December 20, 2009

Using RegRipper

Now and again, I get stories from folks who've used RegRipper or the accompanying tools (rip, ripXP) to meet their needs. When that happens, many times I'll ask if I can post their experience and as in this case, I received the "OK":

I would like to share with you the experience of a recent engagement I have done where RegRipper proved very useful. I had to analyze 4 systems and the number of user profiles in each system varied from 6 to 15. For a system I extracted the relevant hive files, renamed them to "username_NTUSER.DAT" and dumped them in one folder. What I wanted was a certain kind of user activity like typed URLs, programs opened etc. by all users (on that system) in one file so that I can scroll through the file and get a good idea about what happened on that system i.e. what programs have been executed on the system and if I find something interesting then which username does it correspond to.

First I tried using rip.exe with shell wildcard character "*" but I guess RegRipper does not support that. So I wrote a simple perl script and pointed it to the folder where all the NTUSER.DAT files were located. The script ran a particular plugin against all the hive files and then dumped the output to one file. It also added the name of the file before its output to keep track of which username the activity belongs to.

My goal is that the new version of RegRipper will obviate the need to do this sort of thing...that all you'll have to do is mount the image read-only (tools abound for this...), point RegRipper at the mounted image, and let it go.

However, in the meantime, this is one way to handle this sort of issue. No, rip.exe doesn't handle wildcards...sorry about that, but to be honest, I simply cannot keep up with everything people are going to try to do with the tools, and say ahead of time what they will and will not do. More on that later.

Writing a Perl script or a simple batch file is exactly what rip.exe was intended for! I use rip when testing plugins, and ripXP is built on that same functionality. This is also a great example of how automating a previously manual process saves time and effort, without sacrificing accuracy. In fact, accuracy and completeness are maintained, while reducing the resources it takes to perform certain tasks, making the overall analysis much more efficient.
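To give you an idea of what that kind of wrapper might look like, here's a minimal sketch (the plugin name is just an example, and it assumes rip.pl is in your path...adjust as needed):

#! c:\perl\bin\perl.exe
# Sketch: run one rip.pl plugin against every renamed NTUSER.DAT in a folder,
# tagging each block of output with the file (i.e., user) it came from
use strict;

my $dir    = shift || die "Folder of renamed hive files required\n";
my $plugin = shift || "userassist";    # example plugin name; use whatever you need

opendir(my $dh, $dir) || die "Unable to open $dir\n";
foreach my $file (sort grep { /NTUSER\.DAT$/i } readdir($dh)) {
    print "=" x 60, "\n";
    print "Hive file: ".$file."\n";
    print "=" x 60, "\n";
    # assumes rip.pl is in the path; -r <hive> -p <plugin> is the usual syntax
    print `rip.pl -r "$dir\\$file" -p $plugin`;
    print "\n";
}
closedir($dh);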

Finally, a reminder to the community...tools like RegRipper are pretty much written in isolation. That is, I originally wrote the tools to meet my own needs, and I update them for the same purpose. Now and again, I do get submissions for improvements, and those get added, depending upon the design requirements. As one guy, I can only do so much. I've said over and over again, if there's a plugin you're interested in, send me a concise description of what you're looking for and a sample hive (emphasis added on "and" because it's not a matter of one or the other). In cases where folks have done this, I've been able to turn around the plugin fairly quickly.

Also, hives with "interesting" stuff are always a nice stocking stuffer, even all year 'round! ;-)

Addendum: I wanted to add a comment I received from David Kovar:

My primary purpose is to get a quick snapshot of how the system was used, but I'll often come back to the report to help guide me in deciding where to go with my analysis. It probably is one of the most heavily used tools in my kit.

Thanks, David!

Saturday, December 19, 2009

The Trojan Defense

The malware did it...not me.

How often do we see or hear of this? I don't see this specific question very often during exams...probably because I don't work in certain environments, per se...but I do get questions peripheral to it, and I do hear the question being asked by folks who do those sorts of investigations.

More importantly, however, is another question...how does an examiner (LE or otherwise) answer that question before it gets asked? The claim that "the Trojan/worm/malware did it" is being made more and more, and it can be a challenge to address this sort of thing...but it does need to be addressed.

The fact of the matter is that there are enough data sources on a Windows system that can provide indications as to whether certain actions (downloading illicit images, collecting/exfiltrating sensitive data, etc.) were performed by malware, or intentionally by a user. These sources may vary in their location and how useful they are, depending upon the case, but by examining and correctly interpreting the preponderance of data (rather than a few selected data points), an analyst can get an overall picture of what happened.

One example is when illicit images appear on a system...did malware reach out and download those images, or did the user install P2P software and then run a specific search for those images? Or did the user install P2P software that then led to a malware infection, and then the malware downloaded the images?

All of these questions can be answered, but what it takes to do so varies from case to case, and cannot be adequately addressed in a single blog post. Suffice to say that a knowledgeable analyst needs to look at everything, and be aware of those things they do not know. By this I mean, do not dismiss the value of Registry analysis simply because you have a deadline and do not feel that you sufficiently understand Registry analysis as a whole, or how it could apply to your case. The same is true for P2P analysis.

Addressing the "Trojan Defense" involves much more than simply mounting an acquired image and running an AV scanner across it. But like I said, that's not something that can be addressed in a blog post. This isn't a sales pitch, but what's really required is training, and regular, continuous interaction with other knowledgeable analysts, so that information and experiences can be exchanged.

DF and Disclosure

The subtitle of this post should be something like, "The disclosure discussion comes to digital forensics" or "Yes, Virginia...IR software has vulnerabilities, too."

For those of you who may not be aware, it wasn't long ago that MS's COFEE was leaked. Based on some of the comments you can find online, this occurred around the beginning of the second week of November. What's the big deal? Is it that this is "LE only"? Well, now that it's been exposed and you have an idea of what's in it...are you a better person for knowing? Or are you wishing you could forget?

Also, something that a lot of folks may not realize...MS is a HUGE company. At one point, I worked for SAIC, and internally folks described the environment not as one big company, but as 400 small companies that could and did compete against each other. MS is similar...don't expect that the left hand knows what the right hand is doing all the time. Remember WOLF? There've been other efforts within MS to produce tools and documentation, but it's not as if these have been released the way Windows 7 or MS Office are released. Tools like COFEE should be considered more of a grass-roots effort, and not something you're going to see Bill or Steve on stage endorsing.

Comparing COFEE to what's out there now and has already been in use may be a bit like comparing the GI Joe accelerator suit to the real-world MOPP gear, in some ways. You'll have to excuse the analogy...we in the "community" never seem to see eye-to-eye when it comes to this sort of thing. I'm simply trying to illustrate the point that while COFEE was released by MS to LE, that doesn't mean that it's entirely bleeding edge stuff.

Then, along came DECAF, reportedly a counter-intel, anti-forensics tool meant to detect the use of COFEE, and automatically react by doing things on the system, much like an anti-virus or -spyware utility would do.

Ovie then had the opportunity to interview Mike, one of the authors of DECAF. One of the most striking things about the interview is that Mike really didn't seem to have a well thought-out rationale behind the creation of DECAF. For good or for bad, MS released something to assist LE, and act as a force multiplier...as a former Marine, I totally understand that. Regardless of what you actually think of the tool itself, the idea was to give LE a capability that they did not have before; specifically, officers that are first on-scene would have a tool that would allow them to collect data (which could be used as evidence) prior to spoliation due to temporal proximity to an event (that means, get it now, before it's gone). Mike apparently had an issue with the tool, and found what he determined to be a shortcoming or "vulnerability", and produced DECAF to "exploit" that shortcoming. Interestingly enough, I've listened to the podcast twice now, and I'm at a loss...Mike never said exactly what the shortcoming or vulnerability was, other than that the tool could be "fingerprinted". But isn't that true for...well...just about anything?

However, what was he really doing? He wasn't so much exploiting a software "vulnerability" as he was exploiting the training (or lack thereof) of the first responders who would use COFEE. Ovie had an excellent point...what happens if a really, really bad guy had DECAF installed, and the first responders only had COFEE? There's a possibility that the bad guy could go on hurting children due to the fact that specific evidence could not be collected. If you listen to the podcast, I really think you'll see that regardless of the example used, Ovie made a very valid point, and it really seemed that Mike maybe hadn't thought all of this through.

Mike's primary issue with COFEE throughout the interview seemed to be that it could be fingerprinted...that there were automatic means by which someone could determine that COFEE was being run on a system. Okay...but isn't that true for just about ANY software? I don't know the inner workings of DECAF, but couldn't the same thing be said for other responder tools? According to an article in The Tech Herald, it appears that COFEE's primary purpose is to automate the use of 150 (wait...150?!?) tools, some of which include tools available from the MS/SysInternals site.

How many other responders use these tools?

How many other toolkits could possibly be affected by tools such as DECAF?

Consider this...why COFEE? Why not Helix? Why not WFT? Why target a tool released by MS, and not one, say, endorsed by SANS? I'm not going to speculate on those...those are for tool authors themselves to consider.

I have to say that before I actually listened to the CyberSpeak podcast, I found out that as of Friday morning, the DECAF tool had been taken down, the web site changed, and reportedly, anyone using DECAF on a system with an Internet connection would find that the tool had phoned home and self-destructed. Kind of cool, eh...an anti-forensics tool with the ability to phone home and get updates and instructions, with no user interaction beyond turning the system on...wait, that sounds kinda like a 'bot to me...

Following right along with the CyberSpeak interview, you can watch this YouTube video, as well, that pretty much reiterates what was said on the DECAFME.ORG web page. Speaking of which, here's a quote from the page:

As a security community at large, we need to band together and start relieving some of the burden off our government by giving back.

I agree with this 100%...and I agree that two guys did have an impact. However, from the interview, Mike seems like a really smart guy, so I have to ask myself...if you see something wrong, why not fix it? If you see an issue with a tool or process, why not do something to improve it? If you find an issue with a tool, particularly one aimed at assisting LE or others with their job, why not fix it? If you have a better tool or process and want to get it out there, consider Facebook, Twitter, blogging, and even contacting guys like Ovie and Bret. If the issue is the training received by LE, remember that a lot of LE is really strapped for cash...again, contact Ovie, or your local HTCIA or InfraGard chapter and see what you can do.

My point is, two smart guys can have an impact, but their motives will dictate the impact that they have. This wasn't an issue like what's seen in the usual "full disclosure" discussions...we're not talking about server software that's been deployed across the globe that has a severe vulnerability. We're talking about a tool that can and should be used by a discriminating, knowledgeable responder...and if you don't think that the tool is sufficient, do something to fix it...other than subverting it. Provide something better.

Thoughts?

Thursday, December 17, 2009

When a tool is just a tool, pt II

Okay, this is part II for this post, because I posted an awesome rant to a thread in one of the forums, and I wanted to include that here, as well, because it kind of applied...and it's my blog, I can do what I want. ;-)

The thread can be found here, and the post I'm referring to is on the third page, in response to someone mentioning, "...don't forget that if you wind up working civil or criminal cases your tools are going to get challenged in court. Open source tools are more vulnerable to attack by the opposition than commercial tools that are standardized."

My rant, if you want to call it that, had to do with what I see as a gross misconception with respect to court cases; specifically, some commercial tools are used primarily because the analysts themselves are familiar with them, and perhaps as a result, the players in the court system have also become familiar with them. That is to say that some commercial tools are recognized within the court system, and therefore, not a great deal of additional explanation is required.

As such, it isn't the tools that are challenged in court...it's the analyst and their processes.

Also, I think another huge issue that doesn't appear to be considered when folks are making statements such as the one I quoted above is that an analyst just doesn't decide one day to walk into court, take the stand, and testify. It just doesn't happen that way.

Instead, the attorney you're working with or for (prosecution or defense) is gonna want to know your answers before he asks you questions on the stand, and the fact that you're testifying and what you're testifying about are going to be part of the discovery process...so the other side is going to have a chance to cross-examine you. As such, if there's anything that would lead the attorney you're working with to suspect that you being cross-examined would sink the case, they're not going to put you on the stand. The same is true if he or she simply doesn't feel that the results of your analysis are pertinent to their case.

With me so far? I guess what I'm saying here is that there's a heck of a lot that goes on before an analyst ever gets to the point of approaching the stand in a court of law.

Now, can we agree that an acquired image is nothing more than a stream of bits, 1s and 0s, in a file on a disk? If we can agree to that, and if the integrity of that data, that stream of bits, can be verified and validated, then why does it matter what tool I use to extract data? What does it matter if an analyst determines that an illicit image is in the image using some commercial tool's Gallery View, or by mounting the image read-only with ImDisk and viewing the image file through Windows Explorer? Regardless of the tool used, the image was there, and that doesn't change. The same is true with other data...credit card numbers, other sensitive data, etc. One tool doesn't necessarily magically make it visible where some other free and/or open source tool wouldn't be able to extract the same data.
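And verifying that stream of bits is trivial...here's a minimal sketch using Perl's Digest::MD5 module; hash the image with this, or with whatever tool you prefer, and everyone should arrive at the same value:

#! c:\perl\bin\perl.exe
# Sketch: hash an image file so anyone, with any tool, can verify the same bits
use strict;
use Digest::MD5;

my $img = shift || die "Image file required\n";
open(my $fh, "<", $img) || die "Unable to open $img\n";
binmode($fh);

my $md5 = Digest::MD5->new;
$md5->addfile($fh);
close($fh);

print "MD5  ".$img." : ".$md5->hexdigest."\n";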

Now, don't get me wrong...I'm not against using commercial tools. I've used them myself (and I'm seeking therapy...just kidding) when the need has arisen. But the fact of the matter is that commercial forensic applications are just like any other tool, with their own inherent strengths and weaknesses. In some cases, I've found that processes using open-source and free tools, such as timeline creation tools, have allowed me to structure data for analysis in ways not possible through the use of commercial tools. In other cases, I've found short-comings in using commercial tools, just as I've found short-comings in using open-source and free tools. That doesn't mean that commercial tools shouldn't be used...it just means that all tools should be considered just for what they are...tools.

What should matter most is the process used and documentation created by the analyst. If you thoroughly document what you've done, then why shouldn't you be able to testify about it on the stand, regardless of the tools used? I know a few analysts who've documented their work such that someone else (i.e., LE) could validate their findings via commercial tools (because that's what the LE analyst was most comfortable with) and then testify about the "findings".

So, what do you think? Are open-source tools "more vulnerable to attack"? Why does it matter if I extracted a Registry hive file from an image, and then extracted the LastWrite time from a specific key using a Perl script? Or a hex editor? Or if someone else did the same thing, but through EnCase or FTK? The fact of the matter is that if you go to that location on the disk or within the hive file, extract 64-bits, everyone who does so should arrive at the same answer...right?
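If you're curious what that Perl script might look like, here's a minimal sketch (using the Parse::Win32Registry module)...it really is just reading and translating those 64 bits:

#! c:\perl\bin\perl.exe
# Sketch: print the LastWrite time for a specific key in a hive file
use strict;
use Parse::Win32Registry;

my ($hive, $keypath) = @ARGV;
die "Usage: lastwrite.pl <hive file> <key path>\n" unless ($hive && $keypath);

my $reg  = Parse::Win32Registry->new($hive) || die "Unable to open $hive\n";
my $root = $reg->get_root_key;
my $key  = $root->get_subkey($keypath) || die "Key not found: $keypath\n";

# The LastWrite time is a 64-bit FILETIME in the key's header; the module
# hands it back as a Unix epoch value
print $key->get_name()."\n";
print "LastWrite: ".scalar(gmtime($key->get_timestamp()))." UTC\n";

Point it at a hive file and a key path (say, lastwrite.pl ntuser.dat "Software\Microsoft\Windows\CurrentVersion\Run"), and anyone doing the same thing with a hex editor or a commercial tool should get exactly the same time.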

Or should I just curl up in the fetal position in the corner of my office, and rock myself to sleep, chanting, "I'm a pretty girl" over and over again?

When a tool is just a tool, pt I

A tool is just a tool...that's it. A tool by any other name would smell so sweet...no, wait...what? Who let Willy Shakes in here? ;-)

I know what you're thinking..."fount of wisdom, dude." Sweet. But think about it...think about what we do, and how we do it. If you hear someone say, "yeah...I do forensics. But I need <insert commercial tool name here>", then back away slowly, don't make direct eye contact and make no sudden movements.

It's long been known that subversive tools and techniques, colloquially referred to as "anti-forensics" tools, haven't been directed at subverting other tools...no, tools such as timestomp aren't meant to subvert EnCase or even NTFS. What's being targeted here is the analyst and their training.

Not sure what I mean by this? Check out Simon's post over on Praetorian Prefect that discusses, in part, that whole COFEE/DECAF yawnfest. Simon had a previous post that addressed COFEE and some of the hype surrounding the tool set being "leaked".

When you really think about it, DECAF is meant for one thing...to subvert the use of COFEE. If the responder is a one-trick pony, and ALL they have is that COFEE package...game over, and DECAF has...no, wait...I wasn't gonna say "done its job". No, what I was gonna say was that DECAF has demonstrated the shortcomings inherent to the types of responders that rely solely on the use of one tool, such as COFEE.

Have you ever heard, "...I ran these tools, some of them didn't work...but here ya go."? I was working a fairly (in)famous engagement back in Dec '06 and one of the analysts from the primary on the contract ran the Windows tools from Helix 1.9 against a live system. I was handed the data the next day, and found that a little over half of the tools didn't have output from the tool available...they just had that "XXX is not recognized as an internal or external command" message. In this case, the issue wasn't due to something being loaded on the system, rather it was an issue of someone really knowledgeable in Linux trying to do IR on a Windows system. The one tool they had didn't work...but the system could ONLY be accessed at 2am, and the analyst had no idea that anything had gone wrong. So, even though I was tasked with doing "emergency" IR, I (and everyone else) had to wait another 24 hrs to get the data we needed.

Okay, that was three years ago...but in a lot of ways, for a LOT of responders, this really hasn't changed too terribly much. Take a malware detection gig, for example...someone has a system that they think has malware on it, so the responder acquires an image of the system, and maybe memory. They take the data back to the lab, in-process the data, then mount a working copy of the acquired image read-only and scan it with one AV scanner...and find nothing, and that's ALL they do, and they report their findings. But wait...did they check to see if the scanner they used was the same one already installed on the system? If it is, what have they really done at that point? What about the fact that a great deal of malware (depending upon what you read and who you believe, this could be as much as 40-60%) isn't detected by commercial AV the first couple of months that it's in the wild?

Are you beginning to see my point here? Look at Conficker...remember Conficker? One of the things I found most interesting about it was that it took advantage of standard business processes (ie, file shares, thumb drives) to spread on internal networks. Imagine responding to a customer site and telling them, "you're gonna have to shut down your file servers until we get this cleaned up." What the...?? A great many calls came in over the next couple of months as organizations with up-to-date AV scanning engines and signatures got p0wnd'd by variants of Conficker, Virut, and other nastiness. That's right...variants. As in, stuff that the AV couldn't see, but eventually got classified as being pretty close to the stuff that the AV could see...just not close enough for it to see it. See what I mean? Not the stuff the AV could detect, but the new stuff...stuff that did the same thing as the other stuff, but just "looked" different.

The problems and p0wnage that folks faced with this stuff had everything to do with the fact that they relied on a tool, rather than a process, to protect them. "Hey, I've got AV in place...I'm good." Who said that...the dudes from AIG? If your fallback plan was to call in an IR team...well, the malware continues to own you for 72 hrs or more as you go through contract negotiations, waiting for analysts to arrive, and finally getting them spun up on your infrastructure and the situation.

Tuesday, December 15, 2009

Incident Prep, part deux

I've mentioned incident preparation before, and recently blogged on best practices, which was sort of a result of one of the comments to my first Incident Prep blog post. Specifically, the comment stated:

After having read the title I was really looking forward to this post, hoping I could compare our "CSIRP" against one you suggest - or at least making sure we've covered all the "must dos" and avoided the common traps...

So, this is that follow-up post...or one of them...that Anonymous asked about.

First, a couple of things...I can't suggest an online CSIRP example, per se, because there aren't many posted out there. However, I know absolutely NOTHING about Anonymous's infrastructure, so how can I recommend anything? Really? Ask me to recommend a good lunch, I know a wonderful carnivore-heaven/BBQ place nearby...but you may get there and say, "hey, I'm vegetarian!" Dude...you never said anything.

To address the other questions...again, knowing NOTHING about Anonymous's infrastructure or organization...the "must dos" are to have a CSIRP, and the common traps are to not have a CSIRP.

If you have a CSIRP and need a gap analysis, or if you need a CSIRP, here are a couple of things I can suggest to get you started...but again, these are all really dependent upon your organization, its internal structure, as well as geographic, political and cultural considerations:

1. Is your organization subject to any legislative or regulatory oversight? Think HIPAA, PCI, etc. If so, start there. The PCI DSS v1.2, paragraph 12.9, has some information that specifies what a CSIRP should encompass, and is a good place to start...actually, for just about anyone, not just organizations subject to PCI. But the standard caveat applies...if it's associated with the PCI DSS, it's only a start.

2. CERT.ORG has an entire section on CSIRT Development.

3. Look at what you already have in place. Do you have a disaster recovery/business continuity plan? How about response plans for medical emergencies or in the event of workplace violence?

4. Keep in mind that a CSIRP is a communications plan, outlining who does what and when, in the event of an incident. Consider this...during CPR training, trainees are told to point to someone and say, "Call 911!" If there's a crowd gathered 'round and you just say, "would someone call 911?", what're the chances someone will do so? It's a good bet that everyone in the crowd will be thinking that someone else will do it...so you need to designate response staff. Think Star Trek, only without the dudes in red shirts.

Finally, and perhaps most importantly, if you're considering hiring outside consultants to assist with CSIRP development, you have to understand that this is NOT a drive-by service. What this means is that you cannot expect to hire someone to come in and drop off a CSIRP for you...at least, not without spending 6-9 months as a member of your staff. Why is that, you ask? Consider what I've said in previous posts...what is considered best practice for one organization may be completely undoable to another. It takes work, but ultimately your organization will have to take ownership of the CSIRP...after all, it's yours; your data, your infrastructure, your CSIRP.

Monday, December 14, 2009

Link-idy link-idy

Tools
Claus (no, I'm not saying that Claus is a tool...) has posted a nice summary of some of the free tools posts from WindowsIR and added some really nice grep tools for the Windows platform. Using tools like this would greatly increase an analyst's capabilities with respect to some of the Analysis stuff I've mentioned recently. For instance, if you're interested in doing searches for PANs/CCNs, you can use BareGrep and consult the Regex reference for syntax in writing your regex; of course, additional checks will be necessary (in particular, the Luhn formula) to reduce false positives, but it's a start.
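If you want to script that additional check, here's a minimal sketch...an intentionally simple 16-digit pattern plus the Luhn formula, reading whatever you pipe into it (it makes no attempt at Amex, embedded delimiters, etc., so treat it as a starting point only):

#! c:\perl\bin\perl.exe
# Sketch: 16-digit candidate PANs + Luhn check, reading from STDIN
use strict;

sub luhn_ok {
    my @digits = reverse split(//, shift);
    my $sum = 0;
    for my $i (0..$#digits) {
        my $n = $digits[$i];
        if ($i % 2) {           # double every second digit from the right
            $n *= 2;
            $n -= 9 if ($n > 9);
        }
        $sum += $n;
    }
    return (($sum % 10) == 0);
}

while (my $line = <STDIN>) {
    while ($line =~ /\b([3-6]\d{15})\b/g) {
        print "Possible PAN: ".$1."\n" if (luhn_ok($1));
    }
}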

Memory
JL's got a new Misc Stuff post goin' on over on her blog. I'd picked up on the demise of support for mdd.exe from her blog post, as well as from Volatility (note the recommendation to use Matthieu Suiche's windd for your memory dumping needs). She also points out that MHL has some new and updated Volatility plugins, even one that incorporates YARA for scanning of malware!

Analysis
Speaking of malware analysis, Kristinn posted on PDF Malware Analysis over on the SANS Forensic Blog, as well. Add this information to Lenny's cheat sheet, and you can develop a pretty decent approach for determining the method by which a system was compromised or infected. This is often one of the most difficult aspects of analysis, and very often left to speculation in a report. Many times, the assumption is made that the initial infection/compromise vector was via web browser "drive-by" ("browse-by"??)...I say "assumption" because there is no specific data or analysis within the report which supports the statement. In most cases, the assumption is based on malware write-ups provided by AV vendors; this may be an initial indicator, but the analysis of identified files using techniques such as those identified by Kristinn and others, and listed in Lenny's cheat sheet, can provide definitive answers, and potentially even identify sources and additional issues.

Anti-Analysis
Remember the hoopla surrounding MS's COFEE being leaked? Well, check out DECAF. I wonder how this will affect other response toolkits and techniques that are similar to COFEE? Check out this post from the Praetorian Prefect blog on DECAF (thanks to Claus for the mention of the blog...).

Friday, December 11, 2009

Some New Stuff

F-Response
In case you hadn't noticed, Matt posted recently that F-Response works on Google Chrome OS! Very cool! That's another one for Matt, and in this case, he wasn't even trying!

CheatSheets
From the SANS Internet Storm Center, I found an interesting link on cheatsheets for analyzing malicious documents. This took me to Lenny Zeltser's site where he not only has the info up on a web page (click links to get the tools), but also PDF and DOCX versions of the sheet. Lenny also hosts a series of cheatsheets, some of which look quite useful.

PodCasts
In addition to the CyberSpeak and Forensic4Cast podcasts, Andy and friends have come out with Episode 0 of the Southern Fried Security podcast! The podcast faded in and out several times, but I have to say that I really enjoyed the fact that the guys discussed the business end of patch management...too often this sort of thinking is missed by purely technical guys, whether during incident response activities or pen tests.

In The "News"
I usually don't follow The Register, but I did see something come across a list recently that talked about "RAM scrapers" which, of course, caught my eye. A couple of things struck me as interesting about this article, in large part due to the fact that when I was working for a QIRA-certified organization conducting PCI forensic assessments, I (and others) saw incidents involving this sort of malware. These types of incidents involve extremely targeted attacks, as one of the malware components dumps the virtual memory of several specifically-named processes...which, from the card swipe at the PoS terminal through authorization and processing, is the only place where the track data is unencrypted. One of the things that isn't quite right in the article, however, is the mention of Perl scripts...you wouldn't expect to find Perl scripts on the system, as Windows doesn't ship with Perl installed. What you'll find instead is a Perl script (used to parse the virtual memory dump for track data) "compiled" using Perl2Exe with no switches...that last part will make sense to anyone who is involved in IR/DF and has some knowledge of Perl2Exe.

It appears that at least some part of the article was based on this Verizon Business data breach investigations supplemental report (thanks to Jamie for the link!)...drop down to page 20 and you'll see what I'm referring to. Notice that the Case Example section lists four file names...three EXE and one BAT...and the Indicators section refers to "perl scripts". If this were the case, one would think that there would be at least one file listed with a .pl extension.

Don't get me wrong...I'm not knocking the article or the report at all. I think that this kind of thing is great to see, but I do think that to the discerning eye, this sort of thing does open up some questions, as well as an opportunity for sharing...not only amongst analysts, but also with LE.

'Zine
Speaking of sharing, let's not forget the ITB...the Into The Boxes e-zine coming out in the not-too-distant future!

Monday, December 07, 2009

Plugin Browser - New RegRipper Tool

When I first released the tools with RegRipper, I thought I had a great idea in providing functionality in rip.pl/.exe to create a list of plugins with details, including which hive each plugin is intended for, the version, etc. In fact, I included the ability to create a listing of plugins in .csv format, so you could open the list in Excel.

To do this, you'd use the command line:

C:\tools>rip.pl -l -c > plugins.csv

However, I don't think that this really caught on. Over the past year or so, I've received requests for plugins that are already part of the currently available distribution.

However, the current version of rip.pl (i.e., released with RegRipper v.2.02) relies on the fact that the plugins directory (i.e., the directory where you keep your plugins) is set and static, and that you don't change it. In the new version of RegRipper, I'm supporting a user-configurable plugin directory setting.

So this is a neat little tool that came out of my work with some different UI things I have been working with and trying lately. In putting together some of the new RegRipper tools, I wanted to have something of an updated user interface; I'm programming the UI using Win32::GUI. The Plugin Browser tool came out of the desire to try some of the things I found, as well as just wanting to have something a little different.

Addendum: Based on Adam's comment to the initial post, I updated the tool to include the ability to create plugins files. As you can see in the second image, I added tabs to the UI, one of which allows you to select plugins from the list on the left, add them to the list on the right, and then click the Save button (notice the tooltip for that button visible in the image). As the plugins file will be using plugins from a particular directory, the plugins file will be saved in that directory...otherwise, the RegRipper tools won't know where to go to get the actual plugins!

One of the cool things about this is that as you're building your plugins file, you can go back and forth between the Browse and Plugin File tabs, looking at what hive the plugin is for and what it extracts, then adding it to your plugins file.

Once you save the plugins file, the content of the listbox on the right will be cleared.

Pretty cool stuff, eh?

Friday, December 04, 2009

Best Practices: What is 'best'?

I know what you're thinking..."What?!?"

As a consultant, many times I will be asked, "what are 'best practices'?" or "can you give us 'best practices'?" Well, I've often thought...what ARE 'best practices'? When I look around at best practices for various things...CSIRPs, IR, etc....I find that many times, the publicly available best practices are based on a particular technology or a particular corporate culture, and will simply not be best or even work at all for a particular customer environment.

Here's an example that I look at, because I have seen it so many times...you're a corporate security person, tasked with some level of incident response. You work for the CISO, and the corporate IT department (which includes the SOC) is nowhere in your management chain. The data center is either in another part of the building, or in another building nearby. During IR activities, you may need to obtain data (could be anything in the IR triad...network captures, data from devices, or host-based data, which includes memory and images) from systems managed by the SOC.

What are best practices for this type of situation? Truth be told, you can come up with any best practice, and I can come up with at least one real-world example or reason, and possibly more, why it simply won't work.

So, one best practice might be to immediately go to the data center, take the affected system off-line, and acquire an image of the hard drive via a write-blocker. As the IR guy, do you have access to the data center? In many cases I worked, this would be a big "no". Can you take the affected system off-line? Again...no. The system may be critical, may be RAID, or may be boot-from-SAN. Do you have money to purchase write-blockers? Again...in many cases, no.

The next approach might be to cultivate relationships with the SOC staff and get them to help you. This works sometimes, but it's usually best effort, not best practice. Does the SOC staff have the ability or training to assist you? Sure, plugging in an external HDD and running FTK Imager to obtain a live acquisition might sound easy, but try doing it when it's not part of your job, you aren't trained for it, and your manager/boss is screaming at the team to get stuff done that's already overdue.

Okay, what about pre-deploying a solution? That might work, but remember, you don't own the systems in the data center, so installing some sort of agent, not to monitor, but to allow you to collect data might not be something that you can do...at least not easily. And we all know what happens when the tasks pile up...anything related to security gets popped off the stack.

I once supported an organization in which the CIRT had NO access to anything in the infrastructure...they didn't even have admin access to their own systems. IF something was flagged and IF the CIRT received notification of it, then they had to ask the NOC staff to collect information from the affected system(s). This amounted to running a single tool against the systems...not a batch file of tools, just one single tool...and more often than not, the data wasn't returned in anything close to a timely manner.

Sometimes, best practices don't turn out to be usable practices at all. When considering best practices, we need to look beyond just the technology and take geographical, political, and cultural considerations into account.

Wednesday, December 02, 2009

Linkaliciousness

Timeline Stuff
Don has put together something very interesting with respect to timeline creation called System Combo Timeline; it looks like he's added quite a degree of automation (following artifact extraction) to the whole process of creating timelines (in TLN format) on Windows systems.

Download syscombotln and smell what Don's been cookin'!

Analysis
Speaking of Don, he and I were chatting the other day, and during the course of our discussion, we covered an analysis technique. Now, I have no doubt that this technique is NOT new, and I am sure that there are folks out there who've used it in one form or another...however, I wanted to present it here in the hopes that someone would see it and use it, or add to it. So, starting with an acquired image from a Windows system, do the following:

1. Extract unstructured data (e.g., the pagefile, unallocated space via blkls, etc.).

2. Parse the output with strings (the version from MS includes a -o switch that outputs the offset of each string within the file).

3. Search the output of strings for specific items; indications of commands being run (wget, etc.) and/or other Windows artifacts. This can be run on Windows using a keyword list in a file, via findstr (see the /G switch).

4a. Using your scripting language of choice, read in the output of strings and, when you find a "hit of interest", access the original data at the offset provided by strings, and extract either X bytes on either side of the hit, or X bytes from the hit going forward; now, you've got something of a search hit preview capability (a la EnCase). See the Perl sketch following this list.

4b. Using your scripting language of choice, process any discovered Windows artifacts based on their structure; I've seen hits such as nk and hbin (Registry keys and hives, respectively), as well as LfLe (event records). We know how to process the structures, so this is pretty straightforward.

5. You can also carve the unstructured data for files (via PhotoRec, scalpel, or foremost) as well as indications of Internet activity (e.g., Internet Evidence Finder).
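To illustrate step 4a, here's a hedged Perl sketch of a search hit preview. It assumes strings was run with its offset switch, producing one hit per line in the form "offset:string" (adjust the split if your version of strings formats its output differently); the file names and keywords are just examples:

#! c:\perl\bin\perl.exe
# Hedged sketch of step 4a: a poor man's search hit preview. File names and
# keywords below are placeholders.
use strict;

my $data   = shift || "unalloc.dat";      # unstructured data (e.g., blkls output)
my $hits   = shift || "strings_out.txt";  # output of strings w/ offsets
my $window = 50;                          # bytes to show on either side of the hit

open(RAW, "<", $data)  || die "Could not open $data: $!\n";
binmode(RAW);
open(HITS, "<", $hits) || die "Could not open $hits: $!\n";

while (<HITS>) {
	chomp;
	my ($ofs, $str) = split(/:/, $_, 2);
	next unless ($str && $str =~ m/wget|cmd\.exe|hbin|LfLe/i);   # keywords of interest
	my $start = ($ofs > $window) ? $ofs - $window : 0;
	seek(RAW, $start, 0);
	read(RAW, my $buf, ($window * 2) + length($str));
	$buf =~ s/[\x00-\x1f\x7f-\xff]/./g;                          # make the preview printable
	print "Offset ".$ofs.": ".$buf."\n";
}
close(HITS);
close(RAW);

Step 4b would follow the same pattern; rather than just printing a preview, you'd hand the bytes at the offset to code that parses the nk, hbin, or event record structure.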

Like I said...this is nothing new, I'm sure. I just wanted to put it out there so that others could see it, and perhaps provide their own take, or even add to it. I've run this technique on a test image, and found entire pages of source code, as well as some nifty artifacts.

Document Metadata
Something cool from the SANS Forensic blog was the ability to pull VB script macros from Office documents, using OfficeMalScanner. This looks like an excellent means by which you can investigate an intrusion, particularly if you find Office documents in web browser download or email attachment directories.
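For example, assuming the tool's usage hasn't changed since I last looked at it, a command line along the lines of:

C:\tools>OfficeMalScanner.exe suspicious.doc info

...should dump the document's OLE structures and save any VB macro code it finds to disk. The file name is just a placeholder; check the tool's own usage output to be sure.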

Other useful tools for examining Office document metadata include OffVis (pre-Office 2007) and cat_open_xml.pl from Kristinn. Note that Windows 7 uses the OLE format for a number of file types, including Sticky Notes, so OffVis will be very useful.

RegRipper
It seems RegRipper has gained popularity around the world! Pedro from Spain found RegRipper interesting enough to post about...thanks, Pedro! Gracias!

Speaking of RegRipper, Mike Tarnawsky sent me a plugin he'd written to extract the Internet Server Cache from the Office Registry keys in the user's hive. This plugin (oisc.pl) is posted in the RegRipper.net forum, as well as in the Win4n6 Yahoo Group Files section. Mike did a great job putting it together and even provided references in the header of the plugin that help the analyst to understand why the data is being extracted, and how it can be important to an investigation. Thanks, Mike!

Sunday, November 29, 2009

Incident Preparation

In the course of my work, I will often encounter a customer's computer security incident response plan, or CSIRP...often, not always. In some cases, it may be that the customer had a CSIRP, and simply wanted validation of the plan and their processes, or a gap analysis. However, in most cases, responders such as myself encounter no CSIRP at all, which is an indicator that the organization we're assisting simply was not prepared for an incident.

In the end, the true impact of this falls on the customer themselves (and, in many cases, may be passed on to their customers), who may have already been subject to an intrusion/compromise, and may be facing notification costs, fines...and maybe more.

The Art of Preparation
No, preparation for an incident really isn't an art...I just wanted to get you to read further. The fact is, we all know what incident preparation is and consists of, because we do it all the time. An easy example of incident preparation is when we notice that the fuel gauge in our car is nearing the big "E". We anticipate a potential incident (i.e., running out of gas) and we take steps to prepare and mitigate the risks associated with an empty gas tank...we go to the gas station and fill up.

How about those of us who live in states where it may snow? We anticipate the risks associated with a driveway covered in snow, and we prepare to mitigate those risks; we get shovels, maybe some driveway de-icer compound, make sure we have a scraper available, etc. These aren't all the steps we may take...it really depends on where we live, and how willing we are to prepare.

So what about someone who lives in Maine or Minnesota, and makes a habit of NOT having a shovel, de-icer, tire chains, a full tank of gas, etc.? Is this person prepared? Given that they're in a state with a high probability of snow, wouldn't it be prudent to take the necessary steps to prepare for an incident that has a high likelihood of occurring?

I would suggest to you that if your organization uses any sort of computing resources, the likelihood of you having (or, having had) a computer security incident of some kind is akin to that of it snowing in Maine in the winter months (and I know that this year, it already has!)...that is, the probability is rapidly approaching certainty. So why not be prepared?

Temporal Proximity
This is a term I heard a friend of mine use several years ago, and because I like stuff like Star Trek and the SciFi Channel, I keep it in the back of my brain housing group, ready to bring forth and assault my readers with. Oddly enough, it has a purpose here...that is, the closer (with respect to time) to the incident that you begin collecting information and containing it, the greater your ability to really understand what's going on and address the issues of the incident. I'll use an example to illustrate what I mean...actually, a combination of several examples: a "victim" organization is notified of a breach of data by an outside third party, fully three months after the breach occurred. After about a week of trying to understand what could have happened, a responder such as myself is called in to assist. At that point, logs have rolled over and not been saved, systems have been taken out of service and reprovisioned, and the IT staff is so busy that they can't remember what they had for breakfast, let alone what happened almost four months previously.

Another good example (by good, I really mean "seen often", not that the issue itself is good) is adding to the temporal dispersion by having relatively untrained staff conduct an "investigation" into the incident. By this point, systems have been scanned and rebooted (sometimes several times), patches installed, and again, some systems may have been rotated into or out of service. At this point, there is so much time (temporal dispersion) and activity between when the incident occurred and when any really meaningful steps are taken to respond to the incident, that the actual response activities are close to futile.

Consider an episode of your favorite variation of CSI, and let's say a crime occurred in a residence; if there is nothing mentioned about the crime for three years, and in that time, the residence has been burned to the ground, the structure completely razed and carted off to the dump, and a commercial structure built up in its place, how is Grissom or Mac Taylor going to solve the crime?

Key Elements
Some of the key elements of Incident Preparation are your CSIRP, an understanding of your infrastructure (in particular, where your critical assets/data are located), and instrumentation. Without instrumentation, you have no visibility into what's happening within your infrastructure. Guys in submarines don't troll around the ocean depths without some sort of ability and instrumentation to determine where they are and what's going on around them. Instrumentation gives you visibility, and as such, improves your temporal proximity to an incident...it shortens the time between when something happens and when you can detect and respond to it...particularly for intrusions or incidents of sensitive data leakage/theft.

Wednesday, November 25, 2009

More Timeline Creation Techniques

Some of you may have seen (or be using) the timeline tools I released within the Win4n6 Yahoo group and included in my most recent Hakin9 article on Windows Timeline Analysis. If you've taken a look at the tools, you'll notice that I have some tools available for parsing Event Logs from Windows 2000, XP, and 2003 (i.e., .evt files) into the timeline (TLN) format I use. However, there's nothing there, at the moment, for parsing Windows Event Log/.evtx files from Vista, Windows 2008, or Windows 7.

A quick look around showed me that there weren't many free (as in beer) tools for parsing .evtx files. Andreas Schuster has done a good deal of work in this area, has picked apart some of the .evtx data types, and made some tools and an article on the subject available. However, these tools are somewhat limited due to the nature of the new .evtx file format.

Then along came LogParser...freely available from MS and extremely flexible. There are a number of sites available that are dedicated to or include the use of LogParser, and there's even a GUI or two available. However, that's a bit beyond what we're going to talk about at the moment.

You can download the Logparser.msi file from MS and install it, and then copy the files from the installation folder to a thumb drive or CD, making the tool available for live incident response activities. What this means is that you now have a platform-independent means for extracting event records from Windows systems. Using this command:

E:\>logparser -i:evt -o:csv "Select * from System" > %ComputerName%-system.csv

...will do the same thing on Windows XP as it will on Vista or Windows 2008. Now, you have a nice comma-separated value file that you can open in Excel or parse with Perl, and include the entries in a timeline.
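If you want to take those entries straight into a timeline, here's a hedged Perl sketch. It assumes you narrowed the query to something like "Select TimeGenerated,EventID,SourceName,ComputerName from System" so that the column order is predictable, and that the timestamps come out as "YYYY-MM-DD HH:MM:SS"...check your own output and adjust accordingly:

#! c:\perl\bin\perl.exe
# Hedged sketch: convert LogParser CSV output (with the narrowed SELECT noted
# above) into the 5-field TLN format: time|source|system|user|description.
use strict;
use Time::Local;

my $file = shift || die "You must enter a filename.\n";
open(FH, "<", $file) || die "Could not open $file: $!\n";
while (<FH>) {
	chomp;
	next unless ($_ =~ m/^\d{4}-\d{2}-\d{2}/);        # skip the header row and any trailing stats
	my ($tg, $id, $source, $computer) = split(/,/, $_, 4);
	my ($date, $time)    = split(/\s+/, $tg);
	my ($yr, $mon, $day) = split(/-/, $date);
	my ($hr, $min, $sec) = split(/:/, $time);
	$sec =~ s/\..*$//;                                # drop fractional seconds, if any
	my $epoch = timegm($sec, $min, $hr, $day, $mon - 1, $yr);   # swap in timelocal() if your output is in local time
	print $epoch."|EVT|".$computer."||".$source."/".$id."\n";
}
close(FH);

The same sketch applies to the .csv files you generate during post-mortem analysis, described below.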

Okay, that's live response...what about post-mortem analysis? Well, it turns out that there are a couple of ways you can go on this issue. The first is to have a VM available for each version of Windows, or at least one on the Windows XP/2003 side, and one on the Vista/Windows 7 side. For example, you can mount or access an image of a Windows 2008 system on a Windows 7 system, extract the .evtx files, and use the following command:

C:\tools>logparser -i:evt -o:csv "Select * from d:\cases\System.evtx" > system.csv

At this point, all you need to do is parse the resulting .csv file...Perl works quite nicely for this.

The other option is to use just Windows 2008 or Windows 7 as your analysis platform, and convert the .evt files to .evtx format using wevtutil.exe.

C:\tools>wevtutil epl AppEvent.evt AppEvent.evtx /lf:true

This gives you the ability to parse both .evt and .evtx formats on the same platform. However, if you're primarily interested in producing a timeline of events, the timeline tools from the Win4n6 Yahoo group contain a Perl script that parses .evt files into TLN format, without relying on the API. Also, the timeline tools for parsing .evt files will be able to extract event records that aren't "seen" by the API.

Sunday, November 22, 2009

Even More Linky Goodness...

Tools
I received an email recently that let me know that the latest version of RevealerToolkit is available, a project from a Barcelona security company. The RVT framework is based on Brian Carrier's TSK tools, and even makes use of some of my code to parse EVT files. More information on RVT is available here. Also, be sure to take a look at the user guide, as well.

Remember when I got p0wned by Intel and MS? Thanks to a blog comment, I was pointed to VMLite, which provides an alternative to MS's XPMode, without the requirement for hardware virtualization in the CPU. This may definitely be something to take a look at, as virtualization can play a pretty important role in forensic analysis in a number of ways. Take a look at packages such as MojoPac and Moka-5, for example.

Rob Lee pointed the GCFA mailing list to RE-Google the other day...this is apparently (quoted) a plugin for the Interactive DisAssembler (IDA) Pro that queries Google Code for information about the functions contained in a disassembled binary. Wow, that sounds pretty cool!

Lance has posted another EnScript, this one to locate Limewire download remnants. This may be pertinent if you're looking at a case involving Limewire or just P2P in general.

Speaking of Lance, I've used the images he has made available for his practicals as examples on a number of occasions; these are excellent resources. However, if you want to work with these practicals as raw dd images (rather than .E0x format), you'll need to convert them using something like FTK Imager. But if you want to mount the EWF/E0x format images and access the files within them, you can use mount_ewf, which Chris has talked about. To do this on Windows, you need to follow these steps (from David Loveall, which Rob Lee so graciously provided to the Win4n6 Yahoo group):

1. Extract the mount_ewf files for Windows into a directory
2. Download and install the Visual Studio runtime files, if you don't already have them
3. Download and install ImDisk

At this point, you should just be able to double-click the E01 file, and tell it Open With... mount_ewf.exe. I have to say that I haven't tested this yet, but if you've got E0x files you'd like to access, and don't want to give up additional space converting them to raw dd format, this may be an option. P2 Explorer (free) and SmartMount (not free) will also allow you to mount EWF/E0x format images.

Memory Collection and Analysis
An engineer at HBGary recently posted a review of Matthieu's windd tool, based on testing against their own FDPro tool. It's an interesting read...take a look. Here's Matthieu's response, along with some personal notes. I think it's good to see, read, and digest both sides of an issue, and this is definitely worth taking a look at.

On the analysis end of things, Jeff Bryner posted about his FaceBook Memory Forensics tool (i.e., pdfbook) on the SANS Forensics Blog recently. Jeff's posted about other tools for parsing memory dumps, and I'm sure that you could use the output of the tool you're using (as opposed to pd.exe, as he mentions in the blog post) to obtain similar results. Looking at the code for pdfbook, as well as the other tools that Jeff's made available, I don't see why they can't be run across unallocated space or the pagefile, for that matter. Another thought might be to give the code the ability to do an EnCase-like preview of X number of bytes on either side of the 'hit' that's been located.

While you're conducting IR or memory analysis activities, Didier's done it again and given us all something new to worry about with SelectMyParent! SMP is a proof-of-concept tool to demonstrate that with the right privileges, you can create a process and designate a parent process for that process. So, instead of running Notepad or Solitaire with your privileges, as a child process of Explorer.exe, you can run it as a child of lsass.exe. And yes, I know what you're thinking...so what? Who's really going to use something like this? Perhaps malware authors...

Print Matter
As a side note from Jeff's post, DFM has its inaugural issue available...this may be something worth taking a look at. I'd like to see how it compares to Into The Boxes...hopefully, the two will play more of a supporting role than a competitive one.

Along those lines, my second article on timeline analysis is now available in Hakin9 magazine. This one is a hands-on walk-through for using the tools I discuss (and make available via the Win4n6 Yahoo group...go to the Files section) to create a timeline for forensic analysis. I mentioned at an ECTF meeting recently that I have used this technique to great effect. In one instance during a PCI forensic assessment, I was able to narrow down the window of exposure by demonstrating that shortly after the malware was first installed on the system, AV detected and deleted it. In that instance, sources of information included not only the file system metadata and Event Log records, but also AV logs and even information derived from Dr. Watson logs...combining these allowed us to demonstrate that while the malware had been installed, it did not appear to be running at certain times (this malware was not a DLL injected into another process). The two big take-aways from the articles should be that (a) timeline analysis allows you to view events from a system (or several systems) in temporal proximity to each other, and (b) when additional analysis support is required, you can ship off the necessary information for a timeline to another analyst without worrying about exposing sensitive data.

You can also download free Hakin9 articles here.

Correction
I was taken to task by an anonymous poster recently regarding what I've described as a 128-bit timestamp. Apparently, this isn't a timestamp, but rather a SYSTEMTIME structure. I had searched for this, and even been asked by someone from Microsoft about it, but neither of us was able to find a link. So, thanks to Anonymous for sharing this. Apparently, I also stand corrected on how prevalent this structure is within the various versions of Windows, although that's still something of a mystery.
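For anyone who runs across one of these in binary Registry data, the structure is straightforward to deal with...it's just eight consecutive little-endian WORDs (year, month, day-of-week, day, hour, minute, second, milliseconds). A quick, hedged Perl sketch; the example bytes are made up purely for illustration:

#! c:\perl\bin\perl.exe
# Hedged sketch: unpack a 128-bit (16-byte) SYSTEMTIME structure; $data would
# normally come from a binary Registry value, the bytes below are made up.
use strict;

my $data = "\xd9\x07\x0b\x00\x03\x00\x19\x00\x0a\x00\x0f\x00\x00\x00\x00\x00";
my ($yr, $mon, $dow, $dom, $hr, $min, $sec, $ms) = unpack("v8", $data);
printf "%04d-%02d-%02d %02d:%02d:%02d\n", $yr, $mon, $dom, $hr, $min, $sec;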

Media
Bret and Ovie have a new CyberSpeak podcast posted...check it out.

Wednesday, November 18, 2009

Working with Volume Shadow Copies

To begin with, let me say right up front that most of the information in this post, particularly the latter half, is not something that I developed myself...consider this more of me being a secretary (albeit an unpaid one) for Rob Lee, Troy Larson, and Jimmy Weg...apparently, these guys all knew about what I'm going to present here well before I started down this road.

That being said, away we go...

Based on something I saw in Troy Larson's presentation at DCC2009 regarding Volume Shadow Copies, I thought I'd try something...I wanted to see if I could mount an image of a Vista system from a Vista system, and access the Volume Shadow Copies within the image.

I started with a Vista Home Edition system and an image of that same system on a USB external HDD. I connected the USB external HDD to the live Vista system and mounted the acquired image with each of several tools. I used ImDisk, SmartMount v1.0.5, the 14 day trial copy of Mount Image Pro, and P2 Explorer.

In each instance I mounted the image as a drive letter, verified that I could access the volume, and ran vssadmin list shadows. In none of the instances did vssadmin recognize the mounted drive as a source of Volume Shadow Copies. Well, I take that back...I didn't even get that far with P2 Explorer...it automatically kicked off its MD5 hashing, and once that was done, reported that the image was corrupt.

Now, Troy had mentioned in his testing that only EnCase PDE will mount an image in a manner through which vssadmin can access Volume Shadow Copies within the image. Okay, well, that's not something I have available at this point.

Now, Troy, Jimmy, and Rob mentioned something in one of the lists recently that seemed interesting...basically, to summarize what was said...if you have a VMWare guest of Vista, for example, and you have an acquired raw/dd image of a Vista system, you can generate a .vmdk file for the image and add it to the Vista VM as a hard drive, and then you can 'see' the Volume Shadow Copies in the acquired image.

So I set out to see if this was something I could replicate. I used ProDiscover to create a .vmdk file for the acquired image (again...Vista Home OS), and I opened VMWare Workstation 6.5. I went to the settings for my Vista Ultimate VM and added the new .vmdk file to the properties for the VM as a hard drive. When I booted the Vista VM and logged in, I could see the acquired image right there as E:\. So far, so good.

I then ran vssadmin, like so:

C:\>vssadmin list shadows /for=e:

Lo and behold, I saw a list of Volume Shadow Copies for the E:\ drive! And yes, the entries for "Originating Machine" corresponded to the name of the system from which the image had been acquired.

The next step was to see if I could create symbolic links using mklink...the short version is that I could, but I could not access them, as I kept getting "The parameter is incorrect" messages. Suffice to say, I even created symbolic links for Volume Shadow Copies from the C:\ drive, and got the same message. It turns out that the issue with mklink is that the trailing \ is absolutely required (something that was also mentioned on the SANS blog). So the command looks like:

C:\>mklink /d C:\shadow \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy18\

With this, you can then run tools such as RegRipper against the hive files, or copy out selected files for analysis (or better yet, just run your tools to collect the information), etc. Once you're done, you can remove the symbolic link with:

C:\>rd C:\shadow

Before I go on, let me remind you...you MUST have a \ at the end of the Volume Shadow Copy in the mklink command.
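As an example of running RegRipper's command line tool against the hive files beneath the symbolic link, something along these lines should work (the hive path within the shadow copy and the output file name are just illustrative):

C:\tools>rip.exe -r C:\shadow\Windows\System32\config\SOFTWARE -f software > software_vsc18.txt

The same approach applies to copying out a user's NTUSER.DAT, browser history, or other files of interest from the shadow copy for analysis.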

Moving on, I downloaded George M. Garner, Jr.'s FAU tools and ran the following command:

C:\tools>dd if=\\.\HarddiskVolumeShadowCopy6 of=g:\shadow6.dd --localwrt

HarddiskVolumeShadowCopy6 is one of the identified Volume Shadow Copies from the E:\ drive. I wanted to acquire an image of the Volume Shadow Copy to an attached USB external drive (G:\), hence the use of the --localwrt switch. For about 10 min, I let the command run, and kept running "dir g:\" from another command prompt, and kept seeing that shadow6.dd was 0 bytes. I stopped the imaging (Ctrl-C) and found that the output file was over 2.5GB! So I then re-ran the command, and just let it run...and it will run for a while, as I'm acquiring from a USB ext HDD to a USB ext HDD.

Here's the results of the 'dd' command:

C:\tools>dd if=\\.\HarddiskVolumeShadowCopy6 of=g:\shadow6_2.dd --localwrt
Copying \\.\HarddiskVolumeShadowCopy6 to g:\shadow6_2.dd
Output: g:\shadow6_2.dd
146526953472 bytes
139738+1 records in
139738+1 records out
146526953472 bytes written

Succeeded!

Now that the acquisition is complete, the next step is to verify the acquired image. Opening the image in FTK Imager, I was able to verify that I had a complete, readable file system. At this point, I can do everything with this image that I would with any other acquired image.

Again, let me remind you that this isn't something I came up with...apparently, others have known about this, I'm just writing it down.

Summary
1. Start with a raw/dd image of Vista or above
2. Create a .vmdk file for the image
3. Add the .vmdk as a hard drive to a VM of a like OS (if image is Vista, use a Vista VM)
4. Boot the VM, use vssadmin to locate VSC's on the image drive (or use WMI to get concise info; see the sketch following this list)
5a. Use mklink to 'mount' the VSC's you're interested in, or...
5b. Acquire the full VSC using dd
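For the WMI option mentioned in step 4, a hedged Perl sketch (run from within the VM) might look something like the following; the property names are taken from the Win32_ShadowCopy class documentation linked below:

#! c:\perl\bin\perl.exe
# Hedged sketch: list Volume Shadow Copies via WMI's Win32_ShadowCopy class.
# Run from within the VM after the .vmdk has been added and the VM booted.
use strict;
use Win32::OLE qw(in);

my $wmi = Win32::OLE->GetObject("winmgmts:\\\\.\\root\\cimv2")
          || die "Could not connect to WMI: ".Win32::OLE->LastError()."\n";

my $copies = $wmi->ExecQuery("SELECT * FROM Win32_ShadowCopy");
foreach my $sc (in $copies) {
	print "Device      : ".$sc->{DeviceObject}."\n";      # e.g., \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopyN
	print "Volume      : ".$sc->{VolumeName}."\n";
	print "Originating : ".$sc->{OriginatingMachine}."\n";
	print "Created     : ".$sc->{InstallDate}."\n\n";
}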

Resources
Troy's Vista Forensics Slides (one version, anyway)
Shadow Copies on Wikipedia
Shadow Copy Client
Win32_ShadowCopy WMI Class