Tuesday, August 24, 2010

It's those darned DLLs again...

There's been a lot (and I mean a LOT) of traffic on the interwebs lately regarding a vulnerability associated with how DLLs are loaded on Windows systems. In short, the idea is that due to how applications search for DLLs that aren't specified with an explicit path, and how the current working directory can be set (including by how an application is launched), an application can end up loading a malicious DLL from a non-traditional directory.

First, some background/references:
DLL Search Order Vuln (circa 2000)
MSDN DLL Search Order article
ACROS Security's recent announcement
Recent Mandiant blog post (author: Nick Harbour)
SANS ISC blog post (author: Bojan Zdrnja)
ComputerWorld article
Register article (UK)
Rapid7 blog post (author: HD Moore)
Metasploit blog post (author: HD Moore)
SecDev blog post (author: Thierry Zoller)
MS Research and Defense blog
MS Security Advisory 2269637
MS KB Article 2264107 (includes a Registry change)

This was originally something that was pointed out 10 years ago...the new twist is that it has been found to work on remote resources. MS has issued instructions and guidance on how to properly use the appropriate APIs to ensure the safe loading of libraries/DLLs. However, this, combined with the findings from ACROS and others (see the paper referenced in HD's Rapid7 blog post), pretty much tells us that we have a problem, Houston.

The DLL search order vulnerability is a class of vulnerability that can be implemented/exploited through several avenues. Basically, some mechanism needs to tell an application that it needs to load a DLL (import table, Registry entry, etc.) via an implicit (i.e., not explicitly defined) path. The application then starts looking for the DLL in the current working directory...depending on how the exploit is implemented, this may be the directory where the application resides, or it may be a remote folder accessible via SMB or WebDAV. As there is no validation, the first DLL found with the specified name is loaded. One thing to keep in mind is that the DLL cannot break functionality required by the application, or else the application itself may cease to work properly.
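To make the search-order behavior a bit more concrete, here's a minimal sketch in Python. It's purely illustrative...the directory ordering is a simplification of the classic search order, not a reimplementation of the Windows loader, and the function name and example paths are my own...but it shows how a first-match lookup with no validation plays out:

# Simplified illustration of the classic DLL search order an application may
# fall back on when a DLL is named without an explicit path. This is a sketch
# for demonstration only; the ordering is an approximation, not the actual
# Windows loader behavior.
import os

def find_dll(dll_name, app_dir, cwd):
    system_root = os.environ.get("SystemRoot", r"C:\Windows")
    candidates = [
        app_dir,                              # directory the EXE was loaded from
        cwd,                                  # current working directory (the risk)
        os.path.join(system_root, "system32"),
        system_root,
    ] + os.environ.get("PATH", "").split(os.pathsep)

    for directory in candidates:
        candidate = os.path.join(directory, dll_name)
        if os.path.isfile(candidate):
            return candidate                  # first hit wins...no validation
    return None

# A "ntshrui.dll" planted in the working directory (say, a remote share the
# user browsed to) would be returned before the legitimate copy in system32.
print(find_dll("ntshrui.dll", r"C:\Windows", r"\\server\share"))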

Okay, so what does all of this mean to forensic analysts? Some user clicks on a link that they aren't supposed to, accesses a specific file, and at the same time a malicious DLL gets loaded and the system becomes infected. Most often, forensic analysts end up addressing the aftermath of that initial infection, finding indications of secondary or perhaps tertiary downloads. But what question do customers and regulatory bodies all ask?

How was the system originally infected?

What they're asking us to do is perform a root cause analysis and determine the initial infection vector (IIV).

Some folks may feel that the method by which the DLL is designated to load (import table, file extension association, etc.) is irrelevant and inconsequential, and when it comes to a successful exploit, they'd be right. However, when attempting to determine the root cause, it's very important. Take Nick's post on the Mandiant blog, for example...we (me, and the team I work with) had seen this same issue, where Explorer.exe loaded ntshrui.dll, but not the one in the C:\Windows\system32 directory. Rather, the timeline I put together for the system showed the user logging in and several DLLs being loaded from the system32 directory, and then C:\Windows\ntshrui.dll being loaded. The question then became, why was Explorer.exe loading this DLL? It turned out that when Explorer.exe loads, it checks the Registry for approved shell extensions, and then starts loading those DLLs. A good number of these approved shell extensions are listed in the Registry with explicit paths, pointing directly to the DLL (in most cases, in the system32 directory). However, for some reason, some of the DLLs are listed with implicit paths...just the name of the DLL is provided, and Explorer.exe is left to its own devices to go find that DLL. Ntshrui.dll is one such DLL, and it turns out that in a domain environment, that particular DLL provides functionality that most folks aren't likely to miss if it's absent; therefore, there are no overt changes to the Explorer shell for a user to report.

Yes, I have written a plugin that parses the Registry for approved shell extensions. It's not something that runs quickly, because the process requires that the GUIDs listed as the approved shell extensions then be mapped to the CLSID entries...but it works great. This isn't as nifty as the tools that Nick and HD mention in their blog posts, but it's another avenue to explore...list all of the approved shell extensions with implicit paths, and then see if any of those DLLs are resident in the C:\Windows directory.
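For anyone who wants to poke at a live system rather than an offline hive, here's a rough sketch of the same idea using Python's standard winreg module. To be clear, this is not the plugin described above, and treating "no backslash in the InprocServer32 value" as the test for an implicit path is a simplification:

# Enumerate approved shell extensions, map each GUID to its CLSID
# InprocServer32 entry, and flag any DLL listed with an implicit path
# (no backslash), since those are the ones Explorer has to go find.
import winreg

APPROVED = r"Software\Microsoft\Windows\CurrentVersion\Shell Extensions\Approved"

def approved_extensions():
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, APPROVED) as key:
        i = 0
        while True:
            try:
                guid, desc, _ = winreg.EnumValue(key, i)
            except OSError:
                break
            yield guid, desc
            i += 1

def inproc_server(guid):
    try:
        with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT,
                            r"CLSID\%s\InprocServer32" % guid) as key:
            path, _ = winreg.QueryValueEx(key, "")   # default value
            return path
    except OSError:
        return None

for guid, desc in approved_extensions():
    path = inproc_server(guid)
    if path and "\\" not in path:
        print("%s (%s) -> %s" % (guid, desc, path))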

Again, there's a lot of great information out there and a lot to digest with respect to this issue, but the overarching question that I'm most interested in now is, what does this mean to a forensic analyst, and how can we incorporate a thorough examination for this issue into our processes? Things that forensic analysts would want to look for during an exam would be the MRU lists for various applications...were any files recently accessed or opened from remote resources (thumb drives, remote folders, etc.)? If there's a memory dump available (or the output of handles.exe or listdlls.exe), check to see if there are any DLLs loaded by an application that have unusual paths.
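As a quick-and-dirty example of that last check, here's a small sketch that triages text output from a tool like listdlls.exe. I'm assuming the full DLL path shows up somewhere on each line and simply pulling out anything that looks like a DLL path, so adjust the regex and the "expected" locations to suit your environment and the tool you're using:

# Pull DLL paths out of listdlls.exe-style text output and flag anything
# loaded from outside the usual locations. The output format is assumed,
# not parsed rigorously.
import re
import sys

DLL_PATH = re.compile(r"[A-Za-z]:\\\S+\.dll", re.IGNORECASE)
EXPECTED = (r"c:\windows\system32", r"c:\windows\syswow64", r"c:\program files")

def unusual_dlls(text):
    for match in DLL_PATH.finditer(text):
        path = match.group(0)
        if not path.lower().startswith(EXPECTED):
            yield path

if __name__ == "__main__":
    with open(sys.argv[1], errors="ignore") as f:
        for path in sorted(set(unusual_dlls(f.read()))):
            print(path)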

Some of the issues you're going to run into are that you may not have access to the remote resource, and that while you may see indications of a file being accessed, you won't have any direct indication that a DLL was in the same folder. Or, if the vulnerability was exploited using a local resource, AV applications most likely are not going to flag on the malicious DLL. In some cases, a timeline may provide you with a sequence of events...user launched an application, remote file was accessed, system became infected...but won't show you explicitly that the malicious DLL was loaded. This is why it is so important for forensic analysts and incident responders to be aware of the mechanisms that can cause an application to load a DLL.

One final note...if you're reading MS KB article 2264107, you'll see that at one point, "CWDIllegalInDllSearch" is referred to as a Registry key, then later as a value. Apparently, it's a value. Kind of important, guys. I'm just sayin'...

I've written a plugin that checks for the mentioned value, and updated another plugin that already checks the Image File Execution Options keys for Debugger values to include checking for this new value, as well.
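If you want to eyeball a live system for the same settings (again, this is a sketch, not the offline-hive plugins mentioned above), the Registry locations described in the KB can be queried with Python's winreg module; verify the paths and value semantics against KB 2264107 before relying on this:

# Check for the CWDIllegalInDllSearch value system-wide and per-application
# under Image File Execution Options, and note any Debugger values as well.
import winreg

SESSION_MGR = r"SYSTEM\CurrentControlSet\Control\Session Manager"
IFEO = (r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
        r"\Image File Execution Options")

def query(path, name):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None

print("System-wide CWDIllegalInDllSearch:",
      query(SESSION_MGR, "CWDIllegalInDllSearch"))

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, IFEO) as ifeo:
    i = 0
    while True:
        try:
            app = winreg.EnumKey(ifeo, i)
        except OSError:
            break
        i += 1
        for name in ("CWDIllegalInDllSearch", "Debugger"):
            value = query(IFEO + "\\" + app, name)
            if value is not None:
                print("%s: %s = %s" % (app, name, value))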

Monday, August 16, 2010

Links

Who You Gonna Call?
Tom Liston had a great post to the ISC blog a bit ago entitled Incident Reporting - Liston's "How To" Guide. This humorous bent on the situation simply oozed truthiness. Many moons ago, I was on the receiving end of the "abuse@" emails for a telecomm provider, and marveled at the emails threatening litigation as a result of...and get this...a single ICMP packet received by some home user's firewall. Seriously.

One thing I would add to that, particularly for larger organizations...seek a relationship with professional responders before an incident occurs (because one will...if one hasn't, you simply haven't been paying attention), and work with them to understand how to best help yourself. What really turns most organizations off to responding to incidents in general is the overall impact to their organization...having people on-site, engaging their staff, taking them away from keeping the email server up...and there's the price tag, to boot. But the problem is that in almost every instance, every question asked by the responders is answered with "I don't know"...or with something incorrect that should have been "I don't know". Questions like: what first alerted you to the incident? Where does your sensitive data live in your infrastructure? How many egress points/ISPs do you have? Do you have any firewalls, and if so, can we see the logs? It's questions like these that the responders need answered so that they can scope the engagement (how many people and how much equipment to send) and determine how long it will take to complete. To be honest, we really don't want to engage your staff any more than we have to...we understand that as the engagement goes on, they become less and less interested, and less and less responsive.

Art Imitates Life
My wife and I watched Extraordinary Measures recently...we both like that feel-good, "based on a true story" kind of movie. After watching the movie, it occurred to me that there are a lot of parallels between what was portrayed in the movie and our industry...specifically, that there are a lot of tools and techniques that are discussed, but never see common usage. Through sites such as eEvidence, I've read some very academic papers that have described a search technique or some other apparently very useful technique (or tool), but we don't see these things come into common usage within the industry.

IIVs
H-Online has a nice CSI-style write up on investigating a "PDF timebomb". I know a lot of folks don't necessarily want to look at (I hesitate to say "read") some really dry technical stuff, particularly when it doesn't really talk about how a tool is used. A lot of available tools have a lot of nice features, but too many times I think we don't realize that espousing the virtues and capabilities of a tool does little for folks who don't see how they'd actually use it...after all, we may have made a connection that others simply don't see right away. So things like this are very beneficial. Matt Shannon cracked the code on this and has published Mission Guides.

So what does this have to do with initial infection vectors, or IIVs? Well, malware and bad guys get on a system somehow, and H-Online's write-up does a good job of representing a "typical" case. I think that for the most part, many of us can find the malware on a system, but finding how it got there in the first place is still something of a mystery. After all, there are a number of ways that this can happen...browser drive-by, infected attachment through email, etc. We also have to consider the propagation mechanism employed by the malware...Ilomo uses psexec.exe and the WinInet API to move about the infrastructure. So, given the possibilities, how do we go about determining how the malware got on the system?

Book News - Translations
I received a nice package in the mail recently...apparently, WFA 2/e has been translated into French...the publisher was nice enough to send me some complimentary copies! Very cool! I also found out that I will be seeing copies of WFA 2/e in Korean sometime after the first of the year, and in a couple of weeks, I should be seeing WFA 1/e in Chinese!

Malware Communications
Nick posted recently on different means by which malware will communicate off of Windows systems, from a malware RE perspective. This is an interesting post...like I said, it's from an RE perspective. Nick's obviously very knowledgeable, and I think that what he talks about is very important for folks to understand...not just malware folks, but IR and even disk forensics types. I've talked before (in this blog and in my books) about the artifacts left behind through the use of WinInet and URLMON, and there are even ways to prevent these artifacts from appearing on disk. The use of COM to control IE for malicious purposes goes back (as far as I'm aware) to BlackHat 2002 when Roelof and Haroon talked about Setiri.

Evidence Cleaners
Speaking of disk analysis, Matt has an excellent post on the SANS Forensic blog that talks about using "evidence cleaners" as a means for identifying interesting artifacts. I think that this is a great way to go about identifying interesting artifacts of various tools...after all, if someone wants to "clean" it, it's probably interesting, right? I've said for years that the best places to go to find malware artifacts are (a) AV write-ups and (b) MS's own AutoRuns tool (which now works when run against offline hives).

As a side note, I had a case where an older version of WindowWasher had been run against a system a short time before acquisition, and was able to recover quite a bit of the deleted Registry information using regslack.

Tuesday, August 03, 2010

Artifacts: Direct, indirect

In reviewing some of the available materials on Stuxnet, I've started to see some things that got me thinking. Actually, what really got me thinking was what I wasn't seeing...

To begin with, as I mentioned here, the .sys files associated with Stuxnet are apparently signed device drivers. So, I see this, and my small military mind starts churning...device drivers usually have entries in the Services keys. Thanks to Stefan, that was confirmed here and here.

Now, the interesting thing about the ThreatExpert post is the mention of the Enum\Root\LEGACY_* keys, which brings us to the point of this post. The creation of these keys is not a direct result of the malware infection, but rather an artifact created by the operating system as a result of the execution of the malware through this particular means.

We've seen this before, with the MUICache key, way back in 2005. AV vendors were stating that malware created entries in this key when executed, when in fact, this artifact was an indirect result of how the malware was launched in the testing environment. Some interesting things about the LEGACY_* keys are that:

1. The LastWrite time of the keys appears to closely correlate with the first time the malicious service was executed. I (and others, I can't take all of the credit here) have seen correlations between when the file was created on the system, Event Log entries showing that the service started (event ID 7035), and the LastWrite time of the LEGACY_* key for the service. This information can be very helpful in establishing a timeline (see the sketch following this list), as well as indicating whether some sort of timestomping was used with respect to the malware file(s).

2. The LEGACY_* key for the service doesn't appear to be deleted if the service itself is removed.
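As a practical example of point 1, here's a minimal sketch (assuming Willi Ballenthin's third-party python-registry module) that pulls the LastWrite times for the LEGACY_* keys from an offline SYSTEM hive so they can be dropped into a timeline alongside file MAC times and event ID 7035 entries:

# List LEGACY_* keys and their LastWrite times from an offline SYSTEM hive.
from Registry import Registry

def legacy_keys(system_hive_path):
    reg = Registry.Registry(system_hive_path)
    current = reg.open("Select").value("Current").value()
    root = reg.open("ControlSet%03d\\Enum\\Root" % current)
    for subkey in root.subkeys():
        if subkey.name().startswith("LEGACY_"):
            # LastWrite time ~ first run of the service (see point 1 above)
            yield subkey.name(), subkey.timestamp()

if __name__ == "__main__":
    import sys
    for name, lastwrite in sorted(legacy_keys(sys.argv[1]), key=lambda t: t[1]):
        print(lastwrite, name)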

So, in short, what we have is:

Direct - Artifacts created by the compromise/infection/installation process, such as files/Registry keys being created, deleted, or modified.

Indirect - Ancillary (secondary, tertiary) artifacts created by the operating system as a result of the installation/infection process running in the environment.

So, is this distinction important, and if so, how? How does it affect what I do?

Well, consider this...someone gets into a system, or a system becomes infected with malware. There are things that can be done to hide the presence of the malware, the intrusion, or the overall infection process itself. This is particularly an issue if you consider Least Frequency of Occurrence (LFO) analysis (shoutz to Pete Silberman!), as we aren't looking for the needle in the haystack, we're looking for the hay that shouldn't be in the haystack. So, hide all of the direct artifacts...but how do you go about hiding all of the indirect artifacts, as well?

So, by example...someone throws a stone into a pool, and even if I don't see the stone being thrown, I know that something happened, because I see the ripples in the water. I can then look for a stone. So let's say that someone wants to be tricky, and instead of throwing a stone, throws a glass bead, something that is incredibly hard to see at the bottom of the pool. Well, I still know that something happened, because I saw the ripples. Even if they throw a stone attached to a string, and remove the stone, I still know something happened, thanks to the ripples...and can act/respond accordingly.

Also, indirect artifacts are another example of how Locard's Exchange Principle applies to what we do. Like Jesse said, malware wants to run and wants to remain persistent; in order to do so, it has to come in contact with the operating system, producing both direct and indirect artifacts. We may not find the direct artifacts right away (i.e., recent scans of acquired images using up-to-date AV do not find Ilomo...), but finding the indirect artifacts may lead us to the direct ones.

Another example of an indirect artifact is web browsing history and Temporary Internet Files associated with the "Default User" or LocalService accounts; these indicate the use of the WinInet API for off-system communications, but through an account with System-level privileges.
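A quick way to triage for that is simply to look for Temporary Internet Files and history content under those profiles. The sketch below assumes XP-era profile paths (adjust for the Vista/Win7 layout) and just counts what it finds:

# Look for WinInet artifacts (TIF/history) under service and Default User
# profiles; XP-era paths are assumed here.
import os

PROFILES = [r"C:\Documents and Settings\Default User",
            r"C:\Documents and Settings\LocalService",
            r"C:\Documents and Settings\NetworkService"]
SUBDIRS = [r"Local Settings\Temporary Internet Files",
           r"Local Settings\History"]

for profile in PROFILES:
    for sub in SUBDIRS:
        path = os.path.join(profile, sub)
        if os.path.isdir(path):
            count = sum(len(files) for _, _, files in os.walk(path))
            print("%s: %d files" % (path, count))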

Thoughts?