Tuesday, January 31, 2017

DFIR Tools, Ransomware Thoughts

I like to keep up on new tools that are discussed in the community, because they offer insight into what other analysts are seeing.  The DFIR community at large isn't really that big on sharing what they've seen or done, and seeing tools being discussed is something of a "peek behind the curtain", as it were.

SRUM
A recent ISC handler diary entry described a tool for parsing System Resource Usage Monitor (SRUM) data.

As soon as I read the diary entry, I went back through some of my recent cases, but wasn't able to find any systems with resource monitoring enabled.
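
The SRUM data itself lives in an ESE database (typically C:\Windows\System32\sru\SRUDB.dat), so if you want to poke at it outside of the tool from the diary entry, the libesedb Python bindings will open it.  The following is just a minimal sketch that lists the tables (the GUID-named tables hold the application and network usage data); it assumes pyesedb is installed and that you're working from a copy of SRUDB.dat exported from the system, not the live file.

    # minimal sketch: list the tables in an exported copy of SRUDB.dat
    # assumes pyesedb (the libesedb Python bindings) is installed
    import pyesedb

    db = pyesedb.file()
    db.open("SRUDB.dat")          # path to the exported copy; example name

    for i in range(db.get_number_of_tables()):
        table = db.get_table(i)
        print("%s (%d records)" % (table.name, table.number_of_records))

    db.close()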

SCCM
The folks at FireEye released a tool for parsing process execution information, in the form of SCCM CCM_RecentlyUsedApps records, from the WMI repository.

I still strongly recommend that some form of process creation monitoring be installed or enabled on endpoints, whether it's Sysmon or something else.
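
If you do have Sysmon in place, getting the process creation events (Event ID 1) back out of an exported copy of the log for analysis is straightforward; here's a rough sketch using Willi Ballenthin's python-evtx.  The file name is just an example of an exported log, and the string match on the EventID element is a quick-and-dirty filter rather than proper XML parsing.

    # rough sketch: print Sysmon process creation (Event ID 1) records
    # from an exported copy of the Sysmon Operational log
    # assumes python-evtx is installed ("pip install python-evtx")
    from Evtx.Evtx import Evtx

    evtx_path = "Microsoft-Windows-Sysmon%4Operational.evtx"   # example export

    with Evtx(evtx_path) as log:
        for record in log.records():
            xml = record.xml()
            # quick-and-dirty filter; a real parser would walk the XML
            if ">1</EventID>" in xml:
                print(xml)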

Ransomware
Something else I've been interested in for quite some time is ransomware.  As an incident responder, I'm most often in contact with organizations that have suffered breaches, and these organizations vary greatly with respect to the maturity of their infosec programs.  However, ransomware is not just an annoyance, part of the price of doing business in a connected, e-commerce world.  In fact, ransomware is the implementation of a business model that monetizes what many organizations view as "low-value targets"; because it's a business model, we can expect to see developments and modifications/tweaks to that model to improve its efficacy over the next year.

Last year, SecureWorks published a couple of posts regarding the Samas ransomware.  One of them illustrates the adversary's life cycle observed across multiple engagements; the other (authored by Kevin Strickland, of the IR team) specifically addresses the evolution of the Samas ransomware binary itself.

The folks at CrowdStrike published a blog post on the topic of ransomware, one that specifically discusses ransomware evolving over time.  A couple of thoughts regarding the post:

First, while there will be an evolution of tactics, some of the current techniques to infect an infrastructure will continue to be used.  Why?  Because they work.  The simple fact is that users will continue to click on things.  Anyone who monitors process creation events sees this on a weekly (daily?) basis, and this will continue to cost organizations money, in lost productivity as the IT staff attempt to recover.

Second, there's the statement, "Samas: This variant targets servers..."; simply put, no, it doesn't.  The Samas ransomware is just ransomware; it encrypts files.  As with Le Chiffre and several other variants of ransomware, there are actual people behind the deployment of the Samas ransomware.  The Samas ransomware has no capability whatsoever to target servers.  The vulnerable systems are targeted by an actual person.

Finally, I do agree with the authors of the post that a new approach is needed; actually, rather than a "new" approach, I'd recommend that organizations implement those basic measures that infosec folks have been talking about for 20+ years.  Make and verify backups, and keep those backups off the network.  Provide user awareness training, and hold folks responsible for that training.  Third parties such as PhishMe will provide you with statistics, and identify those users who continue to click on suspicious attachments.

With respect to ransomware itself, is enough effort being put forth by organizations to develop and track threat intelligence?  CrowdStrike's blog post discusses an evolution of "TTPs", but what are those TTPs?  Ransomware imposes significant costs on (and consequently poses a significant threat to) organizations by monetizing wide swaths of un-/under-protected systems.

Tools

Memory Analysis
When I've had the opportunity to conduct memory analysis, Volatility and bulk_extractor have been invaluable.

Back when I started in the industry, lo these many years ago, 'strings' was pretty much the tool for memory "analysis".  Thanks to Volatility's strings plugin, there's a great deal more you can do: run 'strings' (I use the one from Sysinternals) with the "-o" switch so that each string is preceded by its offset within the sample, and parse out any strings of interest.  From there, the Volatility strings plugin maps those offsets back to the owning process (or kernel) and virtual address, providing significant context.
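
If you want to see what that looks like end-to-end, the sketch below does the equivalent of 'strings -o' in a few lines of Python, emitting each ASCII string with its decimal offset so the output can be fed to the Volatility strings plugin (verify the exact input format the plugin expects against its documentation; the file name here is an example).

    # minimal sketch: emit "offset:string" lines from a memory sample,
    # for use as input to Volatility's strings plugin
    # (ASCII only; Unicode/UTF-16 strings would need separate handling)
    import mmap
    import re
    import sys

    with open("memory.raw", "rb") as f:                     # example file name
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        for match in re.finditer(b"[ -~]{6,}", mm):         # printable ASCII, 6+ chars
            sys.stdout.write("%d:%s\n" % (match.start(), match.group().decode("ascii")))
        mm.close()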

I've run bulk_extractor across memory samples, and been able to get pcap files that contained connections not present in Volatility's netscan plugin output.  That is not to say that one tool is "better" than the other...not at all.  Both tools do something different, and look for data in different ways, so using them in conjunction provides a more comprehensive view of the data.

If you do get a pcap file (from memory or any other data source), be sure to take a look at Lorna's ISC handler diary entry regarding packet analysis; there are some great tips available.  When conducting packet analysis, remember that besides Wireshark, you might also want to take a look at the free version of NetWitness Investigator.
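
For a quick first pass over a pcap carved out of memory, before loading it into Wireshark or NetWitness Investigator, something like the following will list the TCP endpoints seen in the capture.  This is a sketch, assuming dpkt is installed and that the carved pcap uses Ethernet framing; the file name is an example.

    # quick triage sketch: list TCP endpoints seen in a carved pcap
    # assumes dpkt is installed ("pip install dpkt")
    import socket
    import dpkt

    with open("packets.pcap", "rb") as f:                   # example file name
        for ts, buf in dpkt.pcap.Reader(f):
            try:
                eth = dpkt.ethernet.Ethernet(buf)
            except dpkt.dpkt.UnpackError:
                continue                                     # skip malformed frames
            ip = eth.data
            if not isinstance(ip, dpkt.ip.IP):
                continue
            tcp = ip.data
            if isinstance(tcp, dpkt.tcp.TCP):
                print("%s:%d -> %s:%d" % (socket.inet_ntoa(ip.src), tcp.sport,
                                          socket.inet_ntoa(ip.dst), tcp.dport))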

Carving
Like most analysts, I've needed to carve unallocated space (or other data blobs) for various items, including (but not limited to) executable images.  Carving unallocated space, or any data blob (memory dump, pagefile, etc.), for individual records (web history, EVT records, etc.) is pretty straightforward, as in many cases these items fit within a single sector.
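
As an example of how simple record-level carving can be, the Windows 2000/XP/2003-style Event Log (.evt) record format starts with a four-byte length followed by the "LfLe" magic number, so finding candidate records in a blob is just a matter of locating that signature and backing up four bytes.  A rough sketch (the file name and the sanity-check bounds are assumptions):

    # rough sketch: locate candidate EVT (not EVTX) records in a data blob
    # by scanning for the "LfLe" magic number
    import struct

    with open("unalloc.bin", "rb") as f:                     # example file name
        blob = f.read()

    offset = blob.find(b"LfLe")
    while offset != -1:
        start = offset - 4                                   # record length precedes the magic
        if start >= 0:
            length = struct.unpack("<I", blob[start:start + 4])[0]
            if 0x38 <= length <= 0x10000:                    # sanity check on record size
                print("possible EVT record at offset %d, length %d" % (start, length))
        offset = blob.find(b"LfLe", offset + 4)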

Most analysts who've been around for a while are familiar with foremost (possible Windows .exe here) and scalpel as carving solutions.  I did some looking around recently to see if there were any updates on the topic of carving executables, and found Brian Baskin's pe_carve.py tool.  I updated my Python 2.7 installation to version 2.7.13, because the pip package manager became part of the installation package as of version 2.7.9.  Updating the installation so that I could run pe_carve.py was as simple as "pip install bitstring" and "pip install pefile".  That was it.  From there, all I had to do was run Brian's script.  The result was a folder of files with valid PE headers, files that tools such as PEView parsed, but which also contained sections of data that were clearly not part of the original files.  But then, such is the nature of carving files from unallocated space.
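
For anyone curious about the general approach behind a tool like pe_carve.py (this is not Brian's code, just a sketch of the idea): scan the blob for 'MZ', hand each candidate to pefile, and if it parses, write out the data through the end of the last section.  The file names and the cap on candidate size are assumptions, and this brute-force loop will be slow on large blobs.

    # sketch of the general PE-carving approach (not pe_carve.py itself):
    # find "MZ" candidates, validate with pefile, dump through the last section
    import pefile

    MAX_CARVE = 16 * 1024 * 1024                             # arbitrary cap on candidate size

    with open("unalloc.bin", "rb") as f:                     # example file name
        blob = f.read()

    count = 0
    offset = blob.find(b"MZ")
    while offset != -1:
        try:
            pe = pefile.PE(data=blob[offset:offset + MAX_CARVE])
            if pe.sections:
                end = max(s.PointerToRawData + s.SizeOfRawData for s in pe.sections)
                with open("carved_%08x.bin" % offset, "wb") as out:
                    out.write(blob[offset:offset + end])
                count += 1
        except pefile.PEFormatError:
            pass                                             # not a parseable PE header
        offset = blob.find(b"MZ", offset + 2)

    print("carved %d candidate PE files" % count)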

Addendum, 1 Feb: One of the techniques I used to try to optimize analysis was to run 'strings' across the carved PE files, in hopes of locating .pdb strings or other indicators.  Unfortunately, in this case, I had nothing to go on other than file names.  I did find several references to the file names, but those strings appeared in sectors that had been swept up into the carved files and likely had little to do with the original executables.
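
For reference, that check is just a grep across the carved files; something along these lines (a sketch...the directory name is an example) pulls out candidate .pdb paths without needing an external 'strings' binary.

    # sketch: look for .pdb path strings in carved PE files
    import glob
    import re

    for name in glob.glob("carved/*.bin"):                   # example directory
        with open(name, "rb") as f:
            data = f.read()
        for match in re.findall(br"[ -~]{4,}\.pdb", data):
            print("%s: %s" % (name, match.decode("ascii")))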

Also, someone on Twitter recommended FireEye's FLOSS tool, something you'd want to use in addition to 'strings'.

Hindsight
Hindsight, from Obsidian Forensics, is an awesome tool for parsing Chrome browser history.  If you haven't tried it, take a look.  I've used it quite successfully during engagements, most times to get a deeper understanding of a user's browsing activity during a particular time frame.  However, in one instance, I found the "smoking gun" in a ransomware case, where the user specifically used Chrome (while also using IE on a regular basis) to browse to a third-party email portal, download and activate a malicious document, and then infect their system with ransomware.  Doing so bypassed the corporate email portal protections intended specifically to prevent systems from being infected with...well...ransomware.  ;-)

Hindsight has been particularly helpful, in that I've used it to get a small number of items to add to a timeline (via tln.pl/.exe) that provide a great deal of context.
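
When I only need a handful of entries rather than Hindsight's full output, pulling them straight from a copy of the Chrome History SQLite file also works.  The sketch below emits visits in the five-field TLN format; Chrome stores timestamps as microseconds since 1 Jan 1601, hence the conversion, and the host and user values are placeholders you'd fill in for the system being examined.

    # sketch: pull Chrome visits from a copied "History" SQLite file and
    # emit five-field TLN (time|source|host|user|description) lines
    import sqlite3

    WEBKIT_EPOCH_OFFSET = 11644473600                        # seconds between 1601 and 1970

    conn = sqlite3.connect("History")                        # work from a copy, not the live file
    cur = conn.cursor()
    cur.execute("SELECT urls.url, visits.visit_time "
                "FROM visits JOIN urls ON visits.url = urls.id")

    for url, visit_time in cur.fetchall():
        unix_time = (visit_time // 1000000) - WEBKIT_EPOCH_OFFSET
        print("%d|CHROME|HOSTNAME|username|visited %s" % (unix_time, url))

    conn.close()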

Shadow Copies
Volume shadow copies (VSCs), and how DFIR analysts take advantage of them, are something I've always found fascinating.  Something I saw recently on Twitter was a command line that can be used to access files within Volume Shadow Copies on live systems; the included comment was, "Random observation - if you browse c$ on a PC remotely and add @TIMEZONE-Snapshot-Time, you can browse VSS snapshots of a PC."

An image included within the tweet chain/thread showed the command being used (Source: Twitter).

I can't be the only one who finds this fascinating...not so much that it can be done, but more along the lines of, "...is anyone doing this on systems within my infrastructure?"

Now, I haven't gotten this to work on my own system.  I am on a Windows 10 laptop, and can list the available shadow copies, but can't copy files using the above approach.  If anyone has had this work, could you share what you did?  I'd like to test this in a Win7 VM with Sysmon running, but I haven't been able to get it working there, either.
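
For what it's worth, enumerating the snapshots is the easy part; the sketch below just wraps 'vssadmin list shadows' (run from an elevated prompt) and prints the shadow copy device paths and creation times from its output.  The string matching assumes English-language output.  It's the remote @GMT/snapshot path syntax for copying files that I haven't been able to reproduce.

    # sketch: list shadow copy device paths via "vssadmin list shadows"
    # (run from an elevated prompt; parsing assumes English-language output)
    import subprocess

    output = subprocess.check_output(["vssadmin", "list", "shadows"])

    for line in output.decode("ascii", "replace").splitlines():
        line = line.strip()
        if "Shadow Copy Volume:" in line or "creation time:" in line:
            print(line)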

Addendum, 1 Feb: Tun tweeted a link to Dan's blog post that might be helpful with this technique.  Another "Dan" said on Twitter that he wasn't able to get the above technique to work.

As a side note to this topic, remember this blog post?  Pretty sneaky technique for launching malware.  What does that look like, and how do you hunt for it on your network?

Windows Event Logs
I recently ran across a fascinating MSDN article entitled "Recreating Windows Event Log Files"; it kind of makes you wonder: how can this be used by a bad guy, and more importantly, has it been?

Maybe the real question is, are you instrumented to catch this happening on endpoints in your environment?  I did some testing recently, and was simply fascinated with what I saw in the data.

Monday, January 02, 2017

Links, Updates

I thought I'd make my first post of 2017 short and sweet...no frills in this one...

Tools
Volatility 2.6 is available!

Matt Suiche recently tweeted that Hibr2Bin is back to open source!

Here's a very good article on hibernation and page file analysis, which discusses the use of strings and page_brute on the page file, as well as Volatility on the hibernation file.  Do not forget bulk_extractor; I've used this to great effect during several examinations.

Willi's EVTXtract has been updated; this is a great tool for when the Windows Event Log has been cleared, or someone took a live image of a system and one or more of the *.evtx files are being reported as 'corrupt' by your other tools.

FireEye recently published an article describing how they parse the WMI repository for indications of process execution...

Cyber Interviews
Douglas has kept the content coming, so be sure to check out the interviews he's got posted so far...

Web Shells
Web shells are still very much in use by the bad guys.  We've seen web shells used very recently, and Trustwave also published an interesting article on the topic.

Tenable published their own article on hunting for web shells.