Tuesday, December 30, 2014

What It Looks Like: Malware Infection via a Weaponized Document


Okay...I lied.  This is my last blog post of 2014.

A couple of weeks ago, Ronnie posted some analysis of a weaponized document to the PhishMe.com blog.  There is some interesting information in the post, but I commented on Twitter that there was very little post-mortem analysis of the affected system.  In response, Ronnie sent me a copy of the document.  So, I dusted off a Windows 7 VM and took a shot at infecting it by opening the document.

Analysis Platform
32-bit Windows 7 Ultimate SP1, MS Office 2010, with Sysmon installed - VM running in VirtualBox.  As with previous dynamic analysis I've performed, Sysmon provides not only placeholders to look for, but also insight into what can be trapped via a process creation monitoring tool.

Process
Run Windows Updates, reboot to a clean clone of the VM, and double-click the document (sitting on the user profile desktop).  The user profile used to access the document had Admin-level privileges, but UAC had not been disabled.  A few moments after the document was launched, the application (MS Word) was closed, and the VM was shut down cleanly.

I purposely did not run a packet capture tool, as that was something that had been done already.

Analysis
Initial attempts to view the file in a hex editor caused MSE to alert on TrojanDownloader:O97M/Tarbir.  After opening the file, waiting, and shutting down the VM cleanly, I created a timeline using file system, WEVTX, Prefetch, and Registry metadata.  I also created a separate micro-timeline from the USN change journal - I didn't want to overpopulate my main timeline and make it more difficult to analyze.
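As an aside, nothing fancy is needed to manage multiple timeline bodies; the sketch below (Python) is one way I might sort one or more events files into a single timeline, or keep the USN data in its own micro-timeline.  It assumes pipe-delimited, five-field TLN lines with the time stored as a Unix epoch; adjust as needed for whatever format your tools produce.

# Minimal sketch: sort one or more TLN-format events files into a timeline.
# Assumes five pipe-delimited fields per line: epoch_time|source|system|user|description.
import sys

def read_tln(path):
    events = []
    with open(path, "r", errors="replace") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            fields = line.split("|", 4)
            try:
                ts = int(fields[0])
            except (ValueError, IndexError):
                continue  # skip malformed lines
            events.append((ts, line))
    return events

if __name__ == "__main__":
    events = []
    for path in sys.argv[1:]:
        events.extend(read_tln(path))
    for ts, line in sorted(events):  # oldest events first
        print(line)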

Also, when I extracted the file from the archive that I received, I named it "file.docx", based on the contents (the structure was not the older-style OLE format).  When I double-clicked the file, MS Word opened but complained that there was something wrong with the file.  I renamed the file to "file.doc", and everything ran in accordance with Ronnie's blog post.

Findings
As expected, all of the files that Ronnie mentioned were created within the VM, in the user's AppData\Local\Temp folder.  Also as expected, the timeline I created was populated by artifacts of the user's access to the file.  Since the "Enable Editing" button had to be clicked in order to enable macros (and run the embedded code), the TrustRecords key was populated with a reference to the file.  Keep in mind that many of the artifacts that were created (JumpList entries, Registry values, etc.) will persist well beyond the removal/deletion of the file and other artifacts.

While I did not capture any of the off-system communication (i.e., download of the malware), Sysmon provided some pretty interesting information.  I looked up the domain in Ronnie's post, and that gave me the IP address "50.63.213[.]1".  I then searched for that IP address in my timeline, and found one entry, from Sysmon...PowerShell had reached off the system (Sysmon/3 event) to that IP address (which itself resolves to "p3nlhg346c1346.shr.prod.phx3.secureserver[.]net"), on port 80.  Additional artifacts of PowerShell's off-system communications were the HKLM\Software\Microsoft\Tracing\powershell_RASMANCS and HKLM\Software\Microsoft\Tracing\powershell_RASAPI32 keys being created.
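If you want to go looking for the same thing in your own Sysmon data, something like the following Python sketch will pull network connection events (Sysmon event ID 3) that reference a given destination IP.  It assumes Willi Ballenthin's python-evtx module is installed; the log path and IP address are simply the values from this test.

# Minimal sketch: find Sysmon network connection events (EventID 3) that reference
# a particular destination IP.  Requires the python-evtx module; the log path and
# IP address below are examples from this test.
import re
from Evtx.Evtx import Evtx

LOG = r"C:\Windows\System32\winevt\Logs\Microsoft-Windows-Sysmon%4Operational.evtx"
DEST_IP = "50.63.213.1"
EVENT_ID_3 = re.compile(r"<EventID[^>]*>3</EventID>")

with Evtx(LOG) as log:
    for record in log.records():
        xml = record.xml()
        if EVENT_ID_3.search(xml) and DEST_IP in xml:
            print(xml)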

Per Ronnie's blog post, the file "444.exe" is downloaded.  The file is deleted after being copied to "msgss.exe".  The strings within this file (msgss.exe) indicate that it is a Borland Delphi file, and contains the strings "GYRATOR" and "TANNERYWHISTLE" (refer to the icon used for the file).  The PE compile time for the file is 19 Jun 1992 22:22:17 UTC.  The VirusTotal analysis of this file (originally uploaded to VT on 12 Dec) can be found here.

Persistence Mechanism:  User's Run key; the value "OutLook Express" was added to the key, pointing to the msgss.exe file.
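A quick way to eyeball this sort of persistence on a live system is to simply dump the user's Run key values; the Python sketch below (Windows-only, run within the profile being examined) would have shown the "OutLook Express" value pointing to msgss.exe.  On an acquired image, you'd run the appropriate RegRipper plugin against the user's NTUSER.DAT hive instead.

# Minimal sketch: list the values under the current user's Run key on a live system.
# A persistence entry like the "OutLook Express" value described above would show up here.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    num_values = winreg.QueryInfoKey(key)[1]  # (subkeys, values, last_write) -> value count
    for i in range(num_values):
        name, data, _type = winreg.EnumValue(key, i)
        print(f"{name} -> {data}")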

An interesting artifact of the infection appeared at the same time that the msgss.exe file was created on the system and the Run key value was added for persistence: the key "HKCU\full" was created.  The key doesn't have any values...it's just the key.

To extend Corey's discussion of Prefetch file contents just a bit, the Prefetch file for WinWord.exe included references to RASMAN.DLL and RASAPI32.DLL, as well as other networking DLLs (WS2_32.DLL, WINHTTP.DLL).

Given the off-system communications, I located and extracted the WebCacheV01.dat file that contains the IE history for the user, and opened it using ESEDatabaseView.  I found no indication of the host being contacted, via either IP address or name.  Additional testing is required, but it would appear that the System.Net.WebClient object used by PowerShell does not leave traces in the IE history (whereas use of the WinInet API for off-system communications would).  If that's the case, then from an infrastructure perspective, we need to find another means of detecting this sort of activity, such as through process creation monitoring, the use of web proxies, etc.

Take-Aways
1. Threat intel cannot be based on analysis in isolation.

Okay, I understand that this is just a single document and a single infection, and does not specifically represent an APT-style threat, but the point here is that you can't develop "threat intelligence" by analyzing malware in isolation.  In order to truly develop "threat intelligence", you have to look at how the adversary operates within the entire infrastructure ecosystem; this includes the network and memory, as well as the host.

I'm also aware that "APT != malware", and that's absolutely correct.  The findings I've presented here are more indicators than intel, but it should be easy to see not just the value of the analysis, but also how it can be extended.  For example, this analysis might provide the basis for determining how an adversary initially gained access to an infrastructure, i.e., the initial infection vector (IIV).  Unfortunately, due to a number of variables, the IIV is often overlooked, or assumed.  When the IIV is assumed, it's often incorrect.  Determining the IIV can show where modifications can be made within the infrastructure in order to improve prevention, detection, and response.

Looking specifically at the analysis of this weaponized document, Ronnie provided some insight, which I was then able to expand upon, something anyone could have done.  The focus of my analysis was to look at how the host system was impacted by this malware; I can go back and redo the analysis (re-clone the VM), run the test again, this time pausing the VM and capturing memory for analysis via Volatility, and extend the understanding of the impact of this document and malware even further.  Even with just the timeline, the available indicators have been expanded beyond the domain and hash (SHA-256) that were available as of 15 Dec.  By incorporating this analysis, we've effectively moved up the Pyramid of Pain, which is something we should be striving to do.  Also, be sure to check out Aaron's Value of Indicators blog post.

2.  Host analysis significantly extends response capability.

The one big caveat from this analysis is the time delta between "infection" and "response"; due to the nature of the testing, that delta is minimized, and for most environments, is probably unrealistic.  A heavily-used system will likely not have the same wealth of data available, and most systems will very likely not have process creation monitoring (Sysmon).

However, what this analysis does demonstrate is what is available to the responder should the incident be discovered weeks or months after the initial infection.  One of the biggest misconceptions in incident response is that host-based analysis is expensive and not worth the effort, and that it's better to just burn the affected systems down and rebuild them.  What this analysis demonstrates is that through host analysis, we can find artifacts that persist beyond the deletion/removal of various aspects of the infection.  For example, the file 444.exe was deleted, but the AppCompatCache and Sysmon data provided indications that the file had been executed on the system (the USN change journal data illustrated the creation and subsequent deletion of the file).  And that analysis doesn't have to be expensive, time consuming, or difficult...in fact, it's pretty straightforward and simple, and it provides a wealth of indicators that can be used to scope an incident, even weeks after the initial infection occurred.

3.  Process creation monitoring radically facilitates incident response.

I used Sysmon in this test, which is a pretty good analog for a more comprehensive approach, such as Sysmon + Splunk, or Carbon Black.  Monitoring process creation lets us see command line arguments, parent processes, etc.  By analyzing this sort of activity, we can develop prevention and detection mechanisms.

This also shows us how incident response can be facilitated by the availability of this information.  Ever since my early days of performing IR, I've been asked what, in a perfect world, I'd want to have available to me, and it's always come back to a record of the processes that had been run, as well as the command line options used.  Having this information available in a centralized location would obviate the need to go host-to-host in order to scope the incident, and could be initially facilitated by running searches of the database.
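As an illustration only...the schema below is a made-up stand-in for whatever back-end (Splunk, Carbon Black, etc.) actually holds the process creation events...this is the kind of centralized search I mean:

# Minimal sketch: search a centralized store of process creation events for
# command lines of interest.  The sqlite file and its schema (host, timestamp,
# parent, image, cmdline) are hypothetical; the point is the search, not the store.
import sqlite3

conn = sqlite3.connect("process_events.db")
patterns = [
    "%at \\\\%",                    # remote Scheduled Task registration via at.exe
    "%powershell%downloadfile%",    # PowerShell download cradle
    "%copy %\\\\%c$\\%",            # copy to a remote admin share
]

for pat in patterns:
    rows = conn.execute(
        "SELECT host, timestamp, parent, cmdline FROM events WHERE cmdline LIKE ?",
        (pat,),
    )
    for host, ts, parent, cmdline in rows:
        print(f"{ts}  {host}  {parent} -> {cmdline}")

conn.close()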

Resources
Lenny Zeltser's Analyzing Malicious Documents Cheat Sheet

Monday, December 29, 2014

Final Post of 2014

As 2014 draws to a close, I thought I'd finish off the year with one last blog post.  In part, I'd like to thank some folks for their contributions over the past year, and to look forward to what they (and others) may have in store for the coming year.

I wanted to thank two people in particular for their contributions to the DFIR field during 2014.  Both have exemplified the best in information sharing, not just in providing technical content but also in providing content that pushes the field toward better analysis processes.

Corey's most recent blog post continues his research into process hollowing, incorporating what he's found with respect to the Poweliks malware.  If you haven't taken a good look at his blog post and incorporated this into your analysis process yet, you should strongly consider doing so very soon.

Maria's post on time stomping was, as always, very insightful.  Maria doesn't blog often but when she does, there's always some great content.  I was glad to see her extend the rudimentary testing I'd done and blogged about, particularly because very recently, I'd seen an example of what she'd blogged about during an engagement I was working on.

Maria's also been getting a lot of mileage out of her Google cookies presentation, which I saw at the OSDFCon this year.  If you haven't looked at the content of her presentation, you really should.  In the words of Hamlet, "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy", and I'm sure Maria was saying, "There are more things in a Windows image than are dreamt of in your timeline."

Tying both Corey and Maria's contributions together, I was doing some analysis recently regarding a particular malware variant that wrote its files to one location, copied them to another, time stomped those files, and injected itself into the svchost.exe process.  This variant made use of the malware's keystroke logging capability, and the keystroke log file was re-stomped after each successive update.  It was kind of like an early nerd Christmas gift to see what two well respected members of the community had talked about right there, in the wild.  In the words of one of my favorite characters, "Fascinating."

Volatility
The year would not be complete without a huge THANK YOU to the Volatility folks for all they do, from the framework, to the book, to the training class.  2014 saw me not only attending the course, but also receiving a copy of the book.

Shellbags
On the whole, it might be fair to refer to 2014 (maybe just the latter half) as the "Year of the Shellbag Research".  Eric Zimmerman (Shellbag Explorer), Willi Ballenthin, Dan Pullega, and Joachim Metz should be recognized for the work they've been putting into analyzing and documenting shellbags.  To learn more about what Eric and others have done to further the parsing and analysis of shellbags, be sure to check out David Cowen's Forensic Lunch podcasts (28 Nov, 12 Dec).

TriForce
Speaking of David Cowen, I still think that TriForce is a great example of the outcome of research in the field of forensic analysis.  Seriously.  I don't always use things like the USN change journal in my analysis...sometimes, quite simply, it's not applicable...but when I have incorporated it into a timeline (by choice...), the data has proved to be extremely valuable and illuminating.

There are many others who have made significant contributions to the DFIR field over the past year, and I'm sure I'm not going to get to every one of them, but here are a few...

Ken Johnson has updated his file history research.
Basis Technology - Autopsy 3.1
Didier Stevens - FileScanner
Foxton Software - Free Tools
James Habben - Firefox cache and index parsers

Lateral Movement
Part of what I do puts me in the position of tracking a bad guy's lateral movement between systems, so I'm always interested in seeing what other analysts may be seeing.  I ran across a couple of posts on the RSA blog that discussed confirming Remote Desktop Connections (part 1, part 2).  I'm glad to see someone use RegRipper, but I was more than a little surprised that other artifacts associated with the use of RDP (either to or from a system) weren't mentioned, such as RemoteConnectionManager Windows Event Log records and JumpLists (as described in this July, 2013 blog post).

One of the things that I have found...interesting...over time is the number of new sources of artifacts that get added to the Windows operating system with each new iteration.  It's pretty fascinating, really, and something that DFIR analysts should really take advantage of, particularly when we no longer have to rely on a single artifact (a login record in the Security Event Log) as an indicator, but can instead look to clusters of artifacts that serve to provide an indication of activity.  This is particularly valuable when some of the artifacts within the cluster are not available...the remaining artifacts still serve as reliable indicators.

WRF Contest
Finally, as the year draws to a close, here's an update on the WRF 2/e Contest.  To date (in over 2 months) there has been only a single submission.  I had hoped that the contest would be much better received (no coding required), but alas, it was not to be.

Monday, December 08, 2014

10 Years of Blogging

That's right...my first blog post was ten years ago today.  Wow.

Over the past ten years, some things have changed, and others haven't.

As the year comes to a close, don't forget about the WRF 2/e Contest.

Thursday, October 23, 2014

WRF 2/e Contest

I recently posted that Syngress has agreed to publish a second edition of Windows Registry Forensics, and in that post, I mentioned that I wanted to provide those in the community with an opportunity to have input into the content of the book prior to it being published.  I know that it's only been a couple of days since the post was published, but historically, requests like these haven't really panned out.  As such, I wanted to take something of a different approach...at the recommendation of a friend, and stealing a page from the Volatility folks, I'm starting a contest for submissions of "case studies" to appear in the second edition.

Contest
So what I'm looking for is submissions of detailed case studies (or "write-ups", "war stories", etc...I don't want to get tangled up on the terminology) of your triumphs and innovations in Registry analysis.

Please read through this entire blog post before sending in a submission.
What I don't want is case information, user and system names, etc.  Please provide enough detail in your write-up to give context, but not so much that case information is exposed and privacy is violated.

For the moment, I plan to accept submissions until midnight, 31 Dec 2014.  I may extend that in the future...it really depends on how the schedule for the book writing works out, how far I get, how many submissions come in, etc.  The really good submissions will be included in the book, and the author of the submission will receive a signed copy of the book.  And yes, when I say "signed", I mean by me.  That also means that your submission needs to include a name and email address, so that I can reach back to you, if your submission is accepted, and get your mailing address.

I'm looking for the top 10 or so submissions; however, if there are more really good ones than just ten, I'll consider adding them, as well.

Consideration will be given to...
Those submissions that require the least effort to incorporate into the book, with respect to spelling and grammar.  I'm all about cut-and-paste, but I don't want to have the copy editor come back with more modifications and edits than there is original text.  I can take care of incorporating the submission into the book in the correct format, but I don't want to have to spend a great deal of time correcting spelling and grammar.

Those submissions that are more complete and thorough, illustrating the overall process.  For example, "...I looked at this value..." or "...I ran RegRipper..." isn't nearly as useful as correlating multiple Registry keys and values, even with other data sources (i.e., Windows Event Logs, etc.).

Those submissions that include more than just, "...I used RegRipper..." or "...I used auto_rip...".  Submissions should talk about how tools (any tools, not just the ones mentioned...) were used.

Those submissions that include process, data, results, RR plugins used, created, or modified, etc.

Note that if you include the newly created or modified plugin along with your submission, the plugin will be added to the RR distribution.

Submissions
Send submissions to me as text.  Use "WRF 2/e contest submission" as the subject line.  If you have images (screen captures, etc.) that you'd like to share, reference the image in the text ("insert figure 1 here"), and provide the image in TIFF format.

If you have multiple files (the write-up, a plugin, images, etc.), just zip them up.

Please include your name along with the information.  If you do not want your name included in the content when it's added to the book, please specify as such...however, anonymous submissions will not be considered, as I may want to reach back to you and ask a clarifying question (or two).  So, please also be willing to answer questions!  ;-)

Please let me know if it would be okay to post the submission to this blog, and if so, should your name be included (or not).

If you have any questions about this contest, please feel free to ask.

Wednesday, October 22, 2014

RegRipper v2.8 is now on GitHub

RegRipper v2.8 is now available on GitHub.

From this point forward, this repository should be considered THE repository for RegRipper version 2.8.  If you want a copy of RegRipper, just click the "Download ZIP" button on the right of the browser window, and save the file...doing so, you'll have the latest-and-greatest set of plugins available.

If you have any questions, please feel free to contact me.

Tuesday, October 21, 2014

Windows Event Logs

Dan recently tweeted:

Most complete forensics-focused Event Log write-ups?

I have no idea what that means.  I'm going to assume that what Dan's looking for is information regarding Event Logs records that have been found useful or valuable to forensic analysts, or potentially could be.

EVT vs EVTX
Windows XP is no longer supported by Microsoft, but there are still XP and 2003 systems out there, and as such, some of us are still going to need to know the difference between Event Logs (XP, 2003), and Windows Event Logs (Vista+).

Besides the binary differences in the records and Event Log files themselves, on XP/2003 there were three main Event Log files: System, Application, and Security.  On my Windows 7 system, a 'dir' of the winevt\Logs folder reports 143 files.  So, there is a LOT of information being recorded by default on a Windows 7 system; while not all of it may be useful to you, there is a great deal of information that can be extracted from the logs when they're used properly.

Wevtx.bat
When I released Windows Forensic Analysis Toolkit 4/e, one of the things included in the additional materials was a batch file, wevtx.bat.  The batch file uses LogParser to parse a directory full of .evtx files, and then converts those entries into TLN format for inclusion in a timeline.  The tool evtxparse.exe, used by the batch file, makes use of a mapping file (i.e., eventmap.txt) to map event source/ID pairs to an artifact category tag.  As such, when the entry is written to a timeline, records such as "Microsoft-Windows-Security-Auditing/4624" are prepended with an appropriate tag (i.e., "[Logon]"), based on the artifact category.
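The mapping idea itself is simple enough to illustrate in a few lines of Python; note that the mapping-file format shown below (source/ID pair, a colon, then the tag, with '#' comments) is just for illustration and isn't necessarily the exact format eventmap.txt uses.

# Minimal sketch of the event mapping idea: map event source/ID pairs to artifact
# category tags, and prepend the tag to the description as the entry is written to
# a timeline.  The mapping-file format here is illustrative.
def load_eventmap(path):
    mapping = {}
    with open(path, "r") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            pair, tag = line.split(":", 1)
            mapping[pair.strip()] = tag.strip()
    return mapping

def tag_description(source, event_id, description, mapping):
    tag = mapping.get(f"{source}/{event_id}")
    return f"[{tag}] {description}" if tag else description

# Example:
#   emap = load_eventmap("eventmap.txt")
#   tag_description("Microsoft-Windows-Security-Auditing", 4624, "user logon ...", emap)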

I really love this tool!  What I like about it is that it's easy to update (eventmap.txt is just a text file), I can add comments to it to show the source of the information I used to map an event record to something specific, and it acts as a fantastic little repository for all of my past experiences.  Not only is it a great repository, but it's incorporated right into the tools that I use on just about every engagement.

Records
Here are some of the event source/ID pairs that I've found to be useful during investigations, for such things as malware detection, determining the window of compromise, etc.  I'll say up front that these records are not infallible, and may not have extremely high fidelity (some do, others don't...), but they've worked quite well for me at one time or another, so I'll share them here, along with a small filtering sketch after the list.

Microsoft-Windows-DNS-Client/1014 – DNS name resolution timeout; I've used this one more than once to help demonstrate that malware was on a system, even in the face of anti-forensics techniques (time stomping the malware files, deleting the malware files, etc.).  It's not a 100% infallible indicator, but it's worked for me more than once.  What has also helped is when this event record was seen; in one timeline, I could see that it occurred shortly after a user logged into a laptop, and before the user connected the system to a WAP.  This helped me narrow down the persistence mechanism for the malware.

Microsoft-Windows-Security-Auditing/4720 - user account created; because the bad guys do this from time to time.

McLogEvent/257 – McAfee malware detection; McAfee AV may detect malware behaviors (e.g., running from a Temp folder) without actually detecting the EXE itself.  This can be very valuable in helping you determine how malware got onto a system.  Also, the AV product may be configured to warn only and take no action...so, correlate the event records (UTC) to the entries in the McAfee logs (local system time).

Microsoft-Windows-Windows Defender/3004 – Windows Defender malware detection

Service Control Manager/7045 – A service was installed on the system

Service Control Manager/7030 – A service is configured to interact with the desktop

Microsoft-Windows-TaskScheduler/106 - New Scheduled Task registration
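Here's the filtering sketch I mentioned above.  Given event records parsed into (source, event ID, time, description) tuples by whatever means, it simply flags the pairs listed above; nothing more than an illustration of how easy it is to fold these into your own tools.

# Minimal sketch: flag the notable source/ID pairs listed above in a set of parsed
# event records.  Each record is assumed to be a (source, event_id, time, description)
# tuple produced by whatever parser is in use.
NOTABLE = {
    ("Microsoft-Windows-DNS-Client", 1014): "DNS name resolution timeout",
    ("Microsoft-Windows-Security-Auditing", 4720): "User account created",
    ("McLogEvent", 257): "McAfee malware detection",
    ("Microsoft-Windows-Windows Defender", 3004): "Windows Defender malware detection",
    ("Service Control Manager", 7045): "Service installed",
    ("Service Control Manager", 7030): "Service configured to interact with the desktop",
    ("Microsoft-Windows-TaskScheduler", 106): "Scheduled task registered",
}

def flag_notable(records):
    for source, event_id, when, desc in records:
        note = NOTABLE.get((source, event_id))
        if note:
            print(f"{when}  {source}/{event_id}  [{note}]  {desc}")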

Beyond individual event records (source/ID pairs), one of the aspects of the newer versions of Windows (in particular, Windows 7) is that there are a lot of events that are being recorded by default, across multiple Event Log files.  What I mean is that when some events occur, multiple event records are recorded, often across different Event Log files.  For example, when a user logs into a system at the console, there will be an event recorded in the Security Event Log, a couple in the Microsoft-Windows-TerminalServices-LocalSessionManager/Operational.evtx log, and a couple of events will also be recorded in the Microsoft-Windows-TaskScheduler/Operational.evtx log.  Alone, each of these individual events may get little attention from an analyst, but when placed together in a timeline, they leave an indelible mark indicating that a user logged into the system.

Now, what's really great about this is that some of the Event Logs "roll over" faster than others.  As such, some of the source/ID pairs that are part of an indicator cluster may have been expired from their respective Event Logs.  However, the remaining source/ID pairs in the cluster will still provide a very good indicator that the event in question took place.  This is particularly useful for infrequent events, and I've used this information more than once to demonstrate repeated activity going back weeks and even months prior to what was thought to be the date of interest.
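This is also easy to check for in a scripted fashion.  The sketch below is one way to think about it: given a cluster definition (the set of source/ID pairs that normally appear together) and a time-sorted list of events, it reports how many members of the cluster actually show up within a small window...a partial hit can still be a very good indicator.

# Minimal sketch: given a cluster definition (source/ID pairs that normally occur
# together) and a time-sorted list of (epoch_time, source, event_id) tuples, report
# how many members of the cluster appear within a given window (in seconds).
def cluster_hits(events, cluster, window=5):
    hits = []
    for i, (t0, src0, id0) in enumerate(events):
        if (src0, id0) not in cluster:
            continue
        seen = {(src0, id0)}
        for t1, src1, id1 in events[i + 1:]:
            if t1 - t0 > window:
                break
            if (src1, id1) in cluster:
                seen.add((src1, id1))
        hits.append((t0, len(seen), len(cluster)))  # (start time, members seen, members defined)
    return hits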

Anti-Forensics
Event auditing is one of those things that just happens in the background on Windows systems.  This is great, because sometimes Event Log records can help us determine if anti-forensics techniques have been employed.  For example, using Event Log records, you can determine if someone has changed the system time.

During an exam, I found that a system had been infected with malware that installed as a Windows service, and during the installation process, the .exe file had been time-stomped.  Fortunately, when the malicious service was installed, an event source/ID pair of "Service Control Manager/7045" was created, indicating that a new service had been installed on the system.  I was able to correlate that information with other sources (MFT, etc.) to better determine the correct time of when the malicious .exe was created on the system, and nail down the infection vector.

Carving
If you need to carve Windows Event Log records, for any reason...from unallocated space, memory, the pagefile, whatever...the tool to use is Willi Ballenthin's EVTXtract.  The "tool" is really a set of Python scripts that you run consecutively against the data in order to recover Windows Event Log records.  I've used these scripts a couple of times, and even had a fellow team member use them on an engagement and quite literally recover the "smoking gun".

When carving for deleted records on a Windows XP or 2003 system, I use a custom Perl script that I wrote that's based on some of the code I've released with my books.

Timelines
When all is said and done, a blog post on just individual Windows Event Log records isn't really all that valuable; the real value comes from incorporating those records into a timeline.  I've created timelines from just a handful of *.evtx files, for use in triage, etc., and this has proved to be extremely valuable to me.

Resources
WindowsIR: Timeline Analysis
SANS Reading Room: Detecting Security Events Using Windows Workstation Event Logs
NSA: Spotting the Adversary with Windows Event Log Monitoring

Monday, October 20, 2014

Publishing DFIR Books

I recently received notification that Syngress is interested in publishing a second edition of Windows Registry Forensics.  I submitted my proposed outline, the reviews of which were apparently favorable enough to warrant a second edition.

I've blogged before regarding writing DFIR books, and that effort seems to have fizzled a bit.  I wanted to take the opportunity to give it another shot and see if I couldn't resurrect this topic, or a portion of it, just a bit.  So, the purpose of this blog post is two-fold: to set expectations for the upcoming edition, and to offer those in the community who are interested a chance to have input into the development (what goes in it) of the book.

Based on the reviews of my proposed outline, as well as some of the online reviews (Amazon, SecurityXploded, etc.), I wanted to address a few of the comments I tend to see more frequently than most, and then allow those who read this post to make their own comments.

From the SecurityXploded review:

It would have been better if author would have put up the approach or step by step procedure one should follow while analyzing live & offline system.

...and...

Putting things straight and then discussing relevant tools at each step will be more beneficial.

This is always an interesting statement to pursue, in part because I see it pretty often regarding things I've written (such as the HowTo blog posts from July 2013), as well as course materials and presentations I, and others, have put together.  As I'm sure others have done, I try to write something that is general enough to apply across multiple situations, and hope that the reader is able to extrapolate what I've written so it can be used in their specific situation.  My thinking...and it may be wrong...is that analysts must be able to take what they've learned from white papers, presentations, training courses, and other sources, and apply that information and knowledge to what they have available to them.  Not everything that an analyst encounters is going to fit neatly into a training course or whitepaper; there's always going to be some twist or variation, based on the goals of the examination, the data available, etc.  As such, analysts need to be able to build on core principles and basic knowledge to be able to meet the challenges before them.

For example, an analyst should be able to review the SANS checklist for analyzing USB devices on Windows 7, read this HowTo (Correlate an attached device to a user) and this HowTo (Determine user access to files), and be able to determine the files a user may have accessed from a thumb drive that had been attached to their system.  Part of this entails that there has to be some point at which a common, baseline level of knowledge must be assumed.  For example, do you assume that most folks know what the Registry is, or how to determine the CurrentControlSet from a System hive extracted from an acquired image?

This is why I advocate that analysts must share their experiences; no one of us knows everything, but together, we can know more than any one of us.  None of us is going to have the same experiences as everyone else, but we can all learn from each other's experiences.

Another comment from the SecurityXploded blog post:

However if you are expecting to cover all those important registry locations then you will be disappointed and it is not feasible in one book. 

This is exactly right, and goes back to one of the core things I learned about writing DFIR books...you're not going to make everyone happy.  Someone's always going to be disappointed about what they didn't find in the book.  But you know what...that's okay.  It has to be...if someone tried to write a book that covered everything, that book would never be completed.

I've been working directly in the DFIR field for a little more than 14 years, and I will be the first to admit, I have not seen everything there is to see.  I've done DFIR in an internal, FTE position, as well as working as a consultant.  For a bit more than three years, I did PCI response while at IBM ISS.  While my work has evolved over time, there is still a great deal that I haven't seen, and when I write books or blog posts, I most often base what I write on my own direct experiences.  Now and again, someone will share data with me, and I will learn a little something from their experiences.  I like to sprinkle those indirect experiences in, as well, because they broaden the reach of the material.

In WRF 2/e, I do plan to provide examples of analysis processes I've used, based on goals I've been given, as well as things that I've seen during analysis.  However, no one should expect to pick up this book (or any other DFIR book, for that matter) and find the answer to their specific issue or question.  This is particularly true if no effort is made to contact the author while the book was being written.

From the WRF 2/e outline reviews, as well as from other sources, I've seen this little gem:

You need to provide a chapter on Windows Phone 8.

Anyone remember this blog post?  There were two important take-aways from that blog post.  One was that RegRipper does work with the Windows Phone 8 Registry, and what's needed is sample data and input into what examiners find important, so that plugins can be written.  The second is that the hive files were provided to me...I do not own a Windows Phone 8, nor do I have access to these devices, through work or any other means.  Cindy Murphy did send me some hive files extracted from a Windows Phone 8, but that's one system and a very limited amount of data.  I can't write about what I don't know and didn't experience, and can't provide screen captures illustrating data that I do not have.  Would I like to write more about the Windows Phone 8, particularly the Registry?  Sure.  Without a doubt.  But without actual data, I can't really be expected to write much of anything, can I?

Okay, those are just a few of the comments/statements I've seen in reviews, and like I said, I only wanted to address those that I see regularly.  If you have any thoughts or comments about the content that should appear in WRF 2/e, I'd be glad to hear and consider them.  Thanks.

Monday, October 06, 2014

Stuff

IR
Here's a really good...no, I take that back...a great blog post by Sean Mason on "IR muscle memory".  Take the time to give it a read; it'll be worth it, if for no other reason than that it's valuable advice.  Incident response cannot be something that you talk about once and never actually do; it needs to be part of muscle memory.  Can you detect an incident, and if so, how does your organization react?  Or, if you receive an external notification of a security incident, how does your organization respond?


A couple of quotes from the blog post that I found interesting were:

...say, “Containment” without having any understanding of what is involved...

Yes, sometimes a consultant (or CISSP) will say this, and sometimes, there is that lack of understanding of how this will affect the business.  This is why having IR built into the DNA of an organization is so important...understanding how the business will be affected by some response or containment procedure is critical.

There is also a modicum of patience and discipline required when it comes to containment, particularly when it comes to targeted threats.  If the necessary instrumentation is not in place to monitor the environment, then prematurely pulling the trigger on some containment procedures, rather than taking the time to prepare and conduct the containment procedures in a near-simultaneous manner, will likely cause the threat actors to react, changing what they do.  When dealing with these incidents, someone on the response team deciding, "...hey, I can make that change now, so I'll go ahead and take care of it..." can lead to a lot more work.

Another comment from the blog post:

...as a leader and a technologist, you always want everyone to know everything wing-to-wing, and while this can work great in a small organization the reality is that it doesn’t scale for a number of reasons in larger orgs. 

I agree wholeheartedly with this.  For larger teams in particular, it doesn't scale well for everyone to be an expert in everything, but it does work well to have designated pockets of deep expertise.

I know that I'll never be as good a malware reverse engineer as some of the folks I've had the honor of working with.  I can put a great deal of effort into becoming good at it, but that would be effort that I wouldn't be spending on becoming better at DFIR analysis.  Also, I've found that an effective approach is to gather as much as I can about the malware...OS and version it's installed on, where it was found in the file system, persistence mechanism, any artifacts or indicators associated with the malware, etc.  I provide these to the RE analyst, and continue my analysis while they dig deep into the malware itself.  When the RE analyst finds something, they provide it back to me and I continue with my analysis.

A great example of this occurred a number of years ago.  I had found some malware that was used to steal banking credentials (NOT Zeus) and shared it with the RE analyst, providing a second file and the information/intel needed to run the malware.  The malware itself was obfuscated, and in return I got a mutex (I didn't have a memory dump, but I did have a hibernation file and the pagefile), the API used for off-system communications, and other valuable information.  With that, I was able to nail down the specific user affected, the initial infection vector, when the infection occurred, etc.

On smaller teams, you won't be able to have those silos, but on larger teams, in larger organizations, it helps to have pockets of deep expertise, and someone you can reach out to for further assistance.  This is particularly valuable in incidents, due to the ability to perform parallel analysis; rather than having one analyst who may not, say, analyze disk images on a regular basis try to wring as much information and intel out of an acquired image as they can while working an IR engagement, have that task run in parallel by someone with deeper expertise.  You're likely to get the info you need (and more) in a much more timely manner, while not losing any time or focus on the engagement itself.  On smaller teams, you're likely going to have a broader base of skill sets that aren't as deep as what you will find with individuals on larger teams.  Larger teams can take advantage of pockets of skill sets, and even geographic dispersion, to keep the flow of the incident response going.

The rest of Sean's blog post is equally interesting.  Sean goes on to provide his thoughts on people, process, and metrics, all with great insight.

To further Sean's thoughts, a great follow-on to his post is this article from WSJ; in particular, the following quote from that article:

“You are going to get hacked. The bad guy will get you. Whether you are viewed as a success by your board of directors is going to depend on your response.”

IR Fail?
Here's an interesting article from Kelly Jackson Higgins (DarkReading) that talks about Fortune 500 companies having IR teams, but many being pessimistic about their team's ability to handle a data breach.  From my perspective, it's good to see that more firms are moving to having a computer security incident response plan, or CSIRP, and that these companies are actually thinking about things like, "...we need a plan...", and "...how good is our IR team?"  Even if there is pessimism about the current team's effectiveness, at least there's thought going in that direction, and a realization and admission of the team's current state.  From my perspective, this isn't really so much of a failure as it is a success that we've come this far.

From the article:

So why aren't Target, TJ Maxx, and others sharing their war stories to help the next potential victim?

Yeah, you're not going to.  Sharing is not a natural reaction within the DFIR community.  This doesn't mean that it doesn't happen...years ago, while working an IR with a client, I heard that there was a forum in the local area where IT folks from different organizations in the same vertical came together and discussed issues and solutions.  In fact, the DLP solution that my client had in place, which proved to be extremely valuable during the IR engagement, had been purchased as a result of engaging with others in their community.  My point is, sharing can be powerful, and sharing information or intel that helps the next guy when they're attacked doesn't necessarily give away 'secret sauce' or competitive advantage.

Having an IR plan in place isn't enough, either.

No, it's not.  You can't have a plan written by consultants sitting on a shelf...that's worse than not having a plan at all, because the organization will see that binder sitting on the shelf (literally or figuratively) and think that they've checked a box and have achieved some modicum of success.  A CSIRP needs to be organic to an organization (remember Sean's blog post?); it needs to be owned and practiced by the organization.  You can get assistance in writing it, reviewing it, and practicing the processes laid out in the CSIRP.  Having an outside consulting firm come in and run an IR exercise...anything from a table top (in the military, we called this a "tactical exercise without troops", or TEWT) exercise to a full-on IR engagement...is a fantastic idea.

Over the years, I've seen a wide variety of organizations as a consultant.  I've seen those that have been caught completely by surprise by a data breach, those that have IR plans but do not employ them, and I've seen those that have a practiced plan and want someone there to help guide them.  Invariably, those organizations that have been thinking seriously about the need for incident detection and response end up faring much better than others, in a variety of metrics, including the overall cost of the incident.

RegRipper
In a few short weeks, I will be presenting at OSDFCon, talking about some changes to RegRipper that I've had in the works.  I'll say right now that the changes I've been thinking about and working on are not ones that will significantly impact the use of the tool...so come on by and give it a listen.

OSDFCon and OMFW
I've attended and presented at OSDFCon before, and it has always been a really great conference to attend.

Whether you're going to be at OSDFCon or not, I highly recommend that you consider attending the Open Memory Forensics Workshop, or OMFW 2014.  This is the premier conference on memory analysis, put on by the top minds in memory analysis, from the Volatility Foundation.

If you're attending OSDFCon, be sure to come see Mari DeGrazia's presentation!

RegRipper Tutorial
Speaking of RegRipper, this tutorial was posted recently regarding how to set up and use RegRipper...I have to say, I have somewhat mixed feelings about it.  Part of me appreciates the interest in the tool, but part of me is concerned about some of the statements made in the article.

In the name of full disclosure, the author did contact me and ask me to review the article after it was complete.  I responded, but to be honest, at the time that the request came in, I didn't have the cycles to focus on reviewing the article, and I definitely didn't have the cycles to address everything that I read in the article.  So what you're seeing now is what I've worked on a few minutes at a time, here and there, since the article was published.  I'm not going to address everything in the article, because I simply don't have the time to do so, so what I opted to do was pull out just a couple of comments and address them here.

For example...

I have often heard RegRipper mentioned on forums and websites and how it was supposed to make examining event logs, registry files and other similar files a breeze. 

I'm not sure which forums or websites state this, but this is not the case at all.  RegRipper is named as it is because it's intended for use against the Windows Registry...and only the Registry.  It's not intended for use against any other files, in particular the Windows Event Logs.  Right after I first released RegRipper, I did receive a request to have it parse PST files, but that simply wasn't/isn't practical.

As I wrote earlier there is a huge community out there writing plugins for RegRipper.

First off, there is no mention of "a huge community" in the tutorial, up to that point.  Second, there is not a "huge community out there writing plugins".  Yes, some plugins have been submitted over time, and some folks have suggested modifications to plugins...but there is not a "huge community" by any means.  In fact, my understanding is that the vast majority of users simply download the tool and run the GUI...and that's it.  Asking users specifically, via email or in person, what they'd like to see done to make the tool more useful does not often lead to responses such as requests for new plugins.

I could continue with a lot of the different things I found to be amiss (such as in the Downloads section), but it is not my intent to deride this effort.  Again, I greatly appreciate the interest in the tool, and I wanted to address a couple of the comments because I felt that they were wide-spread misconceptions that should be addressed.  I'm not going to do a walk-through and correct everything I find...instead, I'll refer folks to the various blog posts I've written, as well as to Windows Registry Forensics.



Sunday, September 07, 2014

Windows Phone 8 and RegRipper

Last week, Cindy Murphy (@cindymurph) sent me some Registry hive files...from a Windows Phone 8.  This was pretty fascinating, and fortunate, because I'd never seen a Windows phone, and had no idea if it had a Registry.  Well, thanks to Cindy, I now know that it does!

Looking at the hive files was pretty fascinating.  The first thing I did was open one of the smaller hive files in UltraEdit, and I could clearly see that it followed the basic structure of a Registry hive file (see chapter 2 of Windows Registry Forensics).  Next, I opened one of the hives in a viewer, and saw that the hive file opened nicely; however, there were clearly differences in what I expected to see, with respect to a desktop or laptop running Windows.

Finally, I ran a couple of RegRipper plugins against the System hive that Cindy provided, in part because I saw that there were some keys with the same paths as the ones I generally see on Windows systems.  For example, the compname.pl and timezone.pl plugins worked just fine.  For the Software hive, the profilelist.pl plugin worked just fine, although there was only one profile listed.  Interestingly enough, the SAM hive had the correct structure and a root key, but no subkeys.

So, if there's a question as to whether or not RegRipper works when run against hive files from a Windows Phone 8, the answer is "yes", but with a caveat...you can't expect all of the plugins to work, simply because the current RegRipper plugins are intended to be run against hives extracted from Windows computer systems.  I would like to be able to write plugins for the phone hives, but I won't be able to do that until more data becomes available and more analysts can identify what it is they find important and of interest in these hive files.

I'd like to send a thank you to Cindy for sharing the hive files and helping to expand my view into this data source a bit.





Thursday, September 04, 2014

What Does That Look Like, Pt II

In my last post, I talked about sharing what things "look like" on a system, and as something of a follow up to that post, this article was published on the Dell SecureWorks blog, illustrating indicators of the use of lateral movement via the 'at.exe' command.  I wanted to take a moment to provide some additional insight into that post, with a view towards potentially-available indicators that did not make it into the article, simply because I felt that they didn't fit with the focus of the article.

Terminology
Some definitions before moving on...I'm providing these as living, "working" definitions that can be tweaked and modified as we go along.  I know that going into this, there will be those who ask for definitions, as well as those who see the definitions and simply say, "no, that's not what that means"...and that's okay.  We have to start somewhere, right?

Artifact - an element of a data source.  A data source might be a Windows Event Log file, and an artifact would be a Windows Event Log record.

Indicator - an artifact, with some sort of context applied to it.  That context may vary, which means the value of the indicator may vary.  As I mentioned before, sharing indicators, even those we've seen before or those we believe others have already seen is very valuable, in that it allows us to increase the reliability of those indicators.

Some mathy stuff to help provide a description...

Indicator = artifact + context

TTPs - clusters of indicators that can be used to illustrate intruder or user actions

Like I said, these are working definitions that can be tweaked and modified, if necessary.  I do think that they are important to have, as it provides us with a common platform from which to launch discussion and discourse.  Too often, discussions get tangled and confused over terminology and definitions, such as the difference between a Registry key and value; the distinction may be subtle, even irrelevant to some, but to others, they speak to the clarity and precision of the discussion.

If you read through the SecureWorks article, you might think that there are some things missing, particularly from the perspective of the source system in the lateral movement.  The article states that the observed indicator of the lateral movement is an application prefetch file for at.exe, and that's pretty much the case.  The purpose of the article is to show those indicators that (a) are not often looked at, and (b) persist well beyond the removal of tools, etc.

It's clear that for this lateral movement to function properly, the file (or files) launched by the Scheduled Task need to be moved to the destination system before the task is registered.  For example, an executable file might be copied to the destination system using a command such as:

cmd.exe - copy rar.exe \\host\c$\windows\tool1.exe

The above command (which I've obfuscated, for obvious reasons) was found on a source system, in the pagefile.  Again, this was found on a source system involved in lateral movement.  This is just an example of what you might find.  Unlike the use of PSExec, the tool/executable being run needs to be available on the destination system before it can be launched via a Scheduled Task, and the use of the copy command, used in conjunction with compromised credentials, is one way to get the file on the destination system.

Now, let's assume that the tool used ("tool1.exe") is, in fact, a copy of rar.exe and is used to archive some files...you might find an at.exe command similar to the below in the pagefile, as well:

cmd.exe - at \\host 3:00am cmd /c
"c:\windows\tool1.exe a c:\windows\m.exe -m5  c:\windows\r.txt"

Again, this is just an example of what you might find...any actual commands used by the intruder would clearly vary based on what they wanted to achieve.

Something to consider with respect to the above command is the time parameter.  In the article, I provided some indicators to look for with respect to the Scheduled Task being registered, and with the use of the time parameter, you may see a time gap between when the task is registered, and when it's actually executed.  I saw one task that was run a full 30 minutes after it had been registered on the destination system.  This can have an effect on your timeline analysis, so be aware/wary of it.

When it's all said and done, the intruder may then delete files used or created using commands similar to the below:

del \\host\c$\windows\r.txt

What's interesting is that, of the above commands (run on the source system during lateral movement), only the one used to create the Scheduled Task (via at.exe) will result in an application prefetch file being created, as indicated in the "Source Host" section of the article (NOTE: this will only occur on a system that is configured to create application prefetch files; by default, Windows server systems do NOT create application prefetch files).  Unless you have some instrumentation in place for monitoring process creation and command lines (Sysmon, Carbon Black, etc.), or you're able to detect this activity and collect a memory sample from a system relatively quickly, you may miss the above indicators extracted from the pagefile.  Keep in mind, too, that the above commands are simply examples, and were found in the pagefile; as such, they have no time stamps associated with them, and cannot be tied directly to what was seen in the article.
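If all you have is the pagefile (or a strings dump of memory), a quick sweep for command lines consistent with this activity is easy enough to script.  The Python sketch below runs against the output of strings; the patterns are examples only, matching the sample commands shown above, and would need to be tuned for whatever the intruder actually typed.

# Minimal sketch: sweep a strings dump of the pagefile (or a memory image) for
# command lines consistent with the lateral movement described above.  The patterns
# are examples only, matching the sample commands shown in this post.
import re
import sys

PATTERNS = [
    re.compile(r"copy\s+\S+\s+\\\\\S+\\c\$", re.I),       # copy to a remote admin share
    re.compile(r"\bat\s+\\\\\S+\s+\d{1,2}:\d{2}", re.I),   # at.exe remote task registration
    re.compile(r"\bdel\s+\\\\\S+\\c\$", re.I),             # cleanup via the admin share
]

with open(sys.argv[1], "r", errors="replace") as f:
    for line in f:
        if any(p.search(line) for p in PATTERNS):
            print(line.rstrip())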

Also, one of the things I've talked about at great length is how much what we see on a Windows system is controlled by values in the Registry; the above indicators would have been obviated if the system was configured such that the pagefile was cleared on shutdown, and the system was cleanly shut down prior to an image being acquired.

Finally, once again, the purpose of the article posted to the Dell SecureWorks blog was to illustrate those indicators that tend to persist over time.

Thursday, August 21, 2014

What does that "look like"?

We've heard this question a lot, haven't we?

I attended a conference about 2 1/2 years ago, and the agenda for that conference had about half a dozen or more presentations that contained "APT" in their title.  I attended several of them, and I have to say...I walked out of some of them.  However, hearing comments from other attendees, many folks felt exactly the same way; not only were they under-whelmed, but I heard several attendees express their disappointment with respect to the content of these presentations.  During one presentation, the speaker stated that the bad guys, "...move laterally."  One of the attendees asked, "what does that look like on systems?", and the speaker's response was to repeat his previous statement.  It was immediately clear that he had no idea...but then, neither did anyone else.

Corey has asked this question in his blog, and he's also done some great work demonstrating what various activities "look like" on systems, such as when systems are exploited using a particular vulnerability.

What I'm referring to in this post isn't (well, mostly...) something like "...look at this Registry key."  Rather, it's about clusters or groups of artifacts that are indicative of an action or event that occurred on a system.

Clustering
As we share and use artifacts, we can take a step back and look at where those artifacts exist on systems.  Then we begin to see that, depending upon the types of cases we're working, artifacts are clustered in relatively few data sources.  What this means is that on drives that are 500GB, 1TB, or larger, I'm really only interested in a few MB of actual data.  This means that during incident response activities, I can focus my attention on those data sources, and more quickly triage systems.  Rather than backing up a van full of hard drives and imaging 300 or more systems, I can quickly narrow down my approach to the few systems that truly need to be acquired and analyzed.

This also means a significant speed-up for digital analysis, as well.  I don't maintain tables of how long it takes to acquire different hard drives, but not long ago, I had one hard drive that took 9 hrs to acquire, and another that was 250GB that took 5 hrs to acquire.  Knowing the data sources that would provide the biggest bang for the buck, I could have retrieved those after I connected the hard drive to the write-blocker, but before I acquired the hard drive image.

Reliability
As we share and use artifacts, we begin to see things again and again.  This is a very good thing, because it shows us that the artifacts are reliable.

Like others, including Corey, I don't see sharing artifacts on any sort of scale.  Yes, there are sites such as forensicartifacts.com, but they don't appear to be heavily trafficked or used.  Also, I generally don't find the types of artifacts I'm looking for at those sites.  I've been achieving reliability, on my own, for various artifacts by using them across cases.  For example, I found one particular artifact that nailed down a particular variant of a lateral movement technique; once I completed my analysis of that system, I went back and searched the entire timeline I'd developed for that system, and found that that artifact was unique to the event I was interested in.  I've since been able to use that artifact to quickly search successive timelines, significantly speeding up my analysis process.  Not finding that artifact is equally important, because (a) it tells me that I need to look for something else, and (b) in searching for it, I can show a client that I've done an extremely thorough job of analysis, and done it much quicker.

Oxidation
A great reason for sharing artifacts is the oxidation of those artifacts.  Okay, so what does this mean?

The type of artifacts I'm referring to are, for the most part, not single artifacts.  Rather, when I talk about what something "looks like" on a system, I'm generally referring to a number of artifacts (Windows Event Log records, Registry keys/values, etc.) clustered "near" each other in a timeline.  How "near" they are...ranging from within the same second to perhaps a couple of seconds apart...can vary.  So, let's say that you're doing some testing and replicating a particular activity, and you immediately "freeze" your test system and find six artifacts that, when clustered near each other, are indicative of the action you took.

How many times as incident responders and digital forensic analysts do we get access to a system immediately after the initial intrusion occurred?  What's more likely to happen is that the initial response occurs hours, days, or even weeks after the initial incident, and is the result of an alert or a victim notification.  Given that passage of time, these artifact clusters tend to oxidize as the system continues to run, and to be used.  For example, if a system becomes infected via a browser drive-by and IE is used in the infrastructure, some artifacts of the drive-by may be oxidized simply due to the normal IE cache maintenance mechanism.  Logs with fixed file sizes roll over, the operating system may delete files based on some sort of timing mechanism, etc.  All of this assumes, of course, that someone hasn't done something to purposely remove those artifacts.

Someone may share a cluster of six artifacts that indicate a particular event.  As others incorporate those artifacts into their analysis, the reliability of that cluster grows.  Then, when someone analyzes a system on which two of those six artifacts have oxidized and shares their findings, we can see how reliable the remaining four artifacts are.

I specifically chose the term "oxidize" because terms such as "expire" or "expired" seem to imply that the lifetime of the artifact has simply passed.  Sometimes, specific artifacts may not be part of a cluster due to specific actions taken by the intruder.  For example, we've all seen files deleted, time stomped, and other actions taken that force artifacts to be removed, rather than allowing them to time out.

Format
At the moment, there are many questions with respect to sharing artifacts; one is the format.  What format is most useful to get examiners to incorporate those artifacts into their analysis?  Because, after all...what's the point of sharing these artifacts if others aren't going to incorporate them into their analysis processes?

During July, 2013, I posted a number of articles to this blog that I referred to as "HowTos"...they were narratives that described what to look for on systems, given various analysis goals.  Unfortunately, the response (in general) to those articles was the Internet equivalent of "cool story, bro."

A great indicator that I've used comes from pg 553 of The Art of Memory Forensics.  The indicator I'm referring to is pretty easy to pick out on that page...I've highlighted it in green in my book.  The authors shared it with us, and I found it valuable enough to write a RegRipper plugin so that I can incorporate that information directly into my own timelines for analysis...and yes, I have found this artifact extremely accurate and reliable, and as such, very valuable.  Having incorporated this indicator into my work, I began to see other artifacts clustered "around" the indicator in my timelines.  I also found that the indicator maintained its reliability when some of those other artifacts were oxidized due to the passage of time; in one instance, the *.pf file was deleted due to how Windows XP manages the contents of that folder.
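
To give a sense of what "incorporating an indicator directly into a timeline" can look like, here's a minimal sketch that emits a pipe-delimited, five-field timeline entry (time|source|system|user|description) for a Registry key LastWrite time; the key path and host name shown are hypothetical, and this is not the specific indicator from the book.

# Minimal sketch: emit a five-field, pipe-delimited timeline entry for a
# Registry key LastWrite time.  The key path and host name are hypothetical;
# in practice, the LastWrite time would come from a hive parser (e.g., a
# RegRipper plugin).

def tln_entry(epoch_time, source, system, user, description):
    """Return a pipe-delimited entry: time|source|system|user|description."""
    return "%d|%s|%s|%s|%s" % (epoch_time, source, system, user, description)

last_write = 1404000000
print(tln_entry(last_write, "REG", "HOSTNAME", "-",
                "HKLM/Software/SomeVendor/SomeKey LastWrite"))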

Final Thoughts
A good deal of the indicators that I'm referring to can be abstracted to more general cases.  What I mean is, an artifact cluster that is indicative of a targeted threat (or "APT") intrusion can be abstracted and used to determine if a system was infected with commodity malware.  Other artifact clusters can similarly be extrapolated to more general cases...it's really more about how reliable the artifacts in the cluster are, than anything else.

Wednesday, July 30, 2014

Book Review: "The Art of Memory Forensics"

I recently received a copy of The Art of Memory Forensics (thanks, Jamie!!), with a request that I write a review of the book.  Being a somewhat outspoken proponent of constructive and thoughtful feedback within the DFIR community, I agreed.

This is the seminal resource/tome on memory analysis, brought to you by THE top minds in the field.  The book covers Windows, Linux, and Mac memory analysis, and as such must be part of every DFIR analyst's reading and reference list.  The book is 858 pages (not including the ToC, Introduction, and index), and is quite literally packed with valuable information.

Some context is necessary...I'm writing this review as someone who has used Volatility for some time, albeit not to its fullest possible extent.  I'm more of an incident responder, and not so much a malware reverse engineer; I tend to work with some really good malware RE folks and usually go to them for the deeper stuff.  I've converted hibernation files and found some pretty interesting artifacts within the resulting raw memory (my case notes are rife with some of these artifacts), and I've reached out to Jamie Levy on several occasions for support.  In addition, I recently completed the five-day Volatility training course.

Also, I spend most of my time working on Windows systems; as such, I cannot offer a great deal of value, nor insight, when it comes to reviewing the information that this book contains on Linux and Mac memory. However, I have worked with some of the folks who provided material for these sections, and I've seen them present at the Open Memory Forensics Workshop (OMFW), and to say that these folks are competent is a gross understatement.

That being said, this book is the most comprehensive reference available that covers the topic of memory analysis, from start to finish.  The authors begin the book by providing a detailed description of system architecture, as it pertains to memory, discussing address translation and paging (among other topics) before progressing into data structures.  This ground-up approach provides the foundational knowledge that's really required for a complete understanding of memory analysis.  The book then proceeds with a complete walk-through of the Volatility Framework itself, covering topics such as plugins, basic and advanced usage, etc.  There is even a chapter that covers just memory acquisition, addressing tools, tool usage, and hive extraction (using the TSK tools) to assist in profile identification.  All of this information is covered prior to addressing actual memory analysis, so that by the time a reader gets to chapter 5, they should have some understanding of memory structure and how to acquire memory.

Something pointed out in chapter 4 (Memory Acquisition) is worth repeating...that memory acquisition via software is a "topic of heated debate".  While the authors do provide a comprehensive list of software tools that can be used to acquire memory, they also state that the list is not to be viewed as an evaluation, nor should the reader consider the fact that a tool is on the list as an endorsement of that tool.  As such, YMMV based on personal experience...

Throughout the book, the authors bring their incredible wealth of experience to bear, as well.  After all, who better to write a book such as this than the folks who developed the Volatility Framework as a means to meet their own needs in memory analysis, while working on what are arguably the most technologically complex cases around?  The section on Windows memory forensics covers 14 chapters, and interspersed throughout those chapters are examples of how memory analysis can be used to assist in a wide range of analysis.  Each section starts with an "objectives" section that outlines what the reader can expect to understand once they've completed the section, and many sections provide IRL (or near-IRL) examples of how to use Volatility to support the analysis in question.  As such, the authors are not just providing a "...use this plugin..." recipe, as much as they're also providing examples of what the output of the plugin means, and how it pertains to the investigation or analysis in question.

At this point, I've had my copy of the book for a few days, and I've had a ruler and highlighter on hand since I first cracked the spine.  The formatting of the book is such that I've already started adding my own notes to the margins, based on my own exams.  I've found it valuable to go back to case notes and write notes in the margins of the book, adding context from my own exams to what the authors have provided.  This simply increases the value of the book as a reference resource.  In addition, the book is rife with caveats, concerns, and tidbits...such as the section on Timestomping Registry Keys, and what intruders have done that modifies the LastWrite time of the Policy\Secrets key in the Security hive.  There's even an entire section on timelining!

If you have an interest in memory analysis, this is THE MUST-HAVE resource!  If you or anyone on your team analyzes Windows systems, simply having this book sit on a shelf isn't enough.  Do NOT keep this book on a shelf...keep it on your desk, and open!  Within the first two weeks of this book arriving in your hands, it should have a well-worn spine, and dirty fingerprints and stains on the pages!  If you have a team of analysts, purchase multiple copies and engage the analysts in discussions.  If one of your analysts receives a laptop system for analysis and the report does not include information regarding the analysis of the hibernation file, I would recommend asking them why - they may have a perfectly legitimate reason for not analyzing this file, but if you had read even just a few chapters of this book, you'd understand why memory analysis is too important to ignore.

Thursday, July 24, 2014

File system ops, testing phase 2

As I mentioned in my previous post on this topic, there were two other tests that I wanted to conduct with respect to file system operations and the effects an analyst might expect to observe within the MFT, and the USN change journal.  My thoughts were that if an intruder were accessing a system via RDP, they might not do the drag-and-drop method to move files, or if they were accessing the system via a RAT and they only had command line access, they might use native, command line tools to conduct file operations.

Testing Protocol
All of the same conditions from the previous tests apply; in fact, I didn't even reboot the VM between tests.  What I wanted to do this time was look at what effects one could expect to see for copy and move operations conducted via the command line, rather than via the shell.  I wanted to run these tests, as they would better represent the file system operations that may occur during a malware infection.

For this set of tests, I logged into the VM, opened a command prompt and typed the following commands:

C:\>copy c:\tools\eula_30.txt c:\temp\eula_31.txt
C:\>move c:\tools\procmon.exe c:\temp\procmon.exe

Test 1 - copy operation, via the command line

Original file record:

44657      FILE Seq: 3    Links: 1   
[FILE],[BASE RECORD]
.\tools\eula_30.txt
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri Jul 28 14:32:44 2006 Z
    C: Thu Jul 17 20:38:52 2014 Z
    B: Fri Jul 28 14:32:44 2006 Z
  FN: eula_30.txt  Parent Ref: 44361/32
  Namespace: 3
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri Jul 28 14:32:44 2006 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri Jul 28 14:32:44 2006 Z

Resulting file record:

23643      FILE Seq: 6    Links: 1   
[FILE],[BASE RECORD]
.\temp\eula_31.txt
    M: Fri Nov  8 15:17:17 2013 Z
    A: Thu Jul 24 14:57:41 2014 Z
    C: Thu Jul 24 14:57:41 2014 Z
    B: Thu Jul 24 14:57:41 2014 Z
  FN: eula_31.txt  Parent Ref: 44311/7
  Namespace: 3
    M: Thu Jul 24 14:57:41 2014 Z
    A: Thu Jul 24 14:57:41 2014 Z
    C: Thu Jul 24 14:57:41 2014 Z
    B: Thu Jul 24 14:57:41 2014 Z

From the USN change journal (as with the previous test, these entries are not in order):

eula_31.txt: Named_Data_Extend,Data_Extend,Data_Overwrite,Stream_Change  FileRef: 23643/6  ParentRef: 44311/7
eula_31.txt: Data_Extend,Data_Overwrite  FileRef: 23643/6  ParentRef: 44311/7
eula_31.txt: Close,File_Create  FileRef: 23643/6  ParentRef: 44311/7
eula_31.txt: Data_Extend,Data_Overwrite,Stream_Change  FileRef: 23643/6  ParentRef: 44311/7
eula_31.txt: Data_Extend  FileRef: 23643/6  ParentRef: 44311/7
eula_31.txt: Named_Data_Extend,Data_Extend,Data_Overwrite,Named_Data_Overwrite,Close,Stream_Change  FileRef: 23643/6  ParentRef: 44311/7
eula_31.txt: Named_Data_Extend,Data_Extend,Data_Overwrite,Named_Data_Overwrite,Stream_Change  FileRef: 23643/6  ParentRef: 44311/7
eula_31.txt: File_Create  FileRef: 23643/6  ParentRef: 44311/7

Results:
The results of the file copy operation, with respect to the MFT record (i.e., attribute time stamps, parent ref number, etc.), are identical to what we saw when the test was performed via the shell.  The most notable difference is the absence of references to consent.exe being launched in the USN change journal data.

Test 2 - move operation, via the command line

File record following previous test:

22977      FILE Seq: 12   Links: 1   
[FILE],[BASE RECORD]
.\tools\procmon.exe
    M: Thu Jul 17 20:40:35 2014 Z
    A: Thu Jul 17 20:40:35 2014 Z
    C: Thu Jul 17 20:40:35 2014 Z
    B: Fri May 31 20:54:54 2013 Z
  FN: procmon.exe  Parent Ref: 44361/32
  Namespace: 3
    M: Thu Jul 17 20:40:35 2014 Z
    A: Thu Jul 17 20:40:35 2014 Z
    C: Thu Jul 17 20:40:35 2014 Z
    B: Fri May 31 20:54:54 2013 Z

File record following move operation:

22977      FILE Seq: 12   Links: 1   
[FILE],[BASE RECORD]
.\temp\procmon.exe
    M: Thu Jul 17 20:40:35 2014 Z
    A: Thu Jul 17 20:40:35 2014 Z
    C: Thu Jul 24 14:57:55 2014 Z
    B: Fri May 31 20:54:54 2013 Z
  FN: procmon.exe  Parent Ref: 44311/7
  Namespace: 3
    M: Thu Jul 17 20:40:35 2014 Z
    A: Thu Jul 17 20:40:35 2014 Z
    C: Thu Jul 17 20:40:35 2014 Z
    B: Fri May 31 20:54:54 2013 Z

From the USN change journal:

procmon.exe: Rename_New_Name,Close  FileRef: 22977/12  ParentRef: 44311/7
procmon.exe: Rename_New_Name  FileRef: 22977/12  ParentRef: 44311/7
procmon.exe: Rename_Old_Name  FileRef: 22977/12  ParentRef: 44361/32

Results
The results of this test were similar to the results observed in the previous test, with the exception that consent.exe was not run.  The only change to the record was the parent ref number, and that change was reflected in the MFT entry change (C) time stamp in the $STANDARD_INFORMATION attribute being updated.

Take Aways
A couple of interesting "take aways" from this testing...

1.  When a file is copied or moved via the shell, we can expect to see consent.exe run, and on workstation systems (Win7, Win8.1) an application prefetch/*.pf file created.  This artifact will be very beneficial on Win8.1, as the structure of *.pf files on that platform allows for up to 8 launch times to be recorded, adding much more granularity to our timelines (see the sketch following this list).

2.  If an intruder accesses a system using compromised credentials, such as via RDP, there can be a great deal of activity 'recorded' in various locations within the system (i.e., Registry, Windows Event Log, etc.).  However, if an intruder is accessing the system via a RAT, there will be an apparent dearth of artifacts on the system, unless the analyst knows where to look.  This, of course, is in lieu of any additional instrumentation used to monitor the endpoints.

3.  For those who perform dynamic analysis of malware and exploit kits for the purposes of developing threat intel, adding this sort of thing to your analysis would very likely assist in developing a much more detailed picture of what's happening on the host, even weeks or months after the fact.
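
Regarding the first take-away, here's a minimal sketch for pulling those launch times out of a version 26 (Win8/8.1) *.pf file; the assumption that up to eight 64-bit FILETIME run times sit back-to-back at offset 0x80 is my reading of the publicly available format documentation, so verify it against known-good test data before relying on it.

# Minimal sketch: extract up to 8 run times from a version 26 (Win8/8.1)
# application prefetch file.  Offsets are assumptions based on public format
# documentation; verify against known-good test data.
import struct, sys
from datetime import datetime, timedelta

def filetime_to_str(ft):
    """Convert a 64-bit FILETIME (100ns intervals since 1601-01-01) to text."""
    if ft == 0:
        return "not set"
    return str(datetime(1601, 1, 1) + timedelta(microseconds=ft / 10))

def pf_runtimes(path):
    with open(path, "rb") as f:
        data = f.read()
    version = struct.unpack_from("<I", data, 0)[0]
    if version != 26:            # other prefetch versions lay this out differently
        raise ValueError("Not a version 26 prefetch file: %d" % version)
    # Assumption: eight FILETIME run times stored back-to-back at offset 0x80.
    times = struct.unpack_from("<8Q", data, 0x80)
    return [filetime_to_str(t) for t in times if t != 0]

if __name__ == "__main__":
    for rt in pf_runtimes(sys.argv[1]):
        print(rt)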

Final Note
I know that this testing is pretty rudimentary, and that much of the results have been documented already (via MS Knowledge Base articles, the SANS 2012 DFIR poster, etc.), but I wanted to take the testing a step further by looking at other artifacts in the individual MFT records, as well as the USN change journal.  In a lot of ways, the results of these tests serve as IoCs that can be used to help analysts add additional context to their timelines, and ultimately to their analysis.

Tuesday, July 22, 2014

File system ops, effects on MFT records

I recently conducted some testing of different actions on a Windows 7 system, with the specific purpose of identifying artifacts within the file system (in this case, the MFT and the USN change journal), particularly within individual records.  I wanted to take a look at the effects of different actions to see what they "look like" within the individual records, as well as within the USN change journal, in hopes that things would pop out that could be used during forensic exams.  Once I completed my testing, I decided to share what I'd done and what I'd found, in hopes that others might find it useful.

Testing Platform: 32-bit Windows 7 Ultimate VM running in Virtual Box.

Tools: My own custom stuff.  I updated the MFT parser included with WFA 4/e, and used usnj.pl to parse the USN change journal, and parse.pl to translate the output of the change journal parser into a timeline.  This page at MS identifies the USN record v2 structure, and the reason codes, used by usnj.pl.
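
If you want to roll your own parser rather than use usnj.pl, here's a minimal sketch that walks USN_RECORD_V2 structures in an extracted change journal ($UsnJrnl:$J) file; the structure layout follows the MS page referenced above, and only a subset of the reason flags is decoded, purely for illustration.

# Minimal sketch: walk USN_RECORD_V2 structures in an extracted $UsnJrnl:$J
# file.  Only a subset of the reason flags is decoded here.
import struct, sys

REASONS = {
    0x00000001: "Data_Overwrite",   0x00000002: "Data_Extend",
    0x00000004: "Data_Truncation",  0x00000100: "File_Create",
    0x00000200: "File_Delete",      0x00000800: "Security_Change",
    0x00001000: "Rename_Old_Name",  0x00002000: "Rename_New_Name",
    0x00200000: "Stream_Change",    0x80000000: "Close",
}

def parse_usnj(path):
    with open(path, "rb") as f:
        data = f.read()
    offset = 0
    while offset < len(data) - 60:
        rec_len, major = struct.unpack_from("<IH", data, offset)
        if rec_len == 0:                  # sparse/zeroed region; skip ahead
            offset += 8
            continue
        if major == 2:
            (file_ref, parent_ref, _usn, _ts, reason, _srcinfo, _secid,
             _attrs, name_len, name_off) = struct.unpack_from(
                "<QQQQIIIIHH", data, offset + 8)
            name = data[offset + name_off: offset + name_off + name_len].decode("utf-16-le")
            flags = ",".join(v for k, v in REASONS.items() if reason & k) or hex(reason)
            # File reference = 48-bit MFT record number + 16-bit sequence number
            print("%s: %s  FileRef: %d/%d  ParentRef: %d/%d" % (
                name, flags,
                file_ref & 0xFFFFFFFFFFFF, file_ref >> 48,
                parent_ref & 0xFFFFFFFFFFFF, parent_ref >> 48))
        offset += rec_len

if __name__ == "__main__":
    parse_usnj(sys.argv[1])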

Methodology:  I started by writing down and outlining all of the tests that I wanted to perform.  I had a total of 5 tests that I wanted to run in order to see what the effects of each individual action were on the MFT, and on individual records within the MFT.  I picked 5 different files within the VM to use in each test, respectively.  Once that was done, I added the VM to FTK Imager as an evidence item and extracted the MFT; this was my "before" sample.  Then, I launched the VM, performed all of the tests, logged out and shut down the VM, and extracted the MFT (my "after" sample) and the USN change journal.

All testing occurred on 17 July 2014.  In all of the tests, I've changed the font color for items of interest to red.

Test 1 - Renaming a file
This was a simple test, but something I hadn't specifically looked at before.  All I did with this one was open a command prompt, change to the directory in question, and issued the command, "ren eula.txt eula30.txt".

Here's the record details from before the test was run:

44657      FILE Seq: 3    Links: 1   
[FILE],[BASE RECORD]
.\tools\Eula.txt
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri Jul 28 14:32:44 2006 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri Jul 28 14:32:44 2006 Z
  FN: Eula.txt  Parent Ref: 44361/32
  Namespace: 3
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri Nov  8 15:17:17 2013 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri Nov  8 15:17:17 2013 Z

...and here are the record details after the test:

44657      FILE Seq: 3    Links: 1   
[FILE],[BASE RECORD]
.\tools\eula_30.txt
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri Jul 28 14:32:44 2006 Z
    C: Thu Jul 17 20:38:52 2014 Z
    B: Fri Jul 28 14:32:44 2006 Z
  FN: eula_30.txt  Parent Ref: 44361/32
  Namespace: 3
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri Jul 28 14:32:44 2006 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri Jul 28 14:32:44 2006 Z

Again, this was an atomic action; that is to say, all I did with respect to this file was run the ren command.  I honestly have no idea why the last accessed (A) and creation (B) dates from the $STANDARD_INFORMATION attribute would be copied into the corresponding time stamps of the $FILE_NAME attribute for a rename operation.  However, notice that very little else about the record changed; the record number (from the DWORD at offset 0x2C within the record header), the sequence number, and the parent file reference number remained the same, which is to be expected.
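
For reference, the header values mentioned above are easy to pull directly out of a 1024-byte FILE record; here's a minimal sketch, assuming the standard record header layout (sequence number at offset 0x10, link count at 0x12, flags at 0x16, and the record number in the DWORD at offset 0x2C, as noted above).

# Minimal sketch: pull the header fields discussed above out of a single
# 1024-byte FILE record (e.g., one sliced out of an extracted $MFT).
import struct

def parse_file_record_header(record):
    if record[0:4] != b"FILE":
        return None                       # bad or unused record
    seq_num = struct.unpack_from("<H", record, 0x10)[0]
    links   = struct.unpack_from("<H", record, 0x12)[0]
    flags   = struct.unpack_from("<H", record, 0x16)[0]
    rec_num = struct.unpack_from("<I", record, 0x2C)[0]   # DWORD at 0x2C
    return {
        "record":    rec_num,
        "sequence":  seq_num,
        "links":     links,
        "in_use":    bool(flags & 0x0001),
        "directory": bool(flags & 0x0002),
    }

# Usage: read the extracted $MFT in 1024-byte chunks and parse each header.
# with open("MFT", "rb") as f:
#     while True:
#         rec = f.read(1024)
#         if len(rec) < 1024:
#             break
#         hdr = parse_file_record_header(rec)
#         if hdr:
#             print(hdr)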

Here are the changes recorded in the USN change journal:

eula_30.txt: Rename_New_Name  FileRef: 44657/3  ParentRef: 44361/32
eula_30.txt: Rename_New_Name,Close  FileRef: 44657/3  ParentRef: 44361/32
Eula.txt: Rename_Old_Name  FileRef: 44657/3  ParentRef: 44361/32

Now, these changes are not in the specific order in which they occurred...they're listed in a timeline, so they occurred within the same second.  But it is interesting that there are Rename_Old_Name and Rename_New_Name identifiers for the actions that took place.  Perhaps because a good deal of the analysis work that I do comes from corporate environments, I've been seeing a lot of Windows 7 systems with VSCs disabled in the Registry; as such, I haven't had access to an older version of the MFT via a VSC in order to compare record contents, on a per-record basis.  By incorporating the USN change journal into my analysis, I can get some additional context with respect to what I'm seeing.

The use of the USN change journal can also be useful in identifying activity that occurs during a malware infection.  For example, in some cases, malware may create a downloader, use that to download another bit of malware, and then delete the original downloader.  The USN change journal can help you identify that activity, even if the MFT record for the original downloader has been reused and overwritten.
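
As a simple example of that kind of use, here's a minimal sketch that walks a list of already-parsed USN entries (each one a hypothetical filename/reason string/file reference tuple) and flags any file reference that shows both a File_Create and a File_Delete.

# Minimal sketch: flag files that were both created and later deleted, based
# on already-parsed USN entries.  Each entry is a hypothetical
# (filename, reasons, fileref) tuple, e.g. from the parser sketched earlier.

def created_and_deleted(entries):
    created, deleted, names = set(), set(), {}
    for name, reasons, fileref in entries:
        names[fileref] = name
        if "File_Create" in reasons:
            created.add(fileref)
        if "File_Delete" in reasons:
            deleted.add(fileref)
    return [(ref, names[ref]) for ref in created & deleted]

# Hypothetical entries illustrating a downloader that was created, used, and
# then deleted.
entries = [
    ("dropper.exe", "File_Create,Close", "22990/3"),
    ("dropper.exe", "Data_Extend,Close", "22990/3"),
    ("dropper.exe", "File_Delete,Close", "22990/3"),
    ("report.docx", "File_Create,Close", "23001/5"),
]

for ref, name in created_and_deleted(entries):
    print("Created and deleted: %s (FileRef %s)" % (name, ref))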

Test 2 - Adding an ADS to a file
For this test, I added an ADS to a file by typing echo "This is an ADS" > procmon.chm:ads.txt at the command prompt.  Now, this file is the ProcMon help file that is included when you download the ProcMon archive from SysInternals, and as such, it already had a Zone.Identifier ADS associated with the file.

The "before" record:

44401      FILE Seq: 11   Links: 1   
[FILE],[BASE RECORD]
.\tools\procmon.chm
    M: Fri Nov  8 15:17:17 2013 Z
    A: Mon Nov 28 16:46:42 2011 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Mon Nov 28 16:46:42 2011 Z
  FN: procmon.chm  Parent Ref: 44361/32
  Namespace: 3
    M: Fri Nov  8 15:17:16 2013 Z
    A: Fri Nov  8 15:17:16 2013 Z
    C: Fri Nov  8 15:17:16 2013 Z
    B: Fri Nov  8 15:17:16 2013 Z
**ADS: Zone.Identifier

...and the "after" record:

44401      FILE Seq: 11   Links: 1   
[FILE],[BASE RECORD]
.\tools\procmon.chm
    M: Thu Jul 17 20:39:22 2014 Z
    A: Mon Nov 28 16:46:42 2011 Z
    C: Thu Jul 17 20:39:22 2014 Z
    B: Mon Nov 28 16:46:42 2011 Z
  FN: procmon.chm  Parent Ref: 44361/32
  Namespace: 3
    M: Fri Nov  8 15:17:16 2013 Z
    A: Fri Nov  8 15:17:16 2013 Z
    C: Fri Nov  8 15:17:16 2013 Z
    B: Fri Nov  8 15:17:16 2013 Z
**ADS: ads.txt
**ADS: Zone.Identifier

In this case, you'll notice that only the M (modified) and C (MFT entry change) times in the $STANDARD_INFORMATION attribute have changed.  I would expect the C (entry changed) time stamp to change, as the addition of an ADS constitutes a change to the MFT record itself, but interestingly, the M (last modified) time stamp changed as well.

From the USN change journal:

procmon.chm: Stream_Change  FileRef: 44401/11  ParentRef: 44361/32
procmon.chm: Named_Data_Extend,Close,Stream_Change  FileRef: 44401/11  ParentRef: 44361/32
procmon.chm: Named_Data_Extend,Stream_Change  FileRef: 44401/11  ParentRef: 44361/32

So now, if an ADS is suspected, a good place to look for indications of when the ADS was added to a file (or folder) would be to parse the USN change journal and look for stream_change entries.  This can be valuable during an examination because an ADS does not have any unique time stamps associated with it within the MFT record.  An ADS is a $DATA attribute within the MFT record, and as such, does not have a unique $STANDARD_INFORMATION or $FILE_NAME attribute associated with it.
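
Building on the same idea, here's a minimal sketch for surfacing likely ADS additions from already-parsed USN entries; the tuple layout and sample data are hypothetical.

# Minimal sketch: surface possible ADS additions from already-parsed USN
# entries.  Each entry is a hypothetical (timestamp, filename, reasons) tuple;
# filtering on Stream_Change follows the approach described above, and
# requiring Named_Data_Extend as well can tighten the results.

def possible_ads_additions(entries):
    return [e for e in entries if "Stream_Change" in e[2]]

entries = [
    ("Thu Jul 17 20:39:22 2014 Z", "procmon.chm",
     "Named_Data_Extend,Close,Stream_Change"),
    ("Thu Jul 17 20:39:30 2014 Z", "notes.txt", "Data_Extend,Close"),
]

for ts, name, reasons in possible_ads_additions(entries):
    print("%s  possible ADS added to %s (%s)" % (ts, name, reasons))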

Test 3 - File system tunneling
In this test, I created a batch file named "tunnel.bat" in the C:\Tools folder, with the following contents:

del procmon.exe
echo "This is a test file" > procmon.exe

For this test, I ran the batch file, which deletes procmon.exe and then creates a new file named procmon.exe in the same folder, in relatively short order.  In fact, for file system tunneling to take effect, the entire process has to happen within 15 seconds (by default; the time can be changed, or file system tunneling itself disabled, via the Registry).  As we'll see, the entire process took place within a second.
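
If you want to check or change the tunneling behavior on a test system, the relevant values (per MS KB article 172190, as I recall) live under the FileSystem key; neither value exists by default, so the query may come back empty, and setting MaximumTunnelEntries to 0 is what disables tunneling on NTFS.

C:\>reg query "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v MaximumTunnelEntryAgeInSeconds
C:\>reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v MaximumTunnelEntries /t REG_DWORD /d 0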

The original MFT record appears as follows:

44631      FILE Seq: 4    Links: 1   
[FILE],[BASE RECORD]
.\tools\Procmon.exe
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri May 31 20:54:54 2013 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri May 31 20:54:54 2013 Z
  FN: Procmon.exe  Parent Ref: 44361/32
  Namespace: 3
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri Nov  8 15:17:17 2013 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri Nov  8 15:17:17 2013 Z
**ADS: Zone.Identifier

After the test was run, the MFT record appeared as follows:

44631      FILE Seq: 5    Links: 1   
[FILE],[DELETED],[BASE RECORD]
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri May 31 20:54:54 2013 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri May 31 20:54:54 2013 Z
  FN: Procmon.exe  Parent Ref: 44361/32
  Namespace: 3
    M: Fri Nov  8 15:17:17 2013 Z
    A: Fri Nov  8 15:17:17 2013 Z
    C: Fri Nov  8 15:17:17 2013 Z
    B: Fri Nov  8 15:17:17 2013 Z
**ADS: Zone.Identifier

Here's the new file record for the file:

22977      FILE Seq: 12   Links: 1   
[FILE],[BASE RECORD]
.\tools\procmon.exe
    M: Thu Jul 17 20:40:35 2014 Z
    A: Thu Jul 17 20:40:35 2014 Z
    C: Thu Jul 17 20:40:35 2014 Z
    B: Fri May 31 20:54:54 2013 Z
  FN: procmon.exe  Parent Ref: 44361/32
  Namespace: 3
    M: Thu Jul 17 20:40:35 2014 Z
    A: Thu Jul 17 20:40:35 2014 Z
    C: Thu Jul 17 20:40:35 2014 Z
    B: Fri May 31 20:54:54 2013 Z
[RESIDENT]

Notice that the only difference between the two 44631 records is the sequence number, and that the original file record is now marked "DELETED".  What this illustrates is that the MFT record itself is NOT reused during file system tunneling on NTFS, and that a new record is created during the operation.  This was something I'd wondered about for some time, and now I can see the effect of file system tunneling.

We can see in this case that the MAC times for the new file are all for the date of the testing, and that the B (creation) date is from the original file record.  Also, notice the $FILE_NAME attribute time stamps of the new file...very interesting.

Also, because the file went from being a PE file to a string, the resulting file is now resident; I didn't include the hex dump of the file contents, extracted from the MFT record.

This blog post (from 2005) explains why tunneling exists at all.

From the USN change journal:

procmon.exe: Data_Extend,Close,File_Create  FileRef: 22977/12  ParentRef: 44361/32
procmon.exe: Data_Extend,File_Create  FileRef: 22977/12  ParentRef: 44361/32
procmon.exe: File_Create  FileRef: 22977/12  ParentRef: 44361/32
Procmon.exe: File_Delete,Close  FileRef: 44631/4  ParentRef: 44361/32

When I first read about file system tunneling, I was curious as to whether the original MFT record for the deleted file was simply reused, and this test clearly illustrates that is not the case.

Additional Resources:
- Here's a jIIr post from Corey Harrell in which he discusses the use of the USN change journal and file system tunneling
- Eric Huber's blog post on file system tunneling
- Blazer Catzen discussed some file system tunneling testing he'd done on David Cowen's Forensic Lunch podcast, and posted the presentation he'd put together on the subject.

Test 4 - Copy a file to another location in the same volume
In this test, I copied C:\Windows\Logs\IE9_NR_setup.log to C:\Users\IE9_NR_setup.log, using drag-n-drop via the Windows Explorer shell.

From "before" MFT:

96296      FILE Seq: 3    Links: 2   
[FILE],[BASE RECORD]
.\Windows\Logs\IE9_NR_Setup.log
    M: Fri Nov  8 13:26:02 2013 Z
    A: Fri Nov  8 13:26:02 2013 Z
    C: Fri Nov  8 13:26:02 2013 Z
    B: Fri Nov  8 13:26:02 2013 Z
  FN: IE9_NR~1.LOG  Parent Ref: 1966/1
  Namespace: 2
    M: Fri Nov  8 13:26:02 2013 Z
    A: Fri Nov  8 13:26:02 2013 Z
    C: Fri Nov  8 13:26:02 2013 Z
    B: Fri Nov  8 13:26:02 2013 Z
  FN: IE9_NR_Setup.log  Parent Ref: 1966/1
  Namespace: 1
    M: Fri Nov  8 13:26:02 2013 Z
    A: Fri Nov  8 13:26:02 2013 Z
    C: Fri Nov  8 13:26:02 2013 Z
    B: Fri Nov  8 13:26:02 2013 Z

From the "after" MFT, the original file:

96296      FILE Seq: 3    Links: 2   
[FILE],[BASE RECORD]
.\Windows\Logs\IE9_NR_Setup.log
    M: Fri Nov  8 13:26:02 2013 Z
    A: Fri Nov  8 13:26:02 2013 Z
    C: Fri Nov  8 13:26:02 2013 Z
    B: Fri Nov  8 13:26:02 2013 Z
  FN: IE9_NR~1.LOG  Parent Ref: 1966/1
  Namespace: 2
    M: Fri Nov  8 13:26:02 2013 Z
    A: Fri Nov  8 13:26:02 2013 Z
    C: Fri Nov  8 13:26:02 2013 Z
    B: Fri Nov  8 13:26:02 2013 Z
  FN: IE9_NR_Setup.log  Parent Ref: 1966/1
  Namespace: 1
    M: Fri Nov  8 13:26:02 2013 Z
    A: Fri Nov  8 13:26:02 2013 Z
    C: Fri Nov  8 13:26:02 2013 Z
    B: Fri Nov  8 13:26:02 2013 Z

...and the resulting file:

22987      FILE Seq: 12   Links: 2   
[FILE],[BASE RECORD]
.\Users\IE9_NR_Setup.log
    M: Fri Nov  8 13:26:02 2013 Z
    A: Thu Jul 17 20:41:39 2014 Z
    C: Thu Jul 17 20:41:39 2014 Z
    B: Thu Jul 17 20:41:39 2014 Z
  FN: IE9_NR~1.LOG  Parent Ref: 486/1
  Namespace: 2
    M: Thu Jul 17 20:41:39 2014 Z
    A: Thu Jul 17 20:41:39 2014 Z
    C: Thu Jul 17 20:41:39 2014 Z
    B: Thu Jul 17 20:41:39 2014 Z
  FN: IE9_NR_Setup.log  Parent Ref: 486/1
  Namespace: 1
    M: Thu Jul 17 20:41:39 2014 Z
    A: Thu Jul 17 20:41:39 2014 Z
    C: Thu Jul 17 20:41:39 2014 Z
    B: Thu Jul 17 20:41:39 2014 Z

Now, one question you might have is, if I dragged-and-dropped the file, shouldn't the record show indications of the file having been accessed?  Well, we have to remember that as of Vista, the NtfsDisableLastAccessUpdate value is enabled by default, meaning that "normal" user actions won't cause the last accessed (A) time in the $STANDARD_INFORMATION attribute to be updated.
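
A quick way to confirm this on a live system is to check the value responsible for the behavior (data of 1 means last access updates are disabled):

C:\>reg query "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsDisableLastAccessUpdate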

From the USN change journal:

IE9_NR_Setup.log: Data_Extend,Data_Overwrite,File_Create  FileRef: 22987/12  ParentRef: 486/1
IE9_NR_Setup.log: File_Create  FileRef: 22987/12  ParentRef: 486/1
IE9_NR_Setup.log: Data_Extend,File_Create  FileRef: 22987/12  ParentRef: 486/1
CONSENT.EXE-531BD9EA.pf: Data_Extend,Data_Truncation,Close  FileRef: 1582/11  ParentRef: 59062/1
CONSENT.EXE-531BD9EA.pf: Data_Truncation  FileRef: 1582/11  ParentRef: 59062/1
CONSENT.EXE-531BD9EA.pf: Data_Extend,Data_Truncation  FileRef: 1582/11  ParentRef: 59062/1
IE9_NR_Setup.log: Data_Extend,Data_Overwrite,Close,File_Create  FileRef: 22987/12  ParentRef: 486/1

From the USN change journal, we see a reference to consent.exe being run; this is the dialog that pops up when you drag-and-drop a file between folders, asking if you want to copy or move the file, or cancel the operation.

Test 5 - Move a file to another location in the same volume
Moved C:\Windows\Logs\IE10_NR_setup.log to C:\Temp\IE10_NR_setup.log (drag-n-drop, via the Windows Explorer shell)

The "before" record:

16420      FILE Seq: 15   Links: 2   
[FILE],[BASE RECORD]
.\Windows\Logs\IE10_NR_Setup.log
    M: Fri Nov  8 14:24:59 2013 Z
    A: Fri Nov  8 14:24:59 2013 Z
    C: Fri Nov  8 14:24:59 2013 Z
    B: Fri Nov  8 14:24:59 2013 Z
  FN: IE10_N~1.LOG  Parent Ref: 1966/1
  Namespace: 2
    M: Fri Nov  8 14:24:59 2013 Z
    A: Fri Nov  8 14:24:59 2013 Z
    C: Fri Nov  8 14:24:59 2013 Z
    B: Fri Nov  8 14:24:59 2013 Z
  FN: IE10_NR_Setup.log  Parent Ref: 1966/1
  Namespace: 1
    M: Fri Nov  8 14:24:59 2013 Z
    A: Fri Nov  8 14:24:59 2013 Z
    C: Fri Nov  8 14:24:59 2013 Z
    B: Fri Nov  8 14:24:59 2013 Z

...and the "after" record:

16420      FILE Seq: 15   Links: 2   
[FILE],[BASE RECORD]
.\temp\IE10_NR_Setup.log
    M: Fri Nov  8 14:24:59 2013 Z
    A: Fri Nov  8 14:24:59 2013 Z
    C: Thu Jul 17 20:41:58 2014 Z
    B: Fri Nov  8 14:24:59 2013 Z
  FN: IE10_N~1.LOG  Parent Ref: 44311/7
  Namespace: 2
    M: Fri Nov  8 14:24:59 2013 Z
    A: Fri Nov  8 14:24:59 2013 Z
    C: Fri Nov  8 14:24:59 2013 Z
    B: Fri Nov  8 14:24:59 2013 Z
  FN: IE10_NR_Setup.log  Parent Ref: 44311/7
  Namespace: 1
    M: Fri Nov  8 14:24:59 2013 Z
    A: Fri Nov  8 14:24:59 2013 Z
    C: Fri Nov  8 14:24:59 2013 Z
    B: Fri Nov  8 14:24:59 2013 Z

Okay, the file was moved (copy + delete operations), but we might expect to see some changes in the time stamps...shouldn't we?  Well, in this case, we cannot tell if the $FILE_NAME attribute time stamps had been changed, because for this file, all of the time stamps, in all of the available attributes, were the same.  We do, however, see that the C (entry modified) time in the $STANDARD_INFORMATION attribute changed (as expected) and that the parent file reference number changed.

From the USN change journal:

IE10_NR_Setup.log: Security_Change  FileRef: 16420/15  ParentRef: 44311/7
IE10_NR_Setup.log: Rename_New_Name,Close  FileRef: 16420/15  ParentRef: 44311/7
IE10_NR_Setup.log: Rename_New_Name  FileRef: 16420/15  ParentRef: 44311/7
IE10_NR_Setup.log: Security_Change,Close  FileRef: 16420/15  ParentRef: 44311/7
IE10_NR_Setup.log: Rename_Old_Name  FileRef: 16420/15  ParentRef: 1966/1
CONSENT.EXE-531BD9EA.pf: Data_Extend,Data_Truncation,Close  FileRef: 1582/11  ParentRef: 59062/1
CONSENT.EXE-531BD9EA.pf: Data_Truncation  FileRef: 1582/11  ParentRef: 59062/1
CONSENT.EXE-531BD9EA.pf: Data_Extend,Data_Truncation  FileRef: 1582/11  ParentRef: 59062/1

Again, we see a reference to consent.exe having been launched.  I'm not entirely sure why the "Security_change" reason code in the USN change journal was generated for a move operation.

Both tests 4 and 5 validate what's described in MS KB article 299648, keeping in mind that the article only discusses time stamps from the $STANDARD_INFORMATION attribute.

Summary
Again, I ran these tests as a means for determining what different file operations look like in the MFT and USN change journal, and what the effects are on individual records.  This information can be helpful in a variety of investigation types, such as malware detection, and finding indications of historical activity and data (i.e., files that are no longer on the system).

Future Efforts
For the future, I'll need to look at copy and move file operations performed at the command line, using the copy and move commands, respectively.