May 20, 2022 in Archaeology, Batch Analysis, Clustering, Forensic Analysis
Long before endpoint event logging became the norm, it was incredibly difficult to collect information about popular processes, services, paths, CLSIDs, etc. Antivirus companies, and later sandbox companies, had tons of such metadata, but an average Joe could only dream about it.
This is where HijackThis came into play. At a certain point in history, lots of people were using it and posting its logs on forums for hobbyist malware analysts to review. And since a HijackThis log has a very specific ‘look and feel’, it was pretty easy to parse it. And to find it.
In order to collect as many logs as possible, I wrote a simple crawler that would google around for very specific keywords, collect the results, then visit the pages, download them to a file, and parse the result (a rough sketch of such a crawler follows the list below). Each session would end up with a file split into sections like this:
[Processes - Full Path names]
[Processes - Names]
[Directories]
[All URLs]
[Registry - Full Path names]
[Registry - Names]
[Registry - Values]
[BAD URLs]
[CLSIDs]
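The original crawler is long gone, but here is a minimal sketch of the idea, under my own assumptions: the page URLs come from a prior keyword search and sit in a urls.txt file, and the regexes below cover only a tiny fraction of the artifact types a real HJT parser would extract.

```python
# Minimal sketch of a HijackThis log scraper/parser (illustrative only).
# Assumptions: urls.txt holds one candidate page URL per line (collected
# from a keyword search); the regexes are simplified examples.
import re
import urllib.request

HJT_MARKER = re.compile(r"Logfile of (Trend Micro )?HijackThis", re.I)
CLSID_RE   = re.compile(r"\{[0-9A-F]{8}(?:-[0-9A-F]{4}){3}-[0-9A-F]{12}\}", re.I)
PROC_RE    = re.compile(r"^[A-Z]:\\\S.*\.exe$", re.I | re.M)

clsids, procs = set(), set()

with open("urls.txt") as f:
    for url in (line.strip() for line in f if line.strip()):
        try:
            page = urllib.request.urlopen(url, timeout=10).read()
            text = page.decode("utf-8", "replace")
        except OSError:
            continue                  # dead link, timeout, etc.
        if not HJT_MARKER.search(text):
            continue                  # page does not embed an HJT log
        clsids.update(CLSID_RE.findall(text))
        procs.update(PROC_RE.findall(text))

# Dump each artifact type into its own section, mirroring the layout above
with open("session.txt", "w") as out:
    out.write("[Processes - Full Path names]\n" + "\n".join(sorted(procs)) + "\n")
    out.write("[CLSIDs]\n" + "\n".join(sorted(clsids)) + "\n")
```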
There are plenty of uses for the collected data. One of the handy ones back then was a comprehensive list of CLSIDs: knowing these, you could incorporate them into a simple binary/string signature and search for them inside analyzed samples. If a given, specific CLSID was found, it was quite easy to identify the sample's association or, at least, some of its features. Another interesting list of artifacts is rundll32.exe invocations. There are many legitimate ones and it's nice to be able to query them all and put them together on a ‘clean’ list. Of course, URLs are always a good source for downloads, and directories, paths, registry entries, and process/service lists are handy for generating statistics on which paths are normal and which are not: a list of ‘known clean’ that could be a foundation for a more advanced version of Least Frequency of Occurrence (LFO) analysis. Even browsing file paths is an interesting exercise; for example, it allowed me to collect information about many possible file names of interest (f.ex. ones that could be used in anti-* tricks).
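A minimal sketch of what such a CLSID sweep could look like. The file name clsids.txt and the choice to check both ASCII and UTF-16LE (wide) byte forms are my assumptions, not the original tooling:

```python
# Minimal sketch: flag samples that embed any 'known' CLSID (illustrative).
# Assumption: clsids.txt holds one CLSID per line, e.g. {00021401-0000-...}
import sys
from pathlib import Path

known = [c.strip() for c in Path("clsids.txt").read_text().splitlines() if c.strip()]

def clsid_hits(sample: Path) -> list[str]:
    data = sample.read_bytes()
    hits = []
    for clsid in known:
        # CLSID strings show up in binaries as either ASCII or UTF-16LE
        if clsid.encode("ascii") in data or clsid.encode("utf-16-le") in data:
            hits.append(clsid)
    return hits

# Walk a directory of samples passed on the command line
for path in Path(sys.argv[1]).rglob("*"):
    if path.is_file():
        for clsid in clsid_hits(path):
            print(f"{path}: {clsid}")
```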
I had a lot of ideas around that time for incorporating this research into my forensic analysis workflow. For instance, if we know certain paths are very prevalent, it makes sense to exclude them from analysis. The same goes for other artifacts. A twin idea from around that time was filelighting: it's common for software directories to include files that are referenced in at least one of the other files. That is, if I find a file foo.bar inside a program directory, there is a high probability that at least one of the other files, be it executables or configuration files, will reference that foo.bar file! It actually works quite well. And the main deliverable of this idea was that if we can find orphaned files, they are suspicious. And, from a different angle, if we know which clusters belong to which software package, we could use that tree of self-referencing file names to eliminate them from analysis.
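A minimal sketch of the filelighting idea, under my own assumptions about the details (case-insensitive matching, checking both ASCII and UTF-16LE byte forms, and treating any file whose name appears in no sibling as an orphan):

```python
# Minimal sketch of 'filelighting': flag files in a directory that no
# sibling file references by name (illustrative, not the original tool).
from pathlib import Path

def find_orphans(directory: str) -> list[Path]:
    files = [p for p in Path(directory).iterdir() if p.is_file()]
    # Lowercased raw bytes of every file, so matching is case-insensitive
    blobs = {p: p.read_bytes().lower() for p in files}
    orphans = []
    for candidate in files:
        name_ascii = candidate.name.lower().encode("ascii", "ignore")
        name_wide  = candidate.name.lower().encode("utf-16-le")
        referenced = any(
            name_ascii in blob or name_wide in blob
            for other, blob in blobs.items()
            if other != candidate
        )
        if not referenced:
            orphans.append(candidate)
    return orphans

# Hypothetical program directory; files nothing else references are suspicious
for orphan in find_orphans(r"C:\Program Files\SomeApp"):
    print("orphan:", orphan)
```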
Times have changed, of course, and while these ideas may still have some value, the reality is that we live in a completely different world today.
In the end, I cannot say the database helped me a lot, but it was an interesting exercise, and since the data is quite obsolete by now, I decided to drop its content online. It's not a very clean data set, mind you. You will find parsing errors, some HJT logs were truncated, some contained non-English characters, etc. Still, maybe you will find some use for it. Good luck!
Download it here.