Obligatory FAQ note: Sometimes I get asked questions, e.g. on my Discord/IRC, via e-mail, or during my livestreams. And sometimes I get asked the same question repeatedly. To save myself some time and be able to give the same answer instead of conflicting ones, I decided to write up selected answers in separate blog posts. Please remember that these answers aren't necessarily authoritative - they are limited by my experience, my knowledge, and my opinions on things. Do look in the comment section as well - a lot of smart people read my blog and might have a different, and likely better, answer to the same question. If you disagree or just have something to add - by all means, please do comment.
Q: I love low-level exploitation and exploit development! How can I make this my whole career?
A: So as to not bury the lede: the problem is that low-level exploitation is rarely needed in cybersecurity, and jobs where one works mostly on low-level exploitation are few and far between. Furthermore, these jobs are even rarer if one wants to stay away from the gray area of hacking and away from the black market. It's more common for low-level exploitation to be a small, occasional part of another role.
DISCLAIMER: The goal of this post is not to discourage anyone from pursuing a career in low-level hacking, nor do I think that it isn't an important area of cybersecurity. Rather, the goal is to give folks enough information to think things through and plan their approach instead of walking into this blindly.
Let's start with a bit of background...
While the starting point of the hacking / cybersecurity learning path changes from time to time, sooner or later it leads folks to try low-level security. This commonly starts with a bit of reverse code engineering – the kind done at the binary / assembly level – paired with learning to use a debugger and getting to know the CPU architecture, and leads down the path of learning low-level vulnerability classes and eventually their successful exploitation. While a simple 1988-style stack-based buffer overflow is easy to learn, things get more and more complex as one progresses toward the newest developments. This is due to the neverending arms race between the defensive and offensive sides, which results in new mitigations, as well as the inevitable new methods to bypass them or shifts in approach.
As such, low-level exploitation is currently one of the most technically complex and challenging areas of cybersecurity. And given the pleasure one feels after successfully getting a code execution exploit to work in a complicated and constrained scenario, it's also immensely gratifying and fulfilling.
Understandably, a question like "how do I make this my job?" is an obvious one.
Let's start by stating something obvious which, I believe, must still be stated (even though we don't like to hear it): companies prefer to pay for things which they believe are useful and/or beneficial to the company. Pursuing this point further, we can ask: how can low-level exploitation, exploit development, and low-level exploits be useful to a company? Let's go through these one by one.
Starting with the obvious – fully "weaponized" exploits used for their natural purpose, i.e. to hack into things.
So who actually "hacks into things"? We have a couple of groups (in order from less to more... ethically complex, if you will): pentesters, internal security teams, law enforcement, the military, and intelligence agencies.
Let's consider pentesters first. And we have to be honest here: in the great majority of pentests, folks use off-the-shelf exploits, ideally integrated into Metasploit or readily available on exploit-db.com. There is just no time during a pentest to spend on making highly complex low-level exploits that operate in modern, heavily-mitigated environments – exploits which also commonly linger in a prolonged "superposition" state of "it may or may not work – we'll see" – especially since these can take a week or more to make and require a lot of skill and obscure knowledge. Unless this truly was the goal of the pentest (not likely), no reasonable pentesting client would want to pay for it, given that the opportunity cost is NOT spending that time reviewing other parts of the infrastructure, and that the risk of a random attacker actually making this kind of exploit and using it to hack the company is minimal (we all know that realistically they will just phish a C-level exec).
"But wait!" you might say, "who is actually making these Metasploit / exploit-db off-the-shelf exploits then?" And while that's a good question, I think a better one is "when are folks making these exploits?". The common answer is: in their spare time, i.e. not at work. It's also pretty common for these integrated exploits to be the result of the original vulnerability researcher's proof-of-concept exploit being adapted, integrated, or otherwise "weaponized". Admittedly, this kind of adaptation is something that might be done during a pentest, as it's faster than developing everything from scratch – but it also takes out the most fun part of the process. For completeness, let's add that at times the source of the initial exploit could be different – e.g. an attack observed (captured) in the wild or a leaked batch of tools from a three-letter agency.
The bottom line for a job in pentesting, therefore, is that you don't really get to do much end-to-end low-level exploitation there. You might get to use a low-level exploit someone else made, and from time to time you might need to adapt or modify an exploit a bit to actually get it to work, but that's the extent of it.
Moving on to internal security teams. In the case of red teams it's pretty much the same story, with the major difference being that there might be more internal custom systems – written a decade or three ago in C or C++ – viewed as viable exercise vectors. But there of course wouldn't be any specific long-term focus on these kinds of systems, as only some exercises would touch them. Furthermore, sooner or later the conclusion reached will be something along the lines of "oh, we know it's a weak spot; the blue team has it on their todo list, so let's ignore it for now and focus on other things". So again, rare, occasional opportunities for low-level exploitation.
Beyond red-teaming or other-similar-color-teaming exercises, there is rarely any need for exploits. For example, from an infrastructure or application security team's perspective, the key task is finding weak spots and vulnerabilities. This of course includes low-level vulnerabilities (a ha!). However... no exploit is usually needed, PoC or otherwise – this is because the end goal isn't to hack this or that, but rather to secure this or that. So in an ideal world, a discovered vulnerability (or even a potential vulnerability) is filed as a security bug with the appropriate dev team, which then fixes it regardless of whether someone actually proved exploitability. There is just no need for a highly skilled person to spend a week proving exploitability when the fix is a one-liner, done, tested, and deployed in an hour of active work.
This of course points to the two cases where making an exploit might actually come into play. The first is when the dev team flat out refuses a fix because they don't believe it's a problem, don't think it's an important enough problem, or have more important things to do. This is a clear signal of deeper organizational problems in communication and/or inter-team cooperation (yes, soft skills are important in tech, and even more so in IT security). Regardless, at times the decision might be to prove the problem exists by proving exploitability (yes! we get to work on an exploit!), thereby showcasing what could potentially happen if the problem is not addressed. I think it's fair to say that most people who are a decade or two into this area of security have, or know of, a story like this. These however are pretty rare occurrences, and rarely is more than one demonstration ever needed.
The second case is when the root cause of a vulnerability is buried deep within the overall architecture, and changing the offending design would be both costly and time-consuming. Great examples are the Spectre/Meltdown vulnerabilities in x86 CPUs, or the Rowhammer DRAM problem. In such a case, having a few people spend a few weeks figuring out whether exploitation is possible and how to do it is actually the cheaper option, compared to jumping straight into changing everything. These situations however are rare, and limited to companies which actually work a lot with low-level products – maybe Microsoft, Intel, AMD, and a few others. And at times specialized vulnerability researchers are contracted to work on these instead – but we'll get to vulnerability research a bit later.
So again – yes, there is some low-level exploitation here, but it's rare, and there's hardly enough of it to build a full-time, long-term low-level exploitation career on.
Next on the list is law enforcement. And the short answer is: no, nobody makes exploits here. Law enforcement does buy certain solutions which under the hood use exploits – e.g. to hack into a suspect's smartphone – but that's it.
The situation in the military is a mix of the pentesting and law enforcement approaches, with the added twist that if your country is in an active conflict, you may be treated as a combatant, with all the dangers that come with that.
And then we get to intelligence and espionage agencies and their suppliers – and this is where a lot of actual low-level exploitation and end-to-end exploit development happens. At the same time, this is pretty much a legal gray area – what's legal or otherwise sanctioned by the employing country is hardly welcomed by the target countries – so from the get-go one has to make some moral and ethical decisions, and accept that they won't be allowed to talk much about their work, like, ever (i.e. until it's leaked).
So to summarize this section: the groups that use exploits fall into two categories – the "basically not making exploits" one and the "your country's three-letter agency" one.
As already signaled above, making exploits and using them aren't necessarily tied together, as both areas are pretty specialized and require certain unique skills and knowledge. So when is working on low-level exploits ever useful enough for a company to make it a job role? Here's a new list for us to go through: vulnerability research, mitigation development and testing, cybersecurity marketing, the 0-day industry, and bug bounties.
Vulnerability research is a bit of a loose term, since it's understood as – depending on the context – looking for vulnerabilities, looking for new ways to look for vulnerabilities, looking for new types of vulnerabilities, looking for new ways to bypass protections and mitigations, and either exploiting vulnerabilities or looking for new ways to exploit them. While admittedly there is some vulnerability research in e.g. pentesting or red teaming, it is often thought of as a separate area one can specialize in. And for one reason or another, when someone says "vulnerability research" they usually do mean "low-level".
Sounds great, right? "There must be a lot of low-level exploitation there! So, where can I get employed as a vulnerability researcher?"
The good news is that there are legit security research companies that employ vulnerability researchers and do not focus on selling 0-days (we'll get to those in a moment)! Such security research companies get contracted or called in when e.g. an OS developer or a CPU manufacturer wants to verify the design or implementation of a new security feature, or to assess how hard it is to exploit a vulnerability before committing to redesigning a large piece of a system to address it (in case an internal team doesn't handle that, as mentioned before). Furthermore, there are other companies which might have an external-facing vulnerability research team for other reasons – probably the best-known example being Google Project Zero, whose mission is to "make the discovery and exploitation of security vulnerabilities more difficult, and to significantly improve the safety and security of the Internet for everyone" (source).
The bad news is that these teams are really rare, rarely have openings, and have a very high bar to get hired into.
Since the example above talks about "external-facing" research teams, there must also exist internal-facing vulnerability research teams, right? Correct, though these are more restricted in terms of the targets one can choose. E.g. in a web-first company there might not be any low-level internal targets to choose from, or they might be rare and quickly claimed by other interested folks.
It must also be added that in some companies there are roles where vulnerability research is a small part of a larger whole. For example, if a company implements specialized compilers or works on an operating system, there should be someone on the team working on mitigations – both developing and testing them. And what better way to test a mitigation than to attempt to write an exploit which bypasses it?
Vulnerability research might also be a perk attached to another security role. For example, cybersecurity companies which offer various security services or products might welcome some time spent finding vulnerabilities and writing PoC exploits. Successful research gives the company a chance to get its name out there, demonstrate technical prowess, and do some good at the same time – this is why cybersecurity marketing is on the list above. Admittedly, a consequence of this is the controversial topic of named vulnerabilities with logos, but let's not get into that discussion here.
And then again we get into the gray area of three-letter agencies, or rather their suppliers/contractors. Probably the biggest job market for low-level exploitation lies in the 0-day industry and the exploit "factories" that sell their work to... only sanctioned allied governments and entities, of course. What must be said is that there is little transparency and little control for an exploit author over what said exploit is later used for, and by whom. This of course leads to some complex questions, on both the legal and the ethical side. And there may come a time when one learns that their work has been used by a drug cartel, or by this or that government to hunt down journalists (if this sounds grim, that's the intention – these are tough questions one should be aware of, especially when considering this kind of role).
The same considerations apply to indirect suppliers, e.g. folks working on 0-days as freelancers and selling them to brokers or to the highest bidder on the black market – there's the same amount of transparency and control over how an exploit is used in this case, that is to say: none at all. And it's also something one will think ten times about before putting it in their resume, as it might attract legal trouble.
Last on my list are bug bounties, and I've included them since some folks have actually made them their whole career. It has to be noted that the great majority of bug bounties sit pretty high on the abstraction stack (i.e. web), but at the same time low-level vulnerabilities with exploits at times carry pretty high rewards (e.g. for some time Google offered $133k for full Linux kernel exploits under certain additional conditions). At the same time, there are some issues with bug bounties as well, mostly related to the randomness of chaotic systems (i.e. our world). First of all, one has to actually find a vulnerability that is actually exploitable – and there might be weeks or months between good findings. But one also has to be the first person to submit it – receiving a "duplicate / reported before" response after three weeks of work can be heartbreaking. So while this is an option, it's not smooth sailing by any means.
The whole cybersecurity job market is huge and seems to still be growing. However, low-level exploitation is a very, very small niche within it. It's cool and awesome, but sadly rarely required. While jobs focusing fully on low-level exploitation do exist, there are very few of them, and even fewer if one doesn't want to answer any hard ethical questions. Furthermore, due to the scarcity of jobs in this segment, the hiring bar is pretty high. It's a bit easier to find a job where low-level exploitation is a small part of the role – there one gets to work on low-level stuff from time to time, even if not as often as one would like. And then we have the whole bug bounty thing, which is where high skill requirements meet the lottery.
Furthermore, the reason there are so many exploits readily available is mostly hobbyists working on them in their spare time. And it's a similar story with a good chunk of vulnerability – or, more generally, security – research, which is done after work, pro bono, by hackers and open-source enthusiasts. This work is important for both the defensive community and pentesters, but it's hardly a directly paying job (there are resume-level benefits, of course).
So should you pursue your dream of being a full-time low-level vulnerability researcher and exploit dev? And how should you approach it? Well, these are questions for you to answer yourself, but I do hope you're now a bit better equipped to make that choice.
--
Gynvael Coldwind