Last month, two of our engineers attended the 37th Chaos Communication Congress (37C3) in Hamburg, joining thousands of hackers who gather each year to exchange the latest research and achievements in technology and security. Unlike other tech conferences, this annual gathering focuses on the interaction of technology and society, covering such topics as politics, entertainment, art, sustainability—and, most importantly, security. At the first Congress in the 80s, hackers showcased weaknesses in banking applications over the German BTX system; this year’s theme, “Unlocked,” highlighted breaking technological barriers and exploring new frontiers in digital rights and privacy.
In this blog post, we review our contributions to 37C3, spanning binary exploitation, binary analysis, and fuzzing, before highlighting several talks we attended and recommend watching.
PWNing meetups
Trail of Bits engineer Dominik Czarnota self-organized two sessions about PWNing, also known as binary exploitation. These meetups showcased Pwndbg and Pwntools, popular tools used during CTF competitions, reverse engineering, and vulnerability research work.
At the first session, Dominik presented Pwndbg, a plugin for GDB that enhances the debugging of low-level code by displaying useful context on each program stop. This context includes the state of the debugged program (its registers, executable code, and stack memory) and dereferenced pointers, which help the user understand the program’s behavior. The presentation showed some of Pwndbg’s features and commands, such as listing memory mappings (vmmap), displaying process information (procinfo), searching memory (search), finding pointers to specific memory mappings (p2p), identifying stack canary values (canary), and controlling process execution (nextsyscall, stepuntilasm, etc.). The presentation concluded with the release of Pwndbg cheatsheets and details on upcoming features, such as tracking GOT function executions and glibc heap use-after-free analysis. These features have been developed as part of Trail of Bits’s winternship program, now in its thirteenth year of welcoming interns who spend time working and doing research on the industry’s most challenging problems.
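Because Pwndbg registers its commands as ordinary GDB commands, they can also be driven from GDB’s Python API. The sketch below is a minimal example, assuming GDB is started with Pwndbg already loaded (e.g., via your .gdbinit) and a hypothetical ./target binary:

```python
# Run inside GDB with Pwndbg loaded, e.g.: gdb -x inspect.py ./target
import gdb  # GDB's built-in Python API, only available inside GDB

gdb.execute("break main")
gdb.execute("run")

# Invoke a few of the Pwndbg commands mentioned above
gdb.execute("vmmap")     # list memory mappings
gdb.execute("procinfo")  # display process information
gdb.execute("canary")    # show the stack canary value
```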
At the second session, Arusekk and Peace-Maker showcased advanced features of Pwntools, a Swiss Army knife Python library for exploit development. They demonstrated expert methods for receiving and sending data (e.g., io.recvregex or io.recvpred); command-line tricks for running exploit scripts (handy environment variables and arguments like DEBUG, NOASLR, or LOG_FILE that set certain configuration options); and other neat features, such as the libcdb command-line tool, the shellcraft module, and the ROP (return-oriented programming) helper. For those who missed it, the slides can be found here.
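For readers who have not used Pwntools before, here is a minimal sketch of how a few of these pieces fit together. The ./challenge binary, the GOT leak, and the 64-byte offset are hypothetical, and whether the ROP chain resolves depends on the gadgets available in the target:

```python
from pwn import *  # Pwntools

# Hypothetical target; context.binary sets the architecture for asm() and ROP()
elf = context.binary = ELF("./challenge")

# DEBUG, NOASLR, and LOG_FILE can be passed on the command line, e.g.:
#   python exploit.py DEBUG NOASLR LOG_FILE=run.log
io = process(elf.path)

# One of the demonstrated I/O helpers: read until a regex matches
io.recvregex(rb"Your name\?")

# The ROP helper: leak a GOT entry (assumes the binary imports puts and
# contains the necessary gadgets)
rop = ROP(elf)
rop.call("puts", [elf.got["puts"]])

# The shellcraft module: assemble /bin/sh shellcode (not sent in this sketch)
shellcode = asm(shellcraft.sh())

io.sendline(b"A" * 64 + rop.chain())  # the 64-byte offset is made up
io.interactive()
```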
Next-generation fuzzing
In Fuzz Everything, Everywhere, All at Once, the AFL++ and LibAFL team showcased new features in the LibAFL fuzzer. They presented QEMU-based instrumentation to fuzz binary-only targets and used QEMU hooks to enable sanitizers that help find bugs. In addition to QASan, the team’s QEMU-based AddressSanitizer implementation, they developed an injection sanitizer that goes beyond finding memory corruption bugs. Using QEMU hooks, it can detect SQL, LDAP, XSS, or OS command injections based on rules defined in a TOML configuration file. Examination of the config file suggests it should be easily extensible to other injection classes; we just need to know which functions to hook and which payloads to look for.
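To illustrate the underlying idea, here is our own conceptual sketch in Python; it is not LibAFL’s implementation or its configuration schema. The detection boils down to hooking sensitive functions and checking whether a marker payload planted by the fuzzer reaches them unmodified:

```python
# Conceptual sketch of token-based injection detection (hypothetical names;
# not LibAFL's actual code or TOML format). The fuzzer plants marker payloads
# in its inputs, and hooks on sensitive functions report a finding whenever a
# marker reaches an argument unmodified.

# Hypothetical rules: which functions to hook and which payloads to look for
RULES = {
    "sqlite3_exec": ["'\"--", "1 OR 1=1"],   # SQL injection markers
    "system": ["';id;'", "$(id)"],           # OS command injection markers
    "ldap_search": ["*)(objectClass=*"],     # LDAP injection markers
}


def on_hooked_call(function: str, argument: str) -> bool:
    """Called by a (hypothetical) hook when a sensitive function is reached."""
    for marker in RULES.get(function, []):
        if marker in argument:
            print(f"possible injection: {marker!r} reached {function}()")
            return True
    return False


# Example: fuzzer-controlled data flows into a SQL query without sanitization
on_hooked_call("sqlite3_exec", "SELECT * FROM users WHERE name = '1 OR 1=1'")
```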
Although memory corruption bugs will decline with the deployment of memory-safe languages like Rust, fuzzing will continue to play an important role in uncovering other bug classes like injections or logic bugs, so it’s great to see new tools created to detect them.
This presentation’s Q&A session reminded us that OSS-Fuzz already has a SystemSanitizer that leverages the ptrace syscall and has previously helped find a command injection vulnerability.
Trail of Bits has previously used LibAFL in our collaboration with Inria on tlspuffin, an academic research project aimed at fuzzing various TLS implementations; this work uncovered several bugs in wolfSSL.
Side channels everywhere
In a talk titled Full AACSess: Exposing and exploiting AACSv2 UHD DRM for your viewing pleasure, Adam Batori presented a real-world side-channel attack on Intel SGX. Since Trail of Bits frequently audits projects that use trusted execution environments like Intel SGX (e.g., MobileCoin), this presentation was particularly intriguing to us.
After providing an overview of the history of DRM for physical media, Adam went into detail on how the team of researchers behind sgx.fail extracted cryptographic key material from the SGX enclave to break DRM on UHD Blu-ray disks, proving the feasibility of real-world side-channel attacks on secure enclaves. Along the way, he discussed many of SGX’s technological features.
The work and the talk prompted discussion about Intel’s decision to discontinue SGX on consumer hardware. Given the high risk of side channels on low-cost consumer devices, we believe that using Intel SGX for DRM purposes is dead on arrival. Side-channel attacks are just one example of the often-overlooked challenges that accompany the secure use of enclaves to protect data.
New challenges: Reverse-engineering Rust
Trail of Bits engineers frequently audit software written in Rust. In Rust Binary Analysis, Feature by Feature, Ben Herzog discussed the compilation output of the Rust compiler. Understanding how Rust builds binaries is important, for example, to optimize Rust programs or to understand the interaction between safe and unsafe Rust code. The talk focused on the debug compilation mode to showcase how the Rust compiler generates code for iterating over ranges, how it implements iterators, and how it optimizes the layout of Rust enums. The presenter also noted that strings in Rust are not null-terminated, which can cause some reverse engineering tools, like Ghidra, to produce hard-to-understand output.
The speaker posed four questions that should be answered when encountering trait-related function calls:
- What is the name of the function being called (e.g., next)?
- On what type is the function defined (e.g., Values<String, Person>)?
- Which type is returned from the function (e.g., Option)?
- What trait is the function part of (e.g., Iterator<Item = Person>)?
More details can be found in the blog post by Ben Herzog.
Proprietary cryptography is considered harmful
Under the name TETRA:BURST, researchers disclosed multiple vulnerabilities in the TETRA radio protocol, which is used by government agencies, police, militaries, and critical infrastructure operators across Europe and beyond.
It is striking how proprietary cryptography is still the default in some industries. Hiding the specification from security researchers by requiring them to sign an NDA greatly limits a system’s reviewability.
Due to export controls, several classes of algorithms exist in TETRA. One of the older ones, TEA1, offers an effective key length of only 32 bits. Even though the specifiers no longer recommend it, it is still actively deployed in the field, which is especially problematic given that these weak algorithms are counted on to protect critical infrastructure.
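As a rough back-of-the-envelope calculation (the key-testing rate below is our assumption, not a figure from the talk), a 32-bit key space can be exhausted almost immediately:

```python
# How long does exhausting a 32-bit key space take?
keyspace = 2 ** 32            # ~4.3 billion candidate keys
keys_per_second = 10 ** 8     # assumed rate on modest commodity hardware
print(f"{keyspace / keys_per_second:.0f} seconds")  # ~43 seconds
```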
The researchers demonstrated that the vulnerabilities are exploitable in practice using radio hardware acquired from online resellers.
Are you sure you own your train? Do you own your car?
In Breaking “DRM” in Polish trains, researchers reported the challenges they encountered after they were recruited by an independent train repair company to determine why some trains no longer operated after being serviced.
Using reverse engineering, the researchers uncovered several anti-features that made the trains stop working in various situations (e.g., after they had not moved for a certain time or when they were located at the GPS coordinates of competitors’ service shops). The talk covers interesting technical details about train software and how the researchers reverse-engineered the firmware, and it questions the extent to which users should have control over the vehicles and devices they own.
What can we learn from hackers as developers and auditors?
Hackers possess a unique problem-solving mindset, showing developers and auditors the importance of creative and unconventional thinking in cybersecurity. The event highlighted the necessity of securing systems correctly, starting with a well-understood threat model. Incorrect or proprietary approaches that rely on obfuscation do not adequately protect the end product. Controls such as hiding cryptographic primitives behind an NDA only obscure how a protocol works; they do not make the system more secure, and they make security researchers’ jobs harder.
Emphasizing continuous learning, the congress demonstrated the ever-evolving nature of cybersecurity, urging professionals to stay abreast of the latest threats and technologies. Ethical considerations were a focal point, stressing the responsibility of developers and auditors to respect user privacy and data security in their work.
The collaborative spirit of the hacker community, as seen at 37C3, serves as a model for open communication and mutual learning within the tech industry. At Trail of Bits, we are committed to demonstrating these values by sharing knowledge publicly through publishing blog posts like this one, resources like the Testing Handbook that help developers secure their code, and documentation about our research into zero-knowledge proofs.
Closing words
We highly recommend attending the Congress in person, even though it is unfortunately scheduled between Christmas and New Year’s and most talks are live-streamed and available online. The congress includes many self-organized sessions, workshops, and assemblies, making it especially valuable for security researchers. We had initially planned to disclose LeftoverLocals, a vulnerability that affects notable GPU vendors such as AMD, Qualcomm, and Apple, at 37C3, but we pushed back our release date to give the vendors more time to fix the bug. The disclosure was finally published on January 16; we may report our experience finding and disclosing the bug at next year’s 38C3!