Whether businesses thrive or shutter depends largely on trust. This is as true of restaurants and fursuit makers as it is of password managers and private messaging apps.
Trust is hard to gain, but easy to lose. Mathematics would therefore indicate that the products and services that the most nervous people trust would be exceptionally rare; perhaps nonexistent.
However, trust is also a very social phenomenon. Society depends on trust relationships to remain functional. At some point, you have to stop scrutinizing in order to get anything done, and that means saying, e.g., “Okay, I’ll trust AMD not to ship a targeted backdoor to the CPU going into the computer I use to draw furry art on.”
Where the trust dynamics get interesting is when you introduce security researchers into the mix. Security researchers include hackers, code-breakers, hobbyists, engineers, and many other types of professionals that generally have one common goal in mind:
To identify vulnerabilities that could hurt users.
Security researchers generally have no ethical obligation to protect the vendor or the vendor’s reputation; only the vendor’s users and customers. In that light, a naïve person might conclude that security researchers are a wild card.
Most security researchers, however, value their professional relationships and the community they work in, and generally don’t want unnecessary conflicts in their life.
Why am I stating all this? Because I firmly believe that the best lens through which to judge a company’s culture is to examine how they respond to security researchers.
I’d like to talk about some of my experiences with this topic, as well as recent events in the security community.
In 2022, I decided to assess the security of several password managers in order to make a recommendation for my friends’ new business. Two of these password managers had bounty programs on Bugcrowd (which, at the time, was not prepared to handle cryptographic bug reports).
Before I dive into these details, it’s probably worth reading my recent post about password-based cryptography.
Note: I’ve received permission from 1Password to share the details of one of the issues I disclosed to them.
1Password has several implementations of the same protocols, so I decided to look at their Android app.
My initial process for studying Android apps is pretty boring: extract the APK (it’s just a ZIP archive) and decompile the `classes.dex`, `classes2.dex`, etc. files.

This isn’t sufficient for confirming vulnerabilities or developing a proof-of-concept exploit, but it’s enough to identify many cryptographic weaknesses (assuming you’re familiar enough with cryptographic software implementations to find weaknesses just from studying the source code).
With 1Password, my target of interest was their SRP implementation which, when decompiled in Luyten, had a method that looked like this at the time of my analysis:
```java
public BigInteger generateVerifier(
    final byte[] array,
    final String s,
    final int n,
    final String s2,
    String lowerCase
) throws Exception {
    lowerCase = lowerCase.toLowerCase(Locale.US);
    return this.g.modPow(
        SRP6Util.computeXForPBES2g_HS256WithMethod(
            "SRPg-496", array, s, n, new AccountKey(s2), lowerCase
        ),
        this.N
    );
}
```
Notice the string `SRPg-496`. Later in the call-chain, it invokes a method to fetch SRP constants based on this parameter, which is implemented like so:
```java
private void setSrpParams() {
    if (this.mMethod.endsWith("2048")) {
        this.N = SRPConstants.N_2048;
        this.g = SRPConstants.g_2048;
        this.expSize = 32;
    } else if (this.mMethod.endsWith("4096")) {
        this.N = SRPConstants.N_4096;
        this.g = SRPConstants.g_4096;
        this.expSize = 38;
    } else if (this.mMethod.endsWith("8192")) {
        this.N = SRPConstants.N_8192;
        this.g = SRPConstants.g_8192;
        this.expSize = 48;
    } else {
        this.N = SRPConstants.N_1024;
        this.g = SRPConstants.g_1024;
        this.expSize = 32;
    }
}
```
The string being passed in ends with `496`, not `4096`. This means their SRP code was falling through to the default case, which uses 1024-bit parameters rather than the intended 4096-bit group.
All because of what appeared to be a typo!
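The failure mode is easy to demonstrate in isolation. Here’s a minimal sketch (the `bitStrength` helper is my own stand-in for `setSrpParams`, not 1Password’s actual code):

```java
public class SrpParamFallback {
    // Stand-in for setSrpParams(): suffix-match the method name,
    // silently defaulting to the weakest group when nothing matches.
    static int bitStrength(String method) {
        if (method.endsWith("2048")) return 2048;
        if (method.endsWith("4096")) return 4096;
        if (method.endsWith("8192")) return 8192;
        return 1024; // the silent fallback
    }

    public static void main(String[] args) {
        System.out.println(bitStrength("SRPg-4096")); // 4096, as intended
        System.out.println(bitStrength("SRPg-496"));  // 1024: "496" is not "4096"
    }
}
```

A lookup that threw an exception on an unrecognized method name, instead of defaulting, would have turned this typo into a loud test failure rather than a silent downgrade.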
I wrote a quick report containing this observation (and explaining my understanding of the real-world risk). Within two hours, they responded:
That’s a great observation! I did some quick digging into this code, and luckily (for us) this is far more of an innocent issue than it may appear at first.
In SRP, the clients don’t normally compute the verifier, the server does. That’s also the case here: the Android app in normal use doesn’t generate an SRP verifier at all. It turns out that the class you found here was used for testing purposes, and is in fact not called from any of our code at all anymore.
It’s still poor hygiene on our part that that class is even there if it isn’t used at all, so we’ve started tracking an issue internally to get class out of there. Thanks for digging into our code to find this though!
Rick from 1Password
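For readers unfamiliar with SRP: the verifier mentioned in Rick’s reply is a single modular exponentiation, v = g^x mod N, where x is derived from the password. A toy sketch with `java.math.BigInteger` (tiny illustrative parameters; real SRP uses standardized 2048/4096/8192-bit groups):

```java
import java.math.BigInteger;

public class SrpVerifierSketch {
    // v = g^x mod N : the entire verifier computation is one modPow call.
    static BigInteger verifier(BigInteger g, BigInteger x, BigInteger N) {
        return g.modPow(x, N);
    }

    public static void main(String[] args) {
        // Toy parameters for illustration only; nothing like real SRP constants.
        BigInteger N = BigInteger.valueOf(23); // tiny "prime modulus"
        BigInteger g = BigInteger.valueOf(5);  // generator
        BigInteger x = BigInteger.valueOf(6);  // would be password-derived in real SRP
        System.out.println(verifier(g, x, N)); // 5^6 mod 23 = 8
    }
}
```

The group size matters because an attacker who obtains v can attempt a discrete-log (or offline dictionary) attack in that group; a 1024-bit modulus leaves a much thinner margin than a 4096-bit one.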
I went on to discover a few more bugs in other 1Password codebases, but I haven’t been given permission to disclose those yet.
However, I can say they consistently had quick, professional, informed, and friendly responses to my reports. They were a delight to report security issues to.
Note: I cannot talk about what I found and reported to LastPass, because they haven’t given me permission to disclose it.
Before I describe my experience, I need to set the stage. My LastPass fun took place around the same time as the infamous Bugcrowd incident with JSBN.
The root cause of that whole debacle is twofold:
When I reported my finding to LastPass, it was (like many other reports) immediately closed by Bugcrowd triage.
At this point, I was like 4 or 5 reports into this pattern, so I knew it needed to be escalated to the security team. For example, this was necessary to get Square to include a quick fix in their KeyWhiz open source project for low-hanging fruit. If I hadn’t escalated, they would’ve missed my report.
In my professional life, I’ve run bug bounty programs before. I’m very sympathetic to the frustrations caused by a lot of low-effort, low-quality bug reports.
If you’re not familiar with how bad it is, my favorite public example is this report.
However, when your company’s only pipeline for reporting security issues is a bug bounty program, you’re kind of forced to go through them. Even if you’re not seeking a payment.
My first step in escalating was checking for a security.txt file. No dice.
There was no clear security officer or contact information that I could discern from my social network either, so I chose the path of last resort: I contacted their support team.
I don’t recall exactly what I wrote (it was nearly a year ago), but it was probably something to the effect of:
Hello,
I attempted to report a security issue through your Bugcrowd program. It was closed erroneously because of the Bugcrowd triage team’s misunderstanding.
At your earliest convenience, please ask your security team to look at Bugcrowd submission ID baf594525cb659a0f20c732d1edcbd428fc4d57e52b8321f37b1f51c9b194170.

Thank you,
Soatok
After a while, I received this email.
Okay, maybe they misunderstood. I replied back, in an attempt to clarify the situation and precisely what I need from them.
I reported a security vulnerability to LastPass’s bug bounty program.
Please ask your security team to look at the linked bug bounty ticket. The triage team shat the bed.
This one is in my inbox!
Okay, but that wasn’t going to help here, was it?
The report was erroneously closed, and is therefore at high risk of being missed by their security team.
I replied, again, specifying what I needed them to do:
Allow me to explain carefully.
I followed the steps in your security page, yes. I reported the issues I found to Bugcrowd.
However, Bugcrowd employees take it upon themselves to triage issues on behalf of their customers.
In this case, the Bugcrowd employees shat the proverbial bed and incorrectly dismissed an issue I reported. Because the issue was closed as Not Applicable (erroneously), it’s unlikely that your security team will notice it without escalating some awareness of this triage error to them.
So please pass that onto your security team so they’re aware to look in the Rejected tab.
This is a simple escalation request. The support team is not obligated to honor it, of course, but it’s probably a good idea to pass it on to make sure the security team is at least aware of the situation so they can take whatever action they consider appropriate (if any).
Was that the end of the saga?
Of course not!
At this point, I’m not sure if I’m arguing with a Markov bot or a real person. We go back and forth a few more times, but they keep responding with various non sequiturs.
Finally, I’m beyond frustrated, so I send a heated response.
This is the order of operations so far:
- I identified a cryptographic side-channel in the LastPass software.
- I reported the issue to Bugcrowd with a detailed analysis and a patch for making the function constant-time like it was intended to be.
- Several days after I reported it, a Bugcrowd employee stupidly went “no PoC exploit? not applicable” and closed it erroneously.
- I’ve contacted GoTo support with one goal in mind: To ensure your security team actually sees the report in spite of Bugcrowd closing it.
I don’t care about whether or not your team overrides their decision. I just have an ethical obligation to disclose security issues.
If this isn’t resolved by 5:00 PM Eastern today, I’m going to say “Fuck it” and go Full Disclosure.
Escalate. Tell me when you’ve escalated.
I don’t need your help beyond that.
I don’t like being this mean, but sometimes it’s necessary.
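For context, the shape of the patch I’d submitted is generic enough to sketch without revealing anything LastPass-specific: replace an early-exit comparison with one that always inspects every byte. This is purely illustrative, not their actual code:

```java
import java.security.MessageDigest;

public class CtCompare {
    // Early-exit comparison: leaks, via timing, how many leading
    // bytes matched -- the classic side-channel shape.
    static boolean leakyEquals(byte[] a, byte[] b) {
        if (a.length != b.length) return false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] != b[i]) return false; // bails at first mismatch
        }
        return true;
    }

    // Constant-time comparison: the JDK ships one in MessageDigest.
    static boolean constantTimeEquals(byte[] a, byte[] b) {
        return MessageDigest.isEqual(a, b);
    }

    public static void main(String[] args) {
        byte[] x = {1, 2, 3};
        byte[] y = {1, 2, 4};
        System.out.println(leakyEquals(x, y));        // false
        System.out.println(constantTimeEquals(x, y)); // false
        System.out.println(constantTimeEquals(x, x)); // true
    }
}
```

The early-exit version does less work the earlier the mismatch occurs, which is measurable; `MessageDigest.isEqual` does the same amount of work regardless.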
Their direct response to this pointed email?
Really, LastPass? A phishing email?
Okay, fine. At this point they escalated. Deep breaths.
Thank you for escalating.
I don’t understand your question. I reverse engineered your software to study how it works, found a vulnerability, and then reported it.
You’ll never believe what comes next.
A few weeks after they closed my report, news of the LastPass breach spread rapidly.
In 2021, I wrote about some protocol vulnerabilities in Threema that I identified pretty much instantly when I glanced at their code.
In response to me signaling awareness of weaknesses in their codebase, the Threema social media team decided to invoke one of the oldest gaslighting techniques in the security industry: “responsible disclosure”.
In the meantime, several graduate students at ETH Zurich (no relation to the Ethereum cryptocurrency) had researched Threema in depth and found 7 additional issues that they disclosed to the Threema developers.
These additional findings were severe enough for Threema to change their underlying protocol in order to address them. That’s a fucking testament to the efficacy of their research.
All was going well until, in January 2023 (earlier this month), Threema decided to punch down dismissively at the researchers’ work on Twitter.
It’s difficult to overstate how severely this burned any potential trust for Threema by my friends in the cryptography community.
Congrats on your Lamest Vendor Response Pwnie Award at Black Hat USA this year, I guess?
Speaking of hacker summer camp, I briefly met two of the cryptographers that studied Threema at DEFCON last year.
Small world!
If you do not respond well to security researchers (e.g., LastPass and Threema), I believe you do not deserve users’ trust.
If your business is part of the security industry, and therefore depends on users’ trust to survive, I would rather see your company sink than see my peers suffer from your malice or incompetence. Password managers and encrypted messaging apps belong to this category.
Put simply:
Don’t be a LastPass; be a 1Password.
You don’t have to kiss anyone’s ass. That’s not what I’m saying. I have reported many issues that looked dangerous but later turned out to be less severe than I suspected. You’re always right to push back if you believe an issue is invalid.
But don’t just reply with randomly selected troubleshooting boilerplate when a security researcher tries to escalate. That’s frustrating and frankly dehumanizing (even by furry standards), and ultimately helps no one.
The art on this blog post comes from my Telegram sticker pack, which was created by CMYKat.
Normally I put the credits inline, but some jackass decided to go around replying to months-old threads to share a custom uBlock Origin filter that removes the furry art from my blog, so I decided to be spiteful and break their filter for this post.
I also included images that are important for context, so removing all images just makes the post less useful.
If you or anyone wishes there was less furry content on my blog, remember that this is a furry blog first and a technical blog only when I want to talk about technical topics.
And ultimately…
The above comic was created by @loviesophiee; inspired by this comic from 2014.
If you don’t like the furry illustrations on my blog, feel free to not read it. I recommend DNS blackholing the `soatok.blog` domain so you never accidentally click on one of my articles again.