If you’re new to this blog, you might not be aware of my efforts to develop end-to-end encryption for ActivityPub-based software. That work is worth knowing about before you continue with this post.
To be very, very clear, this is work I’m doing independent of the W3C or any other standards organization and/or funding source (and they have their own ideas about how to approach it).
Really, I’m doing my own thing and releasing my designs under a public domain-equivalent license so anyone (including the W3C grant awardees) can pick it up and use it, if they see fit.
But the work I’m doing has no official standing and is not representative of anyone (except maybe a lot of other furries interested in technology). They have, emphatically, never endorsed anything I’m doing. I have not talked with any of them about my ideas, nor has my name come up in any of their meeting notes.
My background is in applied cryptography and software security assessments, so I have strong opinions about how such software should be developed.
I’m being very up-front about this because I don’t want anyone to mistake my ideas for anything “official”.
My end goal is pretty straightforward.
Before Musk took it over, Twitter was wonderful for queer people. I’ve even heard it described as the most successful dating platform for the LGBTQIA+ community.
These days, it’s full of Nazis and people who think the ideal version of “free speech” means not being allowed to say the word “cisgender.” But I repeat myself.
The typical threat model for Twitter was: You have to trust the person you’re talking with, and the Twitter corporation, to keep your conversations (or nudes, if we’re being frank about it) private.
With the Fediverse, things are a little more complicated. Instance operators also have access to the plaintext versions of any Direct Messages between you and other participants.
And maybe you trust your instance operator… but do you trust your friends’? And do they trust yours?
If implemented securely, end-to-end encryption saves you from having to care about this injection of additional threat actors to consider.
If not implemented securely, it’s little more than security theater and should be ridiculed loudly.
So it’s natural and obvious for a person with my particular interests and skills to want to solve this problem.
When I started this project, I separated the end goal into four components:

1. Client-side secret key management
2. Public-key infrastructure
3. Key agreement (including for group messaging)
4. Bulk encryption
A lot of hobbyist projects over-index on the fourth component, rather than the actual hard problems. This is why so many doomed projects start with PGP, or implement weird “cipher cascades” to hedge against AES getting broken.
In reality, every component matters for the security of the whole system, but the bulk encryption is boring. It’s the well-trodden path of any cryptosystem. The significantly harder part is key management.
Let’s not mince words: How you implement key management is inherently a political decision.
If that sounds counter-intuitive, meditate on this bit of wisdom from Eleanor Saitta for a while:

“Repeat after me: all technical problems of sufficient scope or impact are actually political problems first.”
Many projects, when confronted with the complexity of key management, are perfectly happy with “just write private keys to disk” or “put blind trust in AWS KMS.”
Or, more directly: “YOLO.”
With my Fediverse E2EE project, I wanted to minimize the amount of trust you have to place in others. (Especially, minimize the trust needed in Soatok!)
Client-side secret key management (1) is the most visible area of risk to end users: backing up and managing their own credentials, recovering from failure modes, passing the Mud Puddle test, and so on.
Once each participant’s secret keys are squared away, they can provide public keys to each other.
Public-key infrastructure (2) is how you decide trust relationships between parties. We’re operating in a federated environment, and want to minimize the amount of unchecked “authority” anyone has, so that complicates matters. But, if it wasn’t challenging, it would already be solved.
Once you’ve figured out a trust mechanism to tie a public key to an identity, you can try to agree on a shared symmetric key securely, even over an untrusted channel.
Key agreement for group messaging (3) is how you decide which shared key to use, and when, and who has access to this key and for how long.
And from there, you can actually encrypt shit (4).
It doesn’t really matter how much you boil the ocean on mitigating hypothetical weaknesses in AES if an adversary can muck with your key management.
Thus, it should hopefully be reasonable to divide the work up in this fashion.
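To make that division concrete, here is a minimal sketch in Rust of how the four components might be separated in code. Every trait, type, and method name below is hypothetical, invented purely for illustration; none of it comes from my actual designs or any published specification.

```rust
// Hypothetical sketch of the four-component separation.
// All names here are invented for illustration; this is not a real API.

pub struct IdentityKeypair {
    pub public: Vec<u8>,
    pub secret: Vec<u8>,
}

pub struct SessionKey(pub [u8; 32]);

#[derive(Debug)]
pub struct CryptoError;

/// (1) Client-side secret key management: generating, backing up,
/// and recovering a participant's long-term secret keys.
pub trait SecretKeyManager {
    fn generate_identity_key(&mut self) -> IdentityKeypair;
    fn export_backup(&self, passphrase: &str) -> Result<Vec<u8>, CryptoError>;
    fn restore_backup(&mut self, blob: &[u8], passphrase: &str) -> Result<(), CryptoError>;
}

/// (2) Public-key infrastructure: binding public keys to federated
/// identities, so peers can decide whom to trust.
pub trait PublicKeyInfrastructure {
    fn publish(&mut self, actor: &str, public_key: &[u8]) -> Result<(), CryptoError>;
    fn lookup(&self, actor: &str) -> Result<Vec<u8>, CryptoError>;
}

/// (3) Key agreement: deriving a shared symmetric key for a
/// conversation (possibly a group) over an untrusted channel.
pub trait KeyAgreement {
    fn agree(&self, ours: &IdentityKeypair, theirs: &[u8]) -> Result<SessionKey, CryptoError>;
}

/// (4) Bulk encryption: the "boring" part, once keys exist.
pub trait BulkEncryption {
    fn seal(&self, key: &SessionKey, plaintext: &[u8]) -> Result<Vec<u8>, CryptoError>;
    fn open(&self, key: &SessionKey, ciphertext: &[u8]) -> Result<Vec<u8>, CryptoError>;
}
```

The separation also makes the earlier point visible: an adversary who subverts (1), (2), or (3) wins no matter how strong (4) is, which is exactly why over-indexing on the cipher is a mistake.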
But there is a fifth component; one that I am not qualified to comment on:
User experience.
The final deliverable for my participation in this project will be software libraries (and any necessary patches to server software) to facilitate secure end-to-end encryption between Fediverse users.
As for what that experience looks like? How it’s presented visually? What accessibility features are used, and how? How elements are organized and in what order they are displayed? Any quality-of-life design decisions that delight users and avoid dark patterns?
Yeah, sorry, I’m totally out of my depth here. That’s not my domain.
I will do my damnedest not to make security decisions that make building usable software onerous.
(After all, security at the cost of usability comes at the cost of security.)
But I can’t promise that the experience will be totally seamless for everyone, all the time.
One of the things that’s been bothering me, as I work out the finer details of this end-to-end encryption project, is that it seems to lack ambition.
Sure, I can talk your ear off for hours about the ins and outs of implementing end-to-end encryption securely, but we already have end-to-end encryption apps. So many private messengers.
How does “you can now have encrypted DMs in Mastodon” help people who can already use Signal or WhatsApp? Why should the people who aren’t computer nerds care about it at all?
What’s actually new or exciting about this work?
And, honestly, the best answer I can come up with is that it’s the first step.
Before the Big Data and cloud computing crazes took the technology industry by storm (or any of the messes that followed), most software was designed to work offline. That is, without Internet access.
With the growing ubiquity of Internet access (and mobile networks), the Overton window shifted towards always-on devices, edge computing, and no longer owning anything. Instead, consumers rent licenses to software that a third party can revoke on a whim.
The Free Software movement, for all of the very pronounced personality quirks associated with it today, foresaw this problem long before the modern Internet existed. Technologists, lawyers, and activists spent thousands of person-years of effort on trying to protect end users’ rights from greedy monopolies.
This isn’t a modern problem, by any stretch of the imagination.
Every year, our rights and digital freedoms are eroded by court decisions from corrupt judges, terrible legislation, and questionable leadership.
But the Electronic Frontier Foundation and its friends in other nations have been talking about this and fighting court battles since the 1990s.
Even if I somehow made some small innovation that benefited end users by allowing Fediverse users to message each other privately, that’s not really ambitious either.
As I was noodling over this the other day, a friend of mine linked me to an article titled Rust Needs a Web Framework for Lazy Developers.
It made me realize how much I miss the era when software was offline-first, even if it had online components. The past several years of Live Service Games have exhausted my tolerance more than anything else, but they’re not alone.
When I initially delineated my proposal into 4 components, my goal was to simplify the security analysis and make the threat models digestible.
But it occurred to me, recently, that by abstracting these components (especially the Federated Public Key Infrastructure design), a new era of cypherpunks and pirates could breathe new ambition into software projects that build atop the boring infrastructure I’m building.
Imagine peer-to-peer software that uses the Fediverse and/or onion routing technologies (similar to Tor) to establish peer-to-peer encrypted data tunnels between devices, with the Federated PKI as the source of truth for identity public keys so you always know you’re talking to the correct entity.
Now combine that with developer tools that make it easy for people to self-publish software (even if only through Tor Hidden Services), with an optional way to create a public portal (e.g., for a public-facing website).
You could even create a protocol for people with rack space and spare bandwidth to host said public portals, without privileging any particular provider.
This would allow technologists to build the tools for normal people to create an anti-corporate, decentralized network.
And you could do it without ever mentioning the word “blockchain” (though you may need to tolerate it if you want to prevent anti-porn groups like Exodus Cry from having any say in what we compute).
Finally, imagine that we build all of this in memory-safe languages.
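Since we’re already imagining, here is a rough, hypothetical sketch (in Rust, in that memory-safe spirit) of what this flow might look like to an application developer. None of these functions, types, or crates exist: pki_lookup and open_tunnel are stand-ins for “query the Federated PKI” and “dial a peer over onion routing,” and the actor handle is made up.

```rust
// Hypothetical sketch only: every function and type here is a stand-in.
// Nothing below corresponds to a real crate or published API.

struct Tunnel; // placeholder for an onion-routed, end-to-end encrypted channel

#[derive(Debug)]
struct TunnelError;

/// Stand-in for a Federated PKI lookup: resolve a Fediverse actor to their
/// identity public key (the source of truth for "who am I talking to?").
fn pki_lookup(actor: &str) -> Option<[u8; 32]> {
    let _ = actor;
    None // a real implementation would query the federated key directory
}

/// Stand-in for dialing a peer over onion routing, authenticated against
/// the identity key fetched from the PKI.
fn open_tunnel(peer_identity_key: [u8; 32]) -> Result<Tunnel, TunnelError> {
    let _ = peer_identity_key;
    Err(TunnelError) // a real implementation would perform the handshake here
}

fn main() {
    // 1. Resolve the peer's identity key from the Federated PKI.
    // 2. Open an encrypted peer-to-peer tunnel bound to that key.
    match pki_lookup("@friend@example.social") {
        Some(key) => match open_tunnel(key) {
            Ok(_tunnel) => println!("tunnel established"),
            Err(err) => eprintln!("tunnel failed: {err:?}"),
        },
        None => eprintln!("no identity key published for that actor"),
    }
}
```

The load-bearing detail is the order of operations: identity comes from the PKI first, and the tunnel is only trusted because it authenticates against that key.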
So, am I going to build all of this myself? In short: No, I’m not.
Ambitious ideas and cryptography should rarely intersect; I’m focused on the cryptography.
Instead, I wanted to lay this rough sketch out there as a possibility that someone else (presumably more ambitious, charismatic, and/or resourceful) could easily pick up if they so choose.
More importantly, all of the hard parts of this would be solved problems by the time I finish with the end-to-end encryption project. (Most of them already exist, in fact!)
That’s what I meant above by “it’s the first step”.
Along the way to achieving my own goals, I’m building at least one useful building block. What the rest of the technology industry decides to do with it is up to the rest of us.
I can’t do it alone, and I will not try to.
There is a lot of potential for tech freedom that could benefit users beyond what they can get from the Fediverse today. I wanted to examine how some of these ideas could be useful for–
Oh.
…
Okay, so y’know how a lot of video games (Undertale/Deltarune, Doki Doki Literature Club) try to make a highly immersive experience with many diegetic elements?
Let’s build an operating system, based on some flavor of Linux, that is in and of itself a game. People can write their own DLC by developing packages for that OS. The end deliverable will be a virtual machine; to get it to work on Steam, we would install Docker or Kubernetes, but users will also be able to install it via VirtualBox.
Inevitably, someone will decide this OS is their new daily driver. Imagine the impact this would have on corporate IT the whole world over.
Oh, I can do worse. I can do so much worse.
I don’t know if I can top the various attempts to build a Message Authentication Code out of the insecure RC4 stream cipher, of course.
If you want ambition, you sacrifice wisdom.
If you want freedom, you sacrifice convenience.
If you want security, you sacrifice usability.
…
Or do you?
I have a lot of bad ideas, all the time. That’s the only reason I occasionally have moderately good ones.
My process of eliminating bad ideas is ruthless, and may cull some interesting or fun ones along the way. This is an unfortunate side-effect of being an effective security engineer.
I don’t actually think the ideas I’ve written above are that bad. I wrote them this way for comedic effect.
Rather, I’m just not sure they’re actually good, or worthwhile to invest time into.
Whether someone could build atop the work I’m doing to reclaim our Internet from the grip of massive technology corporations is, at best, difficult to judge.
I do not have the time, energy, or motivation to do the work already on my own plate and then explore these ideas fully.
Maybe someone reading this does?
If not, that’s cool. Ideas are allowed to just exist as idle curiosities. Not everything has to matter all the time.
The “ship a whole god damn OS as an indie game” idea could be fun though.