"I wonder if pgp is fundamentally flawed, or we have a deep conceptual usability issue here."
I think it's the key model that's fundamentally flawed rather than pgp itself, which I believe the author of the article is also asserting.
In cryptography, it is often explained that despite the fact a one-time pad is guaranteed-secure (given various conditions I'm eliding), it is not practical in the vast majority of cases because of a chicken-and-egg problem: How do you distribute the one time pad in the first place? If you do it insecurely, it's a waste of time. If you can do it "securely", why not just use that secure channel to send the message in the first place? OTPs can still be useful because you can establish a secure channel once for a limited duration of time and then use it to temporally shift your security into the future, but that's a relatively rare use case. (That is, the vast bulk of encryption is being used between people who may never have had a "secure" channel between them; think HTTPS here.)
Similarly, PGP's got this significant problem where given that you have the correct keys and that you know you can trust them, it secures your communication quite effectively. But the question is, how do we get to the point where you know that you have the correct keys and you can trust them? Well... that's a hard problem itself. Especially considered over time.
So alternate models must be pursued.
Like the author, I think the Keybase approach is a good idea. In fact I'd even suggest that the idea should be generalized away from "social media accounts" to just "potentially unreliable mechanism" in general. If I have 6 mechanisms for asserting identity on my key, each of which are 95% reliable over the course of a year, then from an absolutist security point of view, that key is still insecure... but assuming even modest independence between the unreliable mechanisms (assuming naive total independence is definitely incorrect, once one is hacked the others are certainly more likely, but neither is it the case that one hack guarantees all others can be hacked), it's still much more secure than nothing at all.
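To put rough numbers on that intuition: under the (admittedly naive) assumption of full independence, the chance that every identity mechanism gets compromised in the same year shrinks geometrically with the number of mechanisms. A quick back-of-envelope, using the 95%-reliable figure from above:

```python
# Back-of-envelope: probability that ALL identity mechanisms are
# compromised within a year, assuming (unrealistically, as noted
# above) full independence between them.
p_fail = 0.05  # per-mechanism yearly failure rate (95% reliable)

for n in [1, 2, 3, 6]:
    p_all = p_fail ** n
    print(f"{n} mechanism(s): P(all compromised) = {p_all:.2e}")
```

Real mechanisms are correlated, so the true numbers sit somewhere between the single-mechanism case and these optimistic figures, but the direction of the effect holds either way.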
> How do you distribute the one time pad in the first place?
Something I've wanted to make for a while now, that should be possible to make with almost any cheap embedded microcontroller, is a hardware dongle that stores OTP pads. This would be a generic character device that could be integrated into existing chat programs.
* Each device has a hardware RNG, e.g. [1] or similar
* A port that allows two devices to connect. When connected, they each start generating random numbers, sending a copy to the other device. They both store the XOR of each device's random number as the pad.
* A USB interface accepts plaintext, the device generates the cyphertext, while enforcing deletion of the used portion of the pad. Decryption is handled with a similar interface, so the pads never leave the device.
* The device would provide to the host how much pad is remaining, to be used in the UI. Warnings should be provided when the pad is running low, etc.
The goal is to utilize existing knowledge and experience. Schneier (and others) recommend[2] that passwords be written down because people's understanding of physical security is better than their chances of memorizing enough entropy to actually make a usable password.
This isn't trying to solve the general WoT problem. Instead, it tries to solve a piece of it in a way that most people can understand. Connect devices when you meet in person, and you gain a certain amount of secure chat. Refill by meeting in person again.
It would be easy to extend this idea to provide other features (e.g. generating pubkeys), but since the goal is a simple device that is easy to understand, avoiding feature creep is important, at least initially. Features like WoT will be easier to implement if there is existing infrastructure that can be exploited.
You don't need to store a true one time pad. Keystreams are enough. So, while your device may act like it delivers a one time pad, it could instead draw a pseudo-random sequence from a chacha20 stream. That way, any synchronisation you do lasts for life.
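A rough sketch of that keystream idea, using stdlib SHAKE-256 as a self-contained stand-in for ChaCha20 (the function name and offset bookkeeping are my own invention): both devices keep only a small shared seed and expand it on demand, staying in sync by tracking how much of the stream they have consumed.

```python
import hashlib

def keystream(seed: bytes, offset: int, n: int) -> bytes:
    # Expand a small shared seed into arbitrarily many key bytes.
    # A real device would run ChaCha20 here; SHAKE-256 is just a
    # stdlib stand-in with the same "seed -> unbounded stream" shape.
    stream = hashlib.shake_256(seed).digest(offset + n)
    return stream[offset:offset + n]

seed = b"\x42" * 32          # established once, when the devices pair
msg = b"attack at dawn"
ks = keystream(seed, 0, len(msg))
ct = bytes(m ^ k for m, k in zip(msg, ks))
pt = bytes(c ^ k for c, k in zip(ct, keystream(seed, 0, len(ct))))
print(pt)  # b'attack at dawn'
```

The trade-off is the one this thread goes on to debate: security now rests on the cipher rather than on information-theoretic pad secrecy.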
But if we go to all this trouble, we might as well use public key cryptography; it's even easier to use. Internally, the dongle will be quite complicated, with stuff like Curve-stuff, Xchacha-something and poly-whatnot. What the user needs to know is simple:
Once initialised, your dongle can publish a public "fingerprint" that is unique to it. To decrypt messages encrypted with this "fingerprint", you need your dongle. To sign messages according to this "fingerprint", you need your dongle. If you lose it, your "fingerprint" becomes unusable, with no recourse. If it gets stolen, the thief will be able to impersonate you, unless you did the sensible thing and locked your dongle with a secure passphrase (think Diceware).
Now we engineers can figure out how to make that dongle easy to use and secure against any compromised computer it may be plugged into. (We don't want the dongle to become untrustworthy just because it got out of your sight during lunch.)
If you only use the OTP data for a protocol that exchanges symmetric keys, then you could effectively extend the capacity of such a device. I'd buy one - it's an enhanced business card.
I suggest implementing version 0.1 as either a frontend to GPG with the key on an existing smartcard (Yubikey or similar), or at least something that uses the existing OpenPGP standard for encryption. That way you don't have to worry about doing the cryptography from scratch and can focus on the UI side (which is hard enough on its own) and the fingerprint exchange (which is what your "extra port" needs to be). You don't have to make sure your device gets critical mass, because existing PGP users can interoperate with it (there are already PGP plugins for instant messaging programs that you could adapt rather than having to write your own plugin ecosystem). And you may well end up producing something that's more useful to more people.
That's exactly the kind of complexity I'm trying to avoid. I would get zero benefit from existing public key infrastructure, because this is a device that enables 1-to-1 communication only. It may be possible in the future to exploit such a device to authenticate GPG keys, but not in the initial version.
> cryptography from scratch
I'm not doing much crypto other than generating random bits (hardware RNG with whitening). One of the reasons for doing this in an embedded device is to keep everything important isolated on the device (data diodes are useful) where the "one use only" rule can be enforced. The computer-accessible attack surface would be extremely small; it's mainly just a USB (or whatever) character device you write plaintext to and read back the OTP'd cyphertext.
> fingerprint exchange
There is no exchange of fingerprints. The entire goal is to have a type of secure communication that is easy to understand, so it can't have complexity like handshakes to exchange things or key management. Even if it's hidden behind a UI, those features add complexity that affects how you use it.
> gets critical mass
Again, the goal is to not need critical mass. You only need that if you're trying to solve the general WoT problem. I'm only trying to provide communication between a pair of devices that will have to meet in person for synchronization.
> useful to more people
I'm assuming that people who know about GPG can already handle setting up their own secure communication. What I want to try is something that provides some features that everyone can understand. Trying to solve the entire problem at once has always made tools that were too complicated to understand if you have never heard of crypto. You might consider the device I've described as a kind of "training wheels" for the idea of using crypto.
> producing something
I'm not producing anything right now; this isn't a product or business plan. If I ever find the time and money to work on this project, it will just be a handful of hand-soldered RNGs on Arduino boards or similar.
> That's exactly the kind of complexity I'm trying to avoid.
It's only complexity for the implementation, not for the user.
> I would get zero benefit from existing public key infrastructure, because this is a device that enables 1-to-1 communication only.
You'd get to reduce the storage requirements massively and make the devices much more reusable, because you wouldn't have to do exchanges and store the results.
> There is no exchange or fingerprints. The entire goal is to have a type of secure communication that is easy to understand, so it can't have complexity like handshakes to exchange stuff or key management. Even if its hidden behind a UI, those features add complexity that affects how you use it.
The UX is "plug these two devices into each other via some special custom port" either way, no?
> The computer-accessible attack surface would be extremely small; it's mainly just a USB (or whatever) character device you write plaintext to and read back the OTP'd cyphertext.
If the UI is a unix character device then your target audience is a subset of the people who already understand GPG.
> Trying to solve the entire problem at once has always made tools that were too complicated to understand if you have never heard of crypto. You might consider the device I've described as a kind of "training wheels" for the idea of using crypto.
Right, but part of the point of training wheels is you attach them to a regular bike, you don't use a completely different device. A specialized frontend that only uses a very small simple subset of GPG would be very helpful.
Minimizing complexity is also important when writing a security feature. The entire firmware shouldn't be very large (bugs/kLOC is constant(-ish)), and dependencies increase attack surface.
> reduce the storage requirements
I don't see that as being a huge problem, because flash memory is cheap and I should be able to generate pads very quickly. Remember that the only problem I'm trying to solve is secure chat (text). 1MB of pad is a lot of typing.
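Back-of-envelope, assuming a brisk sustained typing speed of 300 characters per minute (my number, purely illustrative):

```python
pad_bytes = 1 * 1024 * 1024   # 1 MB of pad
chars_per_min = 300           # assumed sustained typing speed
hours = pad_bytes / chars_per_min / 60
print(f"{hours:.0f} hours of continuous typing to exhaust the pad")
```

So even a tiny amount of flash covers a great deal of chat.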
I do like the idea mentioned in another comment about using the shared random secret as a stream of symmetric keys, which would nicely reduce the rate of pad usage without adding any more complex semantics.
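A minimal sketch of that pad-stretching idea, with a SHAKE-256 XOR stream standing in for a real symmetric cipher (all names hypothetical): each message consumes a fixed 32 bytes of pad as a one-time key, regardless of message length.

```python
import hashlib
import os

KEY_LEN = 32  # pad bytes consumed per message, regardless of length

def next_message_key(pad: bytearray) -> bytes:
    # Both devices consume the same 32 pad bytes in lockstep,
    # then destroy them (one-time use, as with the raw pad).
    key = bytes(pad[:KEY_LEN])
    del pad[:KEY_LEN]
    return key

def sym_crypt(key: bytes, data: bytes) -> bytes:
    # Stand-in symmetric cipher: XOR against a SHAKE-256 keystream.
    # A real device would use a vetted AEAD (e.g. ChaCha20-Poly1305);
    # XOR is used here only because it is its own inverse.
    ks = hashlib.shake_256(key).digest(len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

pad = bytearray(os.urandom(1024))  # enough pad for 32 messages
key = next_message_key(pad)
ct = sym_crypt(key, b"a message far longer than thirty-two bytes")
pt = sym_crypt(key, ct)
print(len(pad))  # 992 -- only 32 pad bytes spent on this message
```

The semantics stay the same as the pure OTP ("meet, sync, spend the pad"), only the exchange rate changes.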
> If the UI is a unix character device
I'm describing it to you as a character device, because I assume you know approximately what that implies (serialized data stream, etc). The UI for the user, for now, would probably be a plugin for libpurple or something, if I ever get time to write it.
> a very small simple subset of GPG
I've been trying variations of that idea for over 20 years. Many people need a far more rudimentary education about the idea of using crypto. I want to teach the idea of applying security at each end of the conversation. I want to teach the habit of putting an envelope on communication, even when it's just to a friend. I want to teach taking some of the responsibility for your own security instead of relying on 3rd parties ("the cloud").
I've tried to teach very small subsets of GPG already. That didn't work, so I'm simplifying the scope into something that will hopefully be easier to understand.
The "minimal GPG wrapped up in a very simple UI" device that you're talking about would make a great device to graduate into.
In the general case yes, but GPG probably is, or at least should be, the most carefully audited codebase in the world.
> I do like the idea mentioned in another comment about using the shared random secret as a stream of symmetric keys, which would nicely reduce the rate of pad usage without adding any more complex semantics.
Right, at which point you already need a high-quality symmetric encryption implementation (and definitely need to worry about timing attacks and other side channels - quite possibly something you need already at the true-OTP stage). Such as the one in GPG.
> I've been trying variations of that idea for over 20 years. Many people need a far more rudimentary education about the idea of using crypto. I want to teach the idea of applying security at each end of the conversation. I want to teach the habit of putting an envelope on communication, even when it's just to a friend. I want to teach taking some of the responsibility for your own security instead of relying on 3rd parties ("the cloud").
All good things. I just struggle to believe that the amount of internal-only simplification you get out of making the device OTP-only is worth the cost of requiring storage, becoming text-only, having to have one device for each person you communicate with, having no way to send messages to people you haven't met, and avoiding compatibility with what is still the most widely deployed cryptosystem with any hope of being secure against government-level threats (and the only cryptosystem that we have the NSA on record as being unable to break). I can certainly believe that most of these things aren't worth exposing in the UI, but deliberately using a different standard for the implementation to ensure that you will never have the ability to add even one of those things should they actually prove desirable seems like a poor cost/benefit.
> How do you distribute the one time pad in the first place?
I'm fairly naive to this area, but wouldn't video chat initiated with public keys suffice? Then confirm identities and exchange secrets. To me this seems substantially equivalent to in-person key exchange for non-Three Letter Agency threat models. 20 years ago this wouldn't really have been feasible, but today it (mostly) is -- from a quick glance at the FAQ, Signal may even support something like this already.
(If your threat model includes "abduction and coercion", then aren't you kind of hosed even with previous in-person OTPs?)
A video chat is not enough to safeguard secrets to be used in the future.
For one, if the video chat is secure enough for an otp exchange, the otp isn't needed.
Secondly, if your video chat gets recorded, which may very well happen, you need to use ephemeral keys.
Thirdly, since the video chat is likely recorded, at least the metadata, the effective security of your otp degrades over time, as new breaks or speedups are found for the video chat cipher.
Interesting, thanks. Am I understanding correctly that point 2 & most of point 3 are risks because of the possibility of either future device compromise, or e.g. quantum decryption technology? These are very general risks, so why do they apply here any more than elsewhere?
I realized I probably should not have replied to the part about OTPs specifically. What I'm curious about is remote trust verification via secure video.
Partially, it's not just future device compromise but also Internet recording. It is best to assume that any communication over the Internet is recorded. From that standpoint, once the keys (not the device) are cracked the internal secret is also disclosed. This was why I recommended ephemeral keys.
By "cracking the keys" a cryptographic break is not always required. It can also happen via disclosure, a weak implementation, problems with the protocol, etc. One can scan a list of recent vulnerabilities for this: session reuse, master secret reuse, session resumption, heartbleed, etc.
I would call these out in particular here, because secrets are being exchanged. If those inner secrets are used to protect (directly or indirectly) multiple messages, the key disclosure becomes more pronounced.
You are quite correct regarding quantum computing. A sufficiently large quantum computer is guaranteed to break elliptic curve, DH, or RSA, for example; the determining factor is the number of qubits.
What do you mean by remote trust verification via secure video? That sounds quite interesting. Do you mean facial recognition inside a channel assumed to be secure, as a secondary validation of an otherwise "pre-trusted" party?
If a video chat with public keys is secure enough to exchange a one time pad, why would you need to bother with the one time pad at all?
By transmitting your OTP it is no stronger than the method used to protect it in transport, so if that transport method is secure enough to guarantee the security of the OTP, why not simply use that method for everything and forget about the OTP?
I did say "exchange secrets" for a reason -- that secret may be a key for later use (e.g. for data dumps), or actual information.
What I'm trying to understand is whether the (relatively new) feasibility of interactive video channels allows for building roughly the same level of trust as would be provided by in-person key exchange. I'm basing this on the understanding, possibly incorrect, that encryption with a public key allows for creating a secure communication channel, but not necessarily a trusted one. The hypothesis is that the capability to conduct interactive video provides a way to verify identity and establish trust at roughly the same level as would be provided by in-person exchange (again, assuming 1-1 trust, and excluding TLA threat models).
I might trust a video chat today to verify an identity, but only because it would be a relatively new method. Definitely not in a few years if it ever took off. It's already possible to forge a talking head and fall back on "sorry, bad connection out here in the field" to hide glitches.
You are forgetting a key part of the 'trust' thing; you have no way of knowing if someone is man in the middle attacking your video chat.
Example: Alice wants to video chat with Bob to exchange the secret key and verify identity. Mallory sets up a MITM attack, and gives her own public key to both Alice and Bob. Alice and Bob think they are securely talking with each other, but they are actually securely talking with Mal, who decrypts the video, watches it, then forwards it on to the other person.
This is why you can't have a secure communication channel without trust; you don't know if your secure communication is being intercepted, read, and then passed on.
> How do you distribute the one time pad in the first place? If you do it insecurely, it's a waste of time. If you can do it "securely", why not just use that secure channel to send the message in the first place?
Because you may not have any messages to send at the time of the secure exchange of OTPs. Do note that one time pads are (or at least were) commonly used in the military.
> But the question is, how do we get to the point where you know that you have the correct keys and you can trust them?
That is not a technological problem per se, but rather a social one. Imagine that when you exchange phone numbers (or Farcebook IDs, if you're into that) with your work colleagues, or friends, or fellow attendees at that developer meetup, you also exchanged public keys.
Mechanically, the interaction is at about the same level of complexity, and effectively, as has already been mentioned, the web of trust already exists (Farcebook, ChainedIn, and all the other bollocks).
If any of those decided to implement secure end-to-end comms using PGP and offered you the possibility of uploading your public key for dissemination to your "friends", PGP might become ubiquitous in a matter of weeks. At a smaller scale, German email provider GMX is doing exactly this, by the way.
> Like the author, I think the Keybase approach is a good idea. In fact I'd even suggest that the idea should be generalized away from "social media accounts" to just "potentially unreliable mechanism" in general.
It already has this to a small extent. You can sign other stuff like domain DNS entries or HTTP servers (by hosting a file).
Well you hand it over to the person you want to communicate with when you see them?
Obviously that doesn't work in many use cases, but in many other cases it does: many of the most important secrets are typically shared with people you already know and have met before, no?
If he cared then he'd probably be the one handing over crypto material and instructions (about whatever crypto), as you probably need some clout to make it happen.
If even he says "write me an email in plaintext" then I'm not too hopeful for crypto in general.
But what about when you discuss real secrets with a journalist or business partner in another country? Or email communication with family members and business partners?
Potentially even web traffic with your company's or your bank's website, why not.
The author did include the standard UX-of-PGP-sucks arguments, but he was also making the point that some of the core models around PGP suck.
eg he was saying you can't share a key across multiple devices. Or if you do, you just increase your attack surface and your weakest link becomes the hotel wifi you plug into.
eg if your key does get compromised, now you have to rotate all your contacts, which if you distributed your key on a business card, is pretty friction-prone and encourages you to discount that weird activity that could have been a blip you saw on the hotel wifi.
The big one is if your key ever does get compromised, now all your past history becomes accessible. So he's saying there's some things that PGP is fundamentally bad at, and you need a new model, not just a band-aid UX fix.
> Finally, these days I think I care much more about forward secrecy, deniability and ephemerality than I do about iron clad trust. Are you sure you can protect that long-term key forever? Because when an attacker decides to target you and succeeds, it won't have access from that point forwards, but to all your past communications, too. And that's ever more relevant.
> eg he was saying you can't share a key across multiple devices. Or if you do, you just increase your attack surface and your weakest link becomes the hotel wifi you plug into.
So what are the options here? You can have a GPG key protected by any mechanism you care to think of (passphrase, smartcard, ...). You can share it between devices or not as you see fit, subject to the same tradeoff that is always going to be involved in that decision. I can't see any way to do it better?
> eg if your key does get compromised, now you have to rotate all your contacts, which if you distributed your key on a business card, is pretty friction-prone and encourages you to discount that weird activity that could have been a blip you saw on the hotel wifi.
PGP actually has very good support for key rotation by using subkeys - you keep your master identity key offline/secure and that's what other people sign, but you use it only to sign subkeys with short expiry times. People don't use it, but that's a UX issue.
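For reference, the subkey workflow described above can be sketched with modern GnuPG (2.1+); the `$FPR` variable, user ID, and 90-day expiry are illustrative:

```shell
# Create a certify-only master key (kept offline); this is what
# other people sign.
gpg --quick-generate-key "Alice <alice@example.org>" ed25519 cert never

# Add short-lived subkeys for day-to-day use (90-day expiry).
# $FPR is the master key's fingerprint, printed by the command above.
gpg --quick-add-key "$FPR" ed25519 sign 90d
gpg --quick-add-key "$FPR" cv25519 encr 90d
```

When a subkey expires or leaks, you issue a fresh one signed by the same offline master, and contacts who trust the master pick it up automatically.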
> The big one is if your key ever does get compromised, now all your past history becomes accessible. So he's saying there's some things that PGP is fundamentally bad at, and you need a new model, not just a band-aid UX fix.
True, but I think long-term signing is often what you want. There are different models that make sense for different communication scenarios certainly.