IETF 101: Balance of Concerns

There were many great conversations and much excellent work done last week at IETF 101 in London. As is so often the case lately, many of our decisions had to balance privacy against manageability.

TLS is the protocol that underpins security on the Internet -- it provides three fundamental security properties:

  • Authentication (how do you know you've actually reached Facebook?)
  • Integrity (how do you know no one changed what Facebook sent?)
  • Privacy (how do you know no one else saw what Facebook sent?)

The latest version, 1.3, has just been approved for publication. There's editorial work to be done yet, but the technical pieces are finished and it's really, truly coming. Even now, most major browsers offer draft versions of TLS 1.3, at least in their beta channels, and many sites accept those drafts when offered.

You can go read about the advantages of TLS 1.3 in many other places. Suffice it to say that it both removes older, less-reliable cryptography and is the first version of TLS to be formally verified -- it's not just that no one has figured out how to break it yet (*cough* SSL3), but that its security properties have been mathematically proven.

Of course, that's causing angst in certain quarters. One of the things TLS 1.3 gets rid of is called "static Diffie-Hellman". Diffie-Hellman (DH for short) is an exchange in which two parties agree on an encryption key they'll use to speak privately. "Static" means the server reuses the same secret over time rather than picking a fresh one. That's bad, because if anyone were to, say, record all your traffic, and the server's secret were later to fall into the wrong hands, all that past traffic could be decrypted.
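To make the exchange concrete, here's a toy DH agreement in Python. Everything here is illustrative -- the prime is laughably small and the variable names are mine, not any implementation's -- but the shape of the math is the real thing:

```python
# A toy Diffie-Hellman agreement -- illustrative only. The numbers are
# far too small for real use; TLS uses 2048-bit groups or elliptic curves.
import secrets

p = 4294967291   # a small prime (2**32 - 5), toy-sized on purpose
g = 2            # public generator

server_secret = secrets.randbelow(p - 2) + 1   # "static": reused across sessions
client_secret = secrets.randbelow(p - 2) + 1   # fresh for this connection

server_share = pow(g, server_secret, p)        # these two values are public;
client_share = pow(g, client_secret, p)        # they cross the wire in the clear

# Each side combines its own secret with the other's public share and
# arrives at the same key, which never crosses the wire itself.
assert pow(client_share, server_secret, p) == pow(server_share, client_secret, p)
```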

With "ephemeral" DH, the server picks a fresh value every time, which means that even if you have the server's long-term keys, you can't open up an encrypted session to examine it. Which is a good thing on the Internet -- you don't want the three-letter agency of your choice, even if they obtain Facebook's private key, to be able to decrypt your web traffic from last month.

If, however, you're a systems analyst trying to debug something in your data center, this inability to peer inside your own encrypted traffic is a problem. That's not to say the traffic is impenetrable, of course -- they're your servers, after all. You can no longer keep one server key around to decrypt anything you want, but the server obviously has the session-specific keys. So if you want to know what the encryption key was, you just get the server to tell you. Since there's a fresh one for every session, that's potentially a lot to keep track of, but easy enough to log alongside other per-session information.
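That logging approach is a real, current practice: most TLS stacks can emit per-session secrets in the NSS key-log format, which Wireshark understands. Here's a minimal sketch using Python's ssl module -- the keylog_filename attribute needs Python 3.8+, and the host and log path are placeholders:

```python
import socket
import ssl

ctx = ssl.create_default_context()
# Log each session's secrets in the NSS key-log format; Wireshark can use
# this file to decrypt a packet capture of the same connections.
ctx.keylog_filename = "/tmp/tls-keys.log"

with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        tls.recv(4096)   # a capture of this exchange is now decryptable
```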

The current iteration of this angst in the TLS working group manifested in a proposal called "data center visibility." Essentially, the "client" (in this case, probably something at the edge of the data center) would grant permission to share the keys, and the server would encrypt the session keys under another key and send them back to the client. That blob is just junk to the client -- not only does it lack the key to decrypt what the server sent, it already knows the session keys inside. But debug tools holding the "master key" could unwrap it and get the keys to decrypt everything else.
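Here's a minimal sketch of that wrapping step, using the AES key-wrap primitive from the Python cryptography package. To be clear, this shows the general idea only -- it is not the draft's actual wire format or key schedule, and every name below is illustrative:

```python
# Sketch of the wrapping idea: seal each session secret under a long-lived
# "master key" so only the debugging infrastructure can recover it.
# Illustrative only -- not the draft's actual wire format or key schedule.
import os

from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

master_key = os.urandom(32)      # held by the debug tooling, not by the client
session_secret = os.urandom(32)  # this connection's traffic secret

# Server side: wrap the secret and ship the blob to the client, which can
# neither read it nor needs to -- it already has the session keys.
wrapped = aes_key_wrap(master_key, session_secret)

# Debug side: anyone holding master_key recovers any session's secret.
assert aes_key_unwrap(master_key, wrapped) == session_secret
```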

In some ways, it's an elegant solution. But it has some pretty notable downsides. For one thing, anyone with that master key gets the keys to the kingdom: every session from that server for the whole time the key was in use. And it's not just read access -- because TLS traffic keys are symmetric, someone holding them can step in, take over the session, and pretend to be the server with the client none the wiser (or pretend to be the client, with the server unaware).
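That read/forge equivalence is inherent to symmetric AEAD ciphers like AES-GCM: encrypting and authenticating a record is the same operation as forging one, so holding the key grants both. A short illustration with toy values (again using the cryptography package):

```python
# With a symmetric traffic key, read access and write access are the same
# thing: the key that decrypts records is the key that mints valid ones.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

traffic_key = os.urandom(16)   # imagine this was recovered via the master key
nonce = os.urandom(12)

forged = AESGCM(traffic_key).encrypt(nonce, b"a record the server never sent", None)

# The peer decrypts and authenticates the forgery without complaint.
assert AESGCM(traffic_key).decrypt(nonce, forged, None) == b"a record the server never sent"
```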

And, of course, there's a general antipathy to intentionally compromising the security of a security protocol. While a vocal minority wanted to adopt the proposal and work on it as a group, the majority hummed against it.

I think, perhaps, a protocol in which the inspector was a known quantity might have a better chance of success -- if, for example, the third party could truly only read the data, not forge it. Ideally, there would be a way for the parties to elide data they don't want even the inspector to see ("my password is XXXXXXXX"). Secure multi-party keying is a hard technical problem, but the IETF is good at hard technical problems. Of course, that approach requires the inspector to be online and actively participating in the exchange, which is a higher bar than leaving information lying around for an eavesdropper to pick up.

In the QUIC working group, a more moderate resolution was reached. To keep the protocol free to evolve -- middleboxes can't ossify what they can't parse -- QUIC encrypts as much of its data as possible, and we're actively working on ways to encrypt more. Things network operators have long relied on being able to just overhear are therefore encrypted. Because this leaked information ("implicit signals") is no longer visible to the network, the same angst is surfacing: "I rely on this information, and you're taking it away!"

There was a proposal to add explicit signals: something visible in the protocol header that can be used to intentionally leak the very specific pieces of information operations folks need to measure (a sketch of reading such bits follows the list below). In the tension between "need this data" and "don't want to commit to providing this data," the balance was this:

  • We will reserve at least three bits for "experimentation."
  • We will write down how consenting parties might intentionally leak this data in those three bits, and let people experiment.
  • Before we release the final v1, we'll decide whether to fold that into the primary specification, remove the bits, or leave it as a separate "experimental" portion.
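To show what reading an explicit signal might amount to, here's a sketch of how a passive measurement tool could extract such bits from a packet's first byte. The bit positions are entirely hypothetical -- where (and whether) these experimental bits end up in the header is exactly what remains to be decided:

```python
# Hypothetical sketch: extracting three "experiment" bits from the first
# byte of a QUIC packet header. The mask and shift are assumptions for
# illustration -- the actual positions (and the bits' survival) are
# precisely what's still to be decided.
EXPERIMENT_MASK = 0x38    # assumed: bits 3-5 of the first byte
EXPERIMENT_SHIFT = 3

def experiment_bits(first_byte: int) -> int:
    """Return the three experimental bits as a value in 0..7."""
    return (first_byte & EXPERIMENT_MASK) >> EXPERIMENT_SHIFT

# A passive observer applies this to packets it forwards; what the bits
# mean is up to consenting endpoints that choose to set them.
assert experiment_bits(0b00101000) == 0b101
```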

No one is entirely happy, and it's likely we'll be having at least one more heated argument (or hum, or coin toss) about this before we're done. But we reached consensus on a way to move forward that everyone can live with, and I consider that a productive meeting.