Authored by: Anonymous on Sunday, April 07 2013 @ 03:04 PM EDT
Please look again at how public-key encryption systems actually work, because
you seem to be missing something important. In particular, look at the systems
intended to encrypt communications, because that's the variant we're dealing
with here, and it has some important implementation differences that affect what
law enforcement needs to do to get access. I'll use PGP and email as my example
application here, but any system that uses public-key crypto to enforce privacy
of communication will do something similar.
When you send email to a specific person, if you encrypt the email message with
that person's public PGP key, then it's true (in general) that only that person
can read it. But what if you send email to two people? Do you send two encrypted
copies, each one encrypted with a different public key? Messages can be
arbitrarily long, so that would be wasteful; no application designed by
somebody with half a brain does that. Instead, PGP generates a one-time
symmetric session key which it uses to encrypt the message, encrypts the session
key with the public keys of all the recipients, concatenates the ciphertexts,
and appends the result to the encrypted message. There is only one encrypted
copy of the message; only the encrypted session key is duplicated, and it's a
manageable length so doesn't cause the payload size to increase at a horrible
rate. Only authorized recipients will be able to decrypt the session key and
then the message itself.
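The structure described above can be sketched in a few lines. This is a toy model of the *structure* only: the XOR "ciphers" below stand in for real AES and RSA, are not remotely secure, and all the function names are made up for illustration.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher (NOT secure): XOR with a repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_to_many(message: bytes, recipient_keys: dict) -> dict:
    session_key = secrets.token_bytes(32)      # one-time symmetric session key
    body = xor(message, session_key)           # ONE encrypted copy of the message
    # The short session key is wrapped separately for each recipient,
    # so payload size grows only by a fixed amount per recipient.
    wrapped = {name: xor(session_key, k) for name, k in recipient_keys.items()}
    return {"body": body, "keys": wrapped}

def decrypt(bundle: dict, name: str, my_key: bytes) -> bytes:
    # Unwrap my copy of the session key, then decrypt the shared body.
    session_key = xor(bundle["keys"][name], my_key)
    return xor(bundle["body"], session_key)
```

Note that adding an administrative or escrow key to such a scheme is just one more wrapped copy of the session key, which is part of why the design is attractive to vendors.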
Many other encryption systems use similar tricks, in part for efficiency, but
primarily to make them vastly less fragile. Whole-disk encryption systems in
particular do this, which is pretty much the only reason (e.g.) MS ships them.
(Whether MS keeps copies of all the administrative keys to encrypted Windows
volumes is a question I don't want to get into; they sure could, much as they
keep photos from all the Kinect cameras installed on Xboxes.)
I don't know whether Apple has done this with their proprietized user data, but
there's certainly nothing preventing them, and the "mud puddle"
gedanken experiment suggests strongly that this is in fact how Apple's system
works.
This simple crypto trick has the side effect (in this case) that it puts the
data at issue into a different legal protection realm, which would explain why
Apple would not feel required to cough it up in response to a mere wiretap
request. (Not that Apple can't/won't yield it eventually; they just need the
legal "pretty please" that bridges the gap between plaintext
CALEA-category communications and encrypted communications. It's like how Google
"stood up" to the infamous DoJ data requests in 2007, merely to
cooperate once they had publicly made a pretty minor PR point.) Any public
whinging about how Apple "can't" read these communications is
misleading PR to encourage "fixing" the legislation that created this
distinction, by causing thoughtless people to think "Gee, there oughta be a
law that prevents CRIMINALS from using this horrible loophole in our JUSTICE
system."