Yesterday’s New York Times carried a story entitled “Apple and other tech companies tangle with U.S. over data access“. It’s a vague headline that manages to obscure the real thrust of the story, which is that according to reporters at the Times, Apple has not been forced to backdoor their popular encrypted iMessage system. This flies in the face of some rumors to the contrary.
While there’s not much new information in here, people on Twitter seem to have some renewed interest in how iMessage works; whether Apple could backdoor it if they wanted to; and whether the courts could force them to. The answers to those questions are respectively: “very well”, “absolutely”, and “do I look like a national security lawyer?”
As the article states, it all comes down to trust.
It should not be about blind trust. It should be about the ability to look at the code and validate that trust.
But let’s face it, the opposite has always been true… “I don’t use company X’s products because they are out to get me. But company Y has my best interests at heart based on this completely unsubstantiated statement in a press release.”
It’s part of human nature to want to believe.
Sodki,
I agree that the best policy is for everyone to see the code and validate it. But even if Apple’s software were completely open source, that doesn’t necessarily mean Apple is running that exact source with no hidden wiretaps – users still have to trust that Apple isn’t bugging the system.
Secondly, even if Apple is not silently modifying the server software, we still have to trust that they haven’t handed the keys over to agencies.
The messages are encrypted end to end, so Apple can’t just ‘hand over keys’; that would be useless (it would only allow the authorities to send you iMessages, which they can already do).
There is, in fact, nothing Apple can do to get your existing messages.
The only practical way Apple could help the authorities would be to turn off device authentication for a specific user and then attach a device to that user’s account so that iMessages would be distributed to it (as Apple synchronizes these).
Emphasis mine. That disclaimer would not be needed if they did not have the ability to read your iCloud backups. And a court order to obtain message history is not a wiretap; it would be a search and seizure warrant (which is much easier for police to get).
Weasel words FTW.
Well, you are certainly correct that Apple can be compelled to hand over a copy of your backup and potentially be compelled to decrypt it.
However this is not really a function of iMessage in any way. I can disable the backup and still use iMessage.
kristoph,
We usually don’t talk about vulnerabilities stemming from the “good guys”, but let’s be absolutely clear: since Apple has root access to your device, they have many practical ways of accessing user data if they want to. As long as someone else has control over your device, trust is paramount.
You misunderstood the passage in the article. Apple does not have the keys. Asymmetric keys are created on your devices, and the public key is sent to senders, who use it to encrypt messages that are sent back to you and can only be decrypted by your device.
Apple DOES NOT control the keys, but they do control the distribution of the keys, which is where the weak point is.
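In other words, the private keys never leave the devices and the server only ever distributes public keys. A minimal sketch of that happy path in Python using the cryptography package – the Device class and the directory dict are illustrative stand-ins, not Apple’s actual implementation (which uses a different hybrid scheme):

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

class Device:
    """Keys are generated on the device; the private key never leaves it."""
    def __init__(self):
        self._priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        self.public_key = self._priv.public_key()

    def decrypt(self, ciphertext: bytes) -> bytes:
        return self._priv.decrypt(ciphertext, OAEP)

# The server's key directory only ever holds public keys.
directory = {}

alice = Device()
directory["alice@example.com"] = alice.public_key

# A sender fetches the recipient's public key and encrypts with it...
ciphertext = directory["alice@example.com"].encrypt(b"meet at noon", OAEP)

# ...and only the device holding the private key can decrypt.
assert alice.decrypt(ciphertext) == b"meet at noon"
```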
kristoph,
Lo and behold, if Apple wanted to, they could modify their service to wiretap a device’s iMessages by giving devices a fraudulent key for encryption. Now with this context in mind, re-read my original post: “even if Apple is not silently modifying the server software, we still have to trust that they haven’t handed the keys over to agencies.” In retrospect I probably needed to be much more verbose, but this means that not only can Apple pull off this attack by implementing a targeted wiretap in their service, another party can also pull it off if they have Apple’s private keys (in order to impersonate Apple’s servers when iMessage asks which keys to use for encryption).
Do you understand how it’s possible for an agency to implement wiretapping capabilities for iMessage, assuming they can get Apple’s private keys? Let me know if there’s anything I haven’t explained sufficiently.
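To spell it out, here’s a toy version of the attack, with the same hypothetical names as in my earlier sketch: the compromised directory hands the sender a wiretap key instead of the real one, so the sender unknowingly encrypts to the agency.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def keypair():
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    return priv, priv.public_key()

alice_priv, alice_pub = keypair()
agency_priv, agency_pub = keypair()

def compromised_lookup(account):
    # A directory under wiretap orders substitutes the agency's key
    # for targeted accounts; everyone else gets the real key.
    return agency_pub if account == "alice@example.com" else alice_pub

# Bob trusts the directory and "encrypts to Alice".
ciphertext = compromised_lookup("alice@example.com").encrypt(b"meet at noon", OAEP)

# The agency, not Alice, can read it (in a real attack it would
# re-encrypt to Alice's true key and forward, so nobody notices).
print(agency_priv.decrypt(ciphertext, OAEP))  # b'meet at noon'
```

The client could only detect this by verifying key fingerprints out of band, which, as far as I know, iMessage gives users no way to do.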
kristoph,
Rereading the thread, it occurs to me that you read my posts assuming I was talking about Apple handing over USER keys to the NSA, whereas all along I’ve been talking about Apple handing over APPLE’s keys to the NSA.
With this clarification in mind, do you have any other disagreements with anything I’ve said?
They are already exempted. Clever bastards.
galvanash,
I have more faith in technological protections than legal ones. The trouble with legal protections is that the government can just ignore them. It’s interesting to see that the government gave itself an exception, but even without the exception, in the past they’ve simply ignored privacy laws and encouraged companies to do the same by shielding all parties from accountability – who remembers when Bush conducted illegal mass surveillance with AT&T?
http://www.cnet.com/news/should-at-t-be-held-responsible-for-nsa-co…
Due to gag orders, we only know about these programs when they leak, but we really need to assume that all major service providers are “under the influence”. Services like Lavabit honorably chose to shut down in response to government wiretapping demands, but most corporations aren’t going to do that. One of the legal tricks that has arisen as a consequence of gag orders is the “canary”: a regularly published statement signifying that some event has not happened. The disappearance of the canary implies that the company has been served with a gag order it’s not allowed to talk about. In theory, the government can’t order a company to say something against its will – it can’t force a company to keep publishing the canary.
Interestingly enough, Apple itself had implemented such a canary… and the canary disappeared!
https://gigaom.com/2014/09/18/apples-warrant-canary-disappears-sugge…
Some people seem to think that Apple just changed its mind about using the canary, but the fact that Apple isn’t specifically coming out and saying so suggests the canary is still in effect. If Apple is still following canary protocol, it implies that Apple was served with national security orders it’s not allowed to talk about.
Edit: I don’t think canaries have been challenged in court. It’d be tempting to use “canary” flags in a more fine-grained manner, with tons of canaries for different events and time periods.
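Something like this is what I have in mind – purely hypothetical, no provider publishes canaries this way:

```python
# One canary per (event type, time period). Each period, the provider
# republishes every canary it can still truthfully assert; any flag
# that goes missing marks the event and when it happened.
EVENTS = ["national_security_letter", "fisa_order", "other_gag_order"]
PERIODS = ["2015-Q1", "2015-Q2", "2015-Q3"]

expected = {(e, p) for e in EVENTS for p in PERIODS}

# Suppose the latest publication omits one flag...
published = expected - {("national_security_letter", "2015-Q3")}

# ...then readers can infer precisely which event occurred, and when.
for event, period in sorted(expected - published):
    print(f"canary missing: assume a {event} during {period}")
```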
You can easily examine what your phone is sending and receiving using commercially available software – you don’t need to dig through code (it’s sort of impractical to audit all the code that goes on your phone anyway, as opposed to simply looking at the communication stream).
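For example, here’s a rough sketch of watching the stream with the scapy package – it assumes a machine positioned to see the phone’s traffic (say, acting as the Wi-Fi access point), root privileges, and an interface name that matches your setup:

```python
from scapy.all import sniff

def show(pkt):
    # The payloads are encrypted, but you can still see who the device
    # talks to, when, and how much - i.e. the metadata.
    print(pkt.summary())

# 5223 is the port Apple's push service uses; 443 catches most of the rest.
sniff(iface="en0", filter="tcp port 443 or tcp port 5223", prn=show, count=50)
```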
If Apple is compelled by the courts to assist in an investigation, they’re going to modify the server software to ‘wiretap’ a user; they’re not going to mess with your phone.
What’s looking at code going to do? The absence of blatant backdoors/spying/privacy-f*ckery is no basis for trust. There are simply too many points of compromise, along with the fact that certain government agencies will do whatever they want irrespective of the law. Those agencies operate outside of the law and tell the public it doesn’t have a right to know anything about anything they’re doing. You know, because “national security”.
But hey, if looking at source code makes a person feel better about it, go ahead.
Yes, because everyone knows how to read code and validate it.
It doesn’t matter. Not everyone is a kernel developer, but there are kernel developers out there who can validate the work of others. Also, you can learn to read code or you can pay someone to do it for you. You do have a choice, unlike the alternative.
And at best doing any of those things is a red herring. If the code is clean you still can’t trust your information won’t be compromised elsewhere. You still can’t trust the code won’t be altered at some other point. You still can’t trust that the code you’re given is the code that is running. So, whether or not “the code” is clean, you’re still left without promise of privacy and security.
Instead of wanting so badly to place trust where it has no business being, maybe people should start from the premise that you can’t trust anyone. Not in an overly obsessive way, but to the point where they are proactive about protecting their own privacy and info instead of always defaulting to whatever is most convenient and crossing their fingers that nothing bad happens.
How many servers, databases, banks, companies, agencies, governments, etc. have to be compromised before people wake up to the fact that their information is neither safe and secure nor private? The expectation of those things only shows just how out of touch people are with the world in which they live.
Equally true is that as long as Apple retains the ability to update client software at will, Apple can rewrite it to reveal any private keys they want. To be fair, this vulnerability extends well beyond Apple – it applies to all auto-updating software. We implicitly trust that vendors will not send malicious updates.
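Pinned update signatures narrow that trust but can’t eliminate it, since the vendor still holds the signing key; they only stop third parties from tampering in transit. A sketch with made-up keys and blobs:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.exceptions import InvalidSignature

# Vendor side: sign the update blob with the release key.
signing_key = Ed25519PrivateKey.generate()
update_blob = b"...new client binary..."
signature = signing_key.sign(update_blob)

# Client side: the public key was pinned when the app was first installed.
pinned_key: Ed25519PublicKey = signing_key.public_key()

def verify_update(blob: bytes, sig: bytes) -> bool:
    try:
        pinned_key.verify(sig, blob)
        return True
    except InvalidSignature:
        return False

assert verify_update(update_blob, signature)             # genuine update
assert not verify_update(update_blob + b"x", signature)  # tampered update
```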
All in all though I think the article does a good job illustrating the perils of centralized crypto. The CONIKS link is an interesting idea:
https://eprint.iacr.org/2014/1004.pdf
To be fair, if it works the way Apple claims, their keys really are useless. All they have are public keys, i.e. they can encrypt things with any user’s key, but they can’t actually decrypt anything. Duping their servers would require Apple’s private keys, and if those ever got out we would all probably know very quickly…
http://www.apple.com/business/docs/iOS_Security_Guide.pdf (see page 35)
They do use an envelope that is encrypted with their own keys (containing essentially the routing information), but the payload itself is encrypted with the recipient’s public key, and (according to their white paper) the only place the private key exists is on the recipient’s device.
The weakness that does exist (theoretically) is that messages are not bound to devices, they are bound to accounts – ethereal things that can be bound to many devices. In fact every time someone sends you an iMessage, it is encrypted and stored separately for every device bound to your account, waiting for said device to retrieve it.
Apple doesn’t know the private keys, but they do control the servers that manage registering new devices on an account. Who is to say that the process could not be done covertly, registering a rogue device on someone’s account without their knowledge?
The weak link isn’t the cryptography, it’s the device registration, and Apple doesn’t disclose anything about how that stuff works on their end. You don’t need someone’s private key to decrypt their messages – if you have a device bound to the same account, you have your own keys that will work just fine…
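A toy sketch of that fan-out (all names made up): one ciphertext per registered device, so a covertly registered device simply receives its own decryptable copy.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def new_device():
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    return {"priv": priv, "pub": priv.public_key()}

# Alice's account starts with her phone and laptop...
devices = {"phone": new_device(), "laptop": new_device()}
# ...until the registration server quietly binds one more.
devices["rogue"] = new_device()

def send_to_account(devices, plaintext):
    # The fan-out: the message is encrypted separately for every device.
    return {name: d["pub"].encrypt(plaintext, OAEP) for name, d in devices.items()}

copies = send_to_account(devices, b"see you at 8")

# The rogue device decrypts its copy like any legitimate one.
print(devices["rogue"]["priv"].decrypt(copies["rogue"], OAEP))  # b'see you at 8'
```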
galvanash,
(Emphasis mine)
How would you know? To be sure, the NSA reserves “active attacks” for specific targets. Unless a security researcher happens to be using a targeted device and independently confirms the keys of all his friends, he would not uncover the wiretap keys. Additionally, he would not uncover passive surveillance of bulk metadata either (who/where/when).
I think the article raises legitimate concerns. Going through a single provider will always mean we have to implicitly trust the provider. To really fix the problem we need a federated protocol that allows users to independently confirm the authenticity of keys as well as binary executable signatures. Ideally, authentication services would be located in radically different jurisdictions so as to minimize the possibility that they’re colluding.
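A sketch of the kind of independent confirmation I mean – the directories and fingerprint scheme are invented for illustration: a client refuses a key unless every directory it queries, ideally in different jurisdictions, agrees on its fingerprint.

```python
import hashlib

def fingerprint(key_bytes: bytes) -> str:
    return hashlib.sha256(key_bytes).hexdigest()[:16]

# Stand-ins for lookups against independently operated key directories.
directories = [
    {"alice@example.com": b"KEY-BYTES-A"},  # jurisdiction 1
    {"alice@example.com": b"KEY-BYTES-A"},  # jurisdiction 2
    {"alice@example.com": b"KEY-BYTES-B"},  # jurisdiction 3 disagrees!
]

prints = {fingerprint(d["alice@example.com"]) for d in directories}
if len(prints) == 1:
    print("all directories agree; key is plausibly authentic")
else:
    print("mismatch - one directory may be serving a wiretap key:", prints)
```

CONIKS formalizes this idea with auditable, append-only key directories.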
The scariest thing, at least with AT&T here in the USA, is that iMessage is actually tied to regular messaging. Messages sent iMessage-to-iMessage are still accessible to me on another phone running AT&T’s proprietary messaging application (even on Android). Definitely scary, considering it downloads a huge number of recent messages to the app, no exception (I got tired of looking for older messages).
About the AT&T app I don’t know much.
The AT&T app in question is treated as another ‘device’, and messages to and from it are also encrypted end to end – so fear not, AT&T can’t spy on you (though they would be more likely to modify their app if the government asked, I’d wager, so you probably don’t want to use it).
It’s a little silly to worry about Apple, Google or Facebook helping the authorities.
If a government agency has the legal authority to surveil you they can simply take note of your password at a distance, pick you up, unlock your phone and read your messages. Or if they want longer term surveillance they can get your phone temporarily, use the password to authorize another device, and read your messages at their leisure.
It is far easier than taking some of the wealthiest companies in the world to court.
The difference is that they want it to be much easier, so that they can surveil everybody, instead of just a small subset of people who might be dangerous.
That’s what’s been happening: millions of people under surveillance, even people who have done absolutely nothing suspicious. And they want to be able to continue doing this.
About 95% of my messages consist of me typing variants of the following:
“Did you get a Guardian or should I?”
“near shop do we need milk?”
“will be home in ten minutes put kettle on”
“Forgot shopping list what do I need to get at supermarket?”
“stuck in traffic”
I pity any person forced to read through pages of that sort of stuff; if the government wants to spy on my messages, I hope they’re paying the poor sap who has to read them all enough to compensate for the tedium.
The high point will be the occasional use of an oddly chosen, unusual emoji – usually caused by a typing error.
Interesting. When Google uses anonymised, grouped data for ads, you’re all up in arms about privacy.
When it affects Apple, it’s suddenly no big deal because you’re not interesting anyway.
The hypocrisy. It burns.
You do get so upset by such trivia, in this case a bit of light humour.
Here is what I think.
I don’t personally care if either Apple or Google collect my data.
I do think that the business models of Apple and Google are different, and that those differences mean data mining is central to Google’s revenue-earning model and utterly trivial in relation to Apple’s. Hence, if one is concerned about tech companies collecting one’s data (which personally I am not), one should be a lot more concerned about Google doing it, as they have to do it to stay in business, than about Apple doing it.
Naturally, the different roles and weight of data collection in the business models of Google and Apple mean that Apple can always trump Google on privacy, because forgoing revenue by not collecting data is trivial for Apple and non-trivial for Google. This means Apple will always try to ramp up privacy, because it is easy to press Google on this particular flank.
If one were concerned about the government accessing one’s data (which I am not), it would probably make more sense to worry about the company that is forced by its business model to build data collection deeply into its operations than about the company whose business model makes data collection a trivial part of its business.
I can’t see any hypocrisy in any of that but if you can then please do elucidate.
The data Google uses is neither anonymized nor is it grouped. It would be useless to use anonymized/grouped ad targeting.
Google targets ads specifically to individuals ( which it can identify uniquely ) based on their browsing habits.
It would be a simple matter, for example, for an unfriendly government to compel Google to provide the IP address any time they serve an ad to an individual who searched for ‘democracy’.
But this really has nothing to do with iMessage, which is the topic of this article.
meanwhile, hit me up on TextSecure (supports federation at least in theory, though current environment doesn’t really lend itself to “turn up your own server!”) or XMPP running on my own gear with TLS’d federation enabled and OTR available.
*crickets*
Yes, I know message carbons on OTR’d exchanges suck. Yes, I know media exchange (images; video; files) via XMPP is still a crapshoot with various XEPs not really solving the problem. But I can dream, can’t I? and lament that the network effect means centralized services still dominate our instant messaging channels when federated options exist?
justanothersysadmin,
Can’t upvote you, but absolutely right!
The popular services choose proprietary centralized designs because it benefits their business models. Users choose popular services based on network effects. Ergo, everyone gets pushed towards proprietary centralized platforms based on their popularity rather than their merit for users. I dream about what could be, and I lament what is.