The first time Nolan Bushnell claimed that the "encryption chip" would end piracy, I exercised due restraint. His statement reverberated all over the Internet, causing reactions that ranged from mild skepticism on one end of the spectrum to derision and disgust on the other.
So why am I writing now, more than two months later, if nobody believed him in the first place? In other words, why am I beating a dead horse? Partly, it's because he did it again and it pisses me off. Mostly, though, it's because I'm rather interested in copy protections and security; it's sort of a hobby of mine.
The most important lesson you learn in those two fields is that no protection is perfect and that every solution spawns a new class of problems. This means that there will never be one final (technical) solution to the issue of piracy; there is no silver bullet. The experts in both fields are locked in an arms race with their adversaries. Once you've learned that, you'll have no problem recognizing that Nolan Bushnell is really just flogging his merch.
However, the issue runs deeper than that.
Copy Protection and Security
When I referred to copy protections and security, I said "two fields", even though one can be considered a subset of the other; after all, copy protections are supposed to prevent the unauthorized use of software. While this is technically true, there are some drastic differences between the two.
An important difference is the level of cooperation from the users. When it comes to information security, the users actively cooperate with the protection systems, because it's in their best interest. You don't give access to your bank account to all your friends, do you?
On the other hand, copy protections often clash with the users' interests. Some of these interests are illegal, such as downloading a commercial game for free. But other interests are quite legal and legitimate. You added more memory to your computer? Odds are you might have to reactivate your Windows.
Another important difference is that a copy protection has to protect an application that lives on the user's computer. Unless we're talking about an MMOG, there's no server counterpart that executes a critical piece of code, without which the game can't work.
When you put those two things together, it becomes obvious why you can't make a perfect copy protection: you're relying on cooperation from a user that has complete control over his copy of your content or software. If that user doesn't want to cooperate, the best you can do is delay him. Even unbreakable ciphers won't help you, because sooner or later you'll have to decrypt the content and, when you do, the user will nab it.
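To make that concrete, here is a deliberately toy sketch in Python; the embedded key, the "asset" and the home-rolled keystream cipher are all invented for illustration and stand in for whatever a real copy protection scheme might use. The point it demonstrates is the one above: however strong the cipher, the game must decrypt its content on the user's machine, and at that moment the plaintext sits in memory the user fully controls.

```python
# Toy illustration: a protected application has to decrypt its own content
# to run, and the decrypted bytes end up on the user's machine regardless
# of how strong the cipher is. (Everything here is made up; do not use this
# "cipher" for anything real.)
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a simple SHA-256-based keystream of the requested length."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

SECRET_KEY = b"shipped-inside-the-game-binary"    # hypothetical embedded key
asset = b"level geometry, textures, dialogue..."  # hypothetical game asset

encrypted = xor(asset, keystream(SECRET_KEY, len(asset)))

# The game has no choice but to do this in order to run at all:
decrypted = xor(encrypted, keystream(SECRET_KEY, len(encrypted)))

# ...and the user, who owns the machine, can dump it right here
# (debugger, memory scanner, patched binary -- take your pick).
print(decrypted)
```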
But what if you could alter these conditions? You could make sure that there's a critical part of an application that executes somewhere where the user doesn't have control over it: that's what MMOGs do. The other option is to take away the control from the user.
Enter the "trusted computing". The first time I heard of it was back when Microsoft was touting Palladium. Back then, it sounded like a bad pun: a company found guilty in an antitrust lawsuit is proposing to build a "trusted computing platform" for its users. The irony was not lost on anyone and it provoked some enlightening responses from security experts.
Then, since nothing really seemed to happen and we didn't all suddenly wake up in some digital equivalent of 1984, I lost track of this topic for a while. I forgot about it until Nolan Bushnell started his TPM hype. A quick search engine query revealed that TPM stands for "Trusted Platform Module" and that it's the central component of "trusted computing".
What, then, is the so-called "trusted computing"? It's a technology that encompasses the following concepts:
- Endorsement key is a cryptographic key pair unique to one computer. The chief use for it is to prove the computer's identity.
- Secure I/O makes sure that the communication between the user and their software is secure and cannot be intercepted or altered.
- Memory curtaining protects those parts of memory that contain sensitive data (such as cryptographic keys) from unauthorized access, even by the operating system itself.
- Sealed storage binds data to a specific platform -- both hardware and software -- so that you cannot access it from any other platform (see the sketch after this list).
- Remote attestation allows authorized parties to detect changes to the platform configuration in order to make sure that they meet the expected parameters; in other words, to prove that nobody tampered with the platform.
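To make the sealed-storage idea a bit more tangible, here is a rough Python sketch of the concept only; the component names and the toy XOR "cipher" are invented for illustration and have nothing to do with the actual TPM interface. Data sealed against one platform measurement simply stops coming back out once that measurement changes.

```python
# Conceptual sketch of sealed storage: the sealing key is derived from a
# measurement of the platform (hardware + software), so changing the
# configuration changes the key and the sealed blob becomes unreadable.
import hashlib

def measure_platform(components: list) -> bytes:
    """Fold an ordered list of component identities into one measurement."""
    h = hashlib.sha256()
    for c in components:
        h.update(hashlib.sha256(c.encode()).digest())
    return h.digest()

def seal(data: bytes, measurement: bytes) -> bytes:
    ks = hashlib.sha256(measurement + b"seal").digest()
    ks = (ks * (len(data) // 32 + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, ks))

unseal = seal  # XOR construction: same operation both ways (toy cipher)

original = measure_platform(["bios-1.02", "bootloader-2.1", "os-kernel-5.0"])
blob = seal(b"license key / saved credentials", original)

# Same platform: unsealing works.
print(unseal(blob, original))

# Platform changed (say, a different kernel): the measurement differs,
# the derived key differs, and the blob comes out as garbage.
changed = measure_platform(["bios-1.02", "bootloader-2.1", "patched-kernel"])
print(unseal(blob, changed))
```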
So, the core idea is to make computers more secure, by ensuring that no "untrusted" code has access to your stuff. At least, that's supposed to be the core idea. Unfortunately, there has been a great deal of confusion about the word "trust" in "trusted computing". Specifically, who is supposed to trust whom?
If you read Bruce Schneier's essay on "trusted computing", you'll notice that there's a good deal of controversy and confusion surrounding the issue. As one commenter so aptly put it, the only one not trusted seems to be the owner of the computer.
All Your Base
Each of the five concepts of "trusted computing" addresses a real security problem:
- Endorsement keys would be used to mitigate spoofing concerns in secure transactions by establishing the identity of each party involved.
- Secure I/O is supposed to prevent security breaches that rely on techniques such as keylogging.
- Memory curtaining would make sure that sensitive information, such as cryptographic keys, is not allowed to "leak" somewhere where it could be extracted by malicious parties.
- Sealed storage would do a similar thing for sensitive information in non-volatile storage.
- Remote attestation could help network administrators easily detect intrusions and attacks on their machines.
But the same five concepts can be turned around. Suppose somebody wanted to enforce a usage policy on a piece of data sitting on your computer; a "trusted" platform would be perfectly suited to enforce it against you, its owner (see the sketch after this list):
- It would use sealed storage to bind that data to a particular application or set of applications that you're allowed to use on that data.
- It would employ memory curtaining to make sure you cannot extract that data directly from memory.
- It would use secure I/O to make sure you cannot intercept it on its way somewhere else.
- It would use remote attestation to report if you tamper with any part of the system.
- And it would clearly identify you as a "culprit" to whoever is interested in enforcing those policies, if it possessed both your personal information and your endorsement key.
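Here is what that might look like, sketched in the same toy Python style as before. The policy, the hashes and the "endorsement ID" are all hypothetical, and real enforcement would live in hardware and the operating system rather than in a script the owner can read; the sketch only shows how the building blocks point the other way -- at the owner of the machine.

```python
# Back-of-the-envelope sketch of the enforcement side. Nothing here is a
# real TPM API; the names and values are illustrative only.
import hashlib

ALLOWED_APPS = {hashlib.sha256(b"OfficialPlayer v3.1").hexdigest()}       # the policy
EXPECTED_PLATFORM = hashlib.sha256(b"approved bios+os stack").hexdigest()  # expected config

ENDORSEMENT_ID = "EK-public-key-fingerprint-1234"  # hypothetical machine identity

def request_content(app_binary: bytes, platform_blob: bytes) -> str:
    app_hash = hashlib.sha256(app_binary).hexdigest()
    platform_hash = hashlib.sha256(platform_blob).hexdigest()

    if app_hash not in ALLOWED_APPS:
        # Remote attestation would report this, tagged with the endorsement key.
        return f"DENIED: unapproved application (reported as {ENDORSEMENT_ID})"
    if platform_hash != EXPECTED_PLATFORM:
        return f"DENIED: platform tampered with (reported as {ENDORSEMENT_ID})"
    # Only now would sealed storage release the key, while memory curtaining
    # and secure I/O keep the plaintext out of the owner's reach.
    return "content decrypted for the approved player only"

print(request_content(b"OfficialPlayer v3.1", b"approved bios+os stack"))
print(request_content(b"my own media player", b"approved bios+os stack"))
```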
Another interesting aspect of "trusted computing" is that it actually raises the stakes in information security: imagine a worm that successfully exploits a bug in the supposedly secure OS code and installs a "trusted" rootkit. Talk about irony.
Pirates vs. Ninjas
Getting back to the original topic, does this mean that Nolan Bushnell is right? Is his "stealth encryption chip" really going to send all the pirates to Davy Jones's locker? Not by a long shot! Remember, if the software in question does not have some critical code running on a computer under the control of some "authority", you can eventually break its copy protection.
When it comes to policy enforcement, the most important part of "trusted computing" is remote attestation: it's what guarantees that you haven't tampered with the policy enforcement code. Incidentally, it requires you to be online. Now back up a couple of months and remember what happened when BioWare tried to pull that trick on its players.
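For the sake of illustration, here is a toy challenge-response in Python that shows why: the verifier has to send a fresh nonce and check the reply, so no connection means no attestation. The shared-key HMAC below merely stands in for the chip's actual signature scheme; nothing here is the real protocol.

```python
# Toy remote-attestation round trip: the server challenges, the "chip"
# answers with a keyed digest over the challenge and the platform state,
# and the server checks it against the expected state. Illustrative only.
import hashlib, hmac, os

ATTESTATION_KEY = os.urandom(32)  # stand-in for the chip's attestation key
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved game + os stack").digest()

def client_quote(nonce: bytes, measurement: bytes) -> bytes:
    """What the user's machine would send back for a given challenge."""
    return hmac.new(ATTESTATION_KEY, nonce + measurement, hashlib.sha256).digest()

def server_verify(nonce: bytes, quote: bytes) -> bool:
    expected = hmac.new(ATTESTATION_KEY, nonce + EXPECTED_MEASUREMENT,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote)

nonce = os.urandom(16)  # fresh challenge from the server
print(server_verify(nonce, client_quote(nonce, EXPECTED_MEASUREMENT)))                # True
print(server_verify(nonce, client_quote(nonce, hashlib.sha256(b"cracked").digest())))  # False
# No connection, no nonce, no check -- which is exactly the dependency
# players pushed back on.
```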
If you believe that pirates can't hide behind this forever, think again. There are numerous valid reasons to resist attempts to introduce an artificial dependency on an Internet connection into software, and they all boil down to one: the connection is not always available, yet the artificial nature of the dependency means the software doesn't actually need it to work properly.
Besides, people are not yet convinced that "trusted computing" will actually make things better, and there are all sorts of concerns about privacy and about the practicality of the whole approach. Still, the Trusted Computing Group has been formed, the commercial motivation is there, and "trusted computing" will keep rolling until all doubts and concerns have been dealt with, one way or another.
What, then, is the worst blow "trusted computing" could deal to pirates? Indulge me a bit, as I let my imagination run wild and explore a "what-if" future.
Back when I was a little kid, learning what makes the cute, little Spectrum 48 tick, pirates were selling games on audio tapes. Today, pirated games are free. They are cracked for free, by enthusiasts; they are uploaded for free, on sites that survive on advertising or donations; and they are downloaded for free. I can still see some pirates in the streets, selling CDs and DVDs, but I'm sure they won't be buying any Ferraris with that money.
Fast forward to the time when "trusted computing" is in full swing. To crack protections, pirates need highly specialized software, maybe even custom hardware, and far more effort than before. More than ever, piracy is something that only a select few can do.
However, it is also more lucrative than ever. As the usage policies are enforced more rigorously, the multitudes who used to obtain their entertainment for free now have to go and buy it. The big companies take advantage of that and the prices are even higher than before. You can buy an overpriced game directly from its publisher; or you can take a chance and go buy yourself a pirated copy from a local "software crack dealer". It's illegal, sure, but it's a lot cheaper and you can afford to buy a lot more.
Suddenly, pirates are not your everyday enthusiasts anymore; instead, they're rich criminals. They have bodyguards with guns. They have shady lawyers. They have money laundering enterprises and fake fronts and lots of connections. They know powerful people. In your efforts to eradicate a problem, you managed to make it mutate into something worse.
If this seems improbable and exaggerated, that's okay: I don't believe it's likely to happen. My point is that you should always be on the lookout for unintended consequences. It would be nice if, just for once, we asked ourselves where we're going before we get there.