Thursday, July 24, 2008

"Encryption Chip" Will Not End Piracy

Nolan Bushnell is full of it. There, I finally got that off my chest. It's arguably acerbic and rather rude, but it needed to be said. You have no idea how hard I've tried to avoid saying it. After all, he's the founder of Atari and a historical figure in his own right. He deserves a certain respect for that.

The first time Nolan Bushnell claimed that the "encryption chip" would end piracy, I exercised due restraint. His statement reverberated all over the Internet, provoking reactions that ranged from mild skepticism on one end of the spectrum to derision and disgust on the other.

So why am I writing now, more than two months later, if nobody believed him in the first place? In other words, why am I beating a dead horse? Partly, it's because he did it again and it pisses me off. Mostly, though, it's because I'm rather interested in copy protections and security; it's sort of a hobby of mine.

The most important lesson you learn in those two fields is that no protection is perfect and that every solution spawns a new class of problems. This means that there will never be one final (technical) solution to the issue of piracy; there is no silver bullet. The experts in both fields are locked in an arms race with their adversaries. Once you've learned that, you'll have no problem recognizing that Nolan Bushnell is really just flogging his merch.

However, the issue runs deeper than that.

Copy Protection and Security

When I referred to copy protections and security, I said "two fields", even though one can be considered a subset of the other; after all, copy protections are supposed to prevent the unauthorized use of software. Even though this is technically true, there are some drastic differences between the two.

An important difference is the level of cooperation from the users. When it comes to information security, the users actively cooperate with the protection systems, because it's in their best interest. You don't give access to your bank account to all your friends, do you?

On the other hand, copy protections often clash with the users' interests. Some of these interests are illegal, such as downloading a commercial game for free. But other interests are quite legal and legitimate. You added more memory to your computer? Odds are you might have to reactivate your Windows.

Another important difference is that a copy protection has to protect an application that lives on the user's computer. Unless we're talking about an MMOG, there's no server counterpart that executes a critical piece of code, without which the game can't work.

When you put those two things together, it becomes obvious why you can't make a perfect copy protection: you're relying on cooperation from a user who has complete control over his copy of your content or software. If that user doesn't want to cooperate, the best you can do is delay him. Even unbreakable ciphers won't help you, because sooner or later you'll have to decrypt the content and, when you do, the user will nab it.
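To make that concrete, here's a toy sketch in Python. The "cipher" is a hash-based stand-in and every name is illustrative, not any real DRM scheme; the point is that the player has to produce the plaintext to do its job, and the user owns the process it runs in:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Hash-based toy stream cipher; a stand-in, not a real cipher."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def crypt(key: bytes, data: bytes) -> bytes:
    # XOR stream: the same call encrypts and decrypts
    return bytes(b ^ k for b, k in zip(data, keystream(key, len(data))))

def play(key: bytes, protected: bytes) -> None:
    content = crypt(key, protected)      # plaintext now sits in the user's RAM
    print("playing:", content.decode())  # stand-in for rendering the game

def rip(key: bytes, protected: bytes) -> None:
    # The user owns the machine, so nothing stops this variant instead:
    with open("copy.bin", "wb") as f:
        f.write(crypt(key, protected))   # clean copy; the cipher was never broken

key = b"ships-inside-the-game-somewhere"  # it must, or the game couldn't run
protected = crypt(key, b"the actual game content")
play(key, protected)
rip(key, protected)
```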

But what if you could alter these conditions? You could make sure that a critical part of the application executes somewhere the user doesn't control: that's what MMOGs do (see the sketch below). The other option is to take control away from the user.
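A quick sketch of the MMOG-style split, with hypothetical names and a made-up game rule: the client is freely copyable, but it's useless without the server's half of the logic:

```python
# Server side: the critical rule lives here and never ships to the player.
def server_resolve_combat(atk: int, defense: int) -> bool:
    return atk * 2 > defense             # hypothetical game rule, server-only

# Client side: a pirated client can render graphics all day long, but every
# outcome has to come back from a machine the player doesn't control.
def client_attack(send_to_server, atk: int, defense: int) -> bool:
    return send_to_server(atk, defense)  # stand-in for a network round trip

print(client_attack(server_resolve_combat, atk=10, defense=15))  # True
```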

Trust Controversy

Enter "trusted computing". The first time I heard of it was back when Microsoft was touting Palladium. Back then, it sounded like a bad pun: a company found guilty in an antitrust lawsuit proposing to build a "trusted computing platform" for its users. The irony was not lost on anyone, and it provoked some enlightening responses from security experts.

Then, since nothing really seemed to happen and we didn't all suddenly wake up in some digital equivalent of 1984, I lost track of this topic for a while. I forgot about it until Nolan Bushnell started his TPM hype. A quick search engine query revealed that TPM stands for "Trusted Platform Module" and that it's the central component of "trusted computing".

What, then, is the so-called "trusted computing"? It's a technology that encompasses the following concepts:
  1. Endorsement key is a cryptographic key pair unique to one computer. The chief use for it is to prove the computer's identity.
  2. Secure I/O makes sure that the communication between the user and their software is secure and cannot be intercepted or altered.
  3. Memory curtaining protects those parts of memory that contain sensitive data (such as cryptographic keys) from unauthorized access, even by the operating system itself.
  4. Sealed storage binds data to the specific platform -- both hardware and software -- so that you cannot access it from any other platform.
  5. Remote attestation allows authorized parties to detect changes to the platform configuration in order to make sure that they meet the expected parameters; in other words, to prove that nobody tampered with the platform.
That's just a brief summary, to give you an idea of what we're talking about here. If you want more information, I recommend that you start at Wikipedia and then go on directly to the Trusted Computing Group site.
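Since remote attestation and the endorsement key will matter most for the rest of this post, here's a toy sketch of how they fit together. HMAC and plain hashes stand in for the TPM's real asymmetric primitives (in reality the verifier would hold only a public key), and every name is hypothetical:

```python
import hashlib, hmac

# Per-machine secret; HMAC stands in for the TPM's real asymmetric
# endorsement/attestation keys, so here (unrealistically) the verifier
# shares it. In reality it would hold only the public half.
ENDORSEMENT_KEY = b"unique-per-machine-secret"

def measure(components: list) -> bytes:
    """PCR-style "extend": hash each boot component into a running digest."""
    digest = b"\x00" * 32
    for c in components:
        digest = hashlib.sha256(digest + hashlib.sha256(c).digest()).digest()
    return digest

def attest(components: list, nonce: bytes):
    """Platform side: sign the current measurement plus a fresh nonce."""
    m = measure(components)
    return m, hmac.new(ENDORSEMENT_KEY, m + nonce, hashlib.sha256).digest()

def verify(m: bytes, sig: bytes, nonce: bytes, expected: bytes) -> bool:
    """Remote side: the signature proves which machine is talking, the
    measurement proves which configuration it is running."""
    good = hmac.new(ENDORSEMENT_KEY, m + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(sig, good) and m == expected

boot = [b"bios-1.2", b"bootloader-3.1", b"kernel-5.0"]
m, sig = attest(boot, b"fresh-nonce")
print(verify(m, sig, b"fresh-nonce", measure(boot)))  # True; any change flips it
```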

So, the core idea is to make computers more secure by ensuring that no "untrusted" code has access to your stuff. At least, that's supposed to be the core idea. Unfortunately, there has been a great deal of confusion about the word "trust" in "trusted computing". Specifically, who is supposed to trust whom?

If you read Bruce Schneier's essay on "trusted computing", you'll notice that there's a good deal of controversy and confusion surrounding the issue. As one commenter so aptly put it, the only one not trusted seems to be the owner of the computer.

All Your Base

Each of the five concepts of "trusted computing" addresses a real security problem:
  1. Endorsement keys would be used to mitigate spoofing concerns in secure transactions by establishing the identity of each party involved.
  2. Secure I/O is supposed to avoid security breaches through techniques such as keylogging.
  3. Memory curtaining would make sure that sensitive information, such as cryptographic keys, is not allowed to "leak" somewhere where it could be extracted by malicious parties.
  4. Sealed storage would do a similar thing for sensitive information in non-volatile storage.
  5. Remote attestation could help network administrators easily detect intrusions and attacks on their machines.
Yet, after a closer look at them, it becomes evident that there's plenty of room for abuse. Imagine, for example, a system that enforces specific usage policies on your data:
  • It would use sealed storage to bind that data to a particular application or set of applications that you're allowed to use on that data.
  • It would employ memory curtaining to make sure you cannot extract that data directly from memory.
  • It would use secure I/O to make sure you cannot intercept it on its way somewhere else.
  • It would use remote attestation to report if you tamper with any part of the system.
  • And it would clearly identify you as a "culprit" to whoever is interested in enforcing those policies, if it possessed both your personal information and your endorsement key.
Is there any kind of usage policy that springs immediately to mind? There are two, actually: DRM and vendor lock-in. Ross Anderson describes several ways to abuse TC in his FAQ. Richard Stallman dedicates a whole chapter to this topic in his book "Free Software, Free Society"; although slightly reminiscent of the Book of Revelation in tone, it offers some very interesting insights.
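To see how the pieces from the list above click together, here's a toy sketch of sealed storage enforcing a usage policy. The "cipher" and every name are illustrative, not any real TPM interface:

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    stream, ctr = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(d ^ s for d, s in zip(data, stream))

def seal(data: bytes, platform: bytes, app: bytes) -> bytes:
    # The key is derived from the exact platform *and* application identity:
    # a different OS or a patched player derives a different key, so the
    # sealed data simply won't decrypt anywhere else.
    key = hashlib.sha256(b"seal" + platform + app).digest()
    return toy_cipher(key, data)

unseal = seal  # XOR stream: sealing and unsealing are the same operation

song = seal(b"some licensed song", b"known-good-os", b"approved-player-v1")
print(unseal(song, b"known-good-os", b"approved-player-v1"))  # plays
print(unseal(song, b"known-good-os", b"patched-player-v1"))   # garbage
```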

Another interesting aspect of "trusted computing" is that it actually raises the stakes in information security: what if a worm successfully exploited a bug in the supposedly secure OS code to install a "trusted" rootkit? Talk about irony.

Pirates vs. Ninjas

Getting back to the original topic, does this mean that Nolan Bushnell is right? Is his "stealth encryption chip" really going to send all the pirates to Davy Jones's locker? Not by a long shot! Remember: unless the software in question has some critical code running on a computer under the control of some "authority", you can eventually break its copy protection.

When it comes to policy enforcement, the most important part of "trusted computing" is remote attestation: it is what ensures you haven't tampered with the policy enforcement code. Incidentally, it requires you to be online. Now back up a couple of months and remember what happened when BioWare tried to pull that trick on its players.

If you believe that pirates can't hide behind this forever, think again. There are numerous valid reasons to resist attempts to introduce an artificial dependency on an Internet connection into software, and they all boil down to one: the connection is not always available, yet the artificial nature of the dependency means that the software doesn't actually need it to work properly.

Besides, people are not yet convinced that "trusted computing" will actually make things better, and there are all sorts of concerns about privacy and about the practicality of the whole approach. Still, the Trusted Computing Group has been formed, the commercial motivation is there, and "trusted computing" will keep rolling until all doubts and concerns have been dealt with, one way or another.

What, then, is the worst blow "trusted computing" could deal to pirates? Indulge me a bit, as I let my imagination run wild and explore a "what-if" future.

Crack Dealers

Back when I was a little kid, learning what made the cute little Spectrum 48 tick, pirates were selling games on audio tapes. Today, pirated games are free. They are cracked for free, by enthusiasts; they are uploaded for free, to sites that survive on advertising or donations; and they are downloaded for free. I can still see some pirates in the streets, selling CDs and DVDs, but I'm sure they won't be buying any Ferraris with that money.

Fast-forward to a time when "trusted computing" is in full swing. To crack protections, pirates need highly specialized software, maybe even some hardware, and a lot more effort than before. More than ever, piracy is something that only a select few can do.

However, it is also more lucrative than ever. As the usage policies are enforced more rigorously, the multitudes who used to obtain their entertainment for free now have to go and buy it. The big companies take advantage of that and the prices are even higher than before. You can buy an overpriced game directly from its publisher; or you can take a chance and go buy yourself a pirated copy from a local "software crack dealer". It's illegal, sure, but it's a lot cheaper and you can afford to buy a lot more.

Suddenly, pirates are not your everyday enthusiasts anymore; instead, they're rich criminals. They have bodyguards with guns. They have shady lawyers. They have money laundering enterprises and fake fronts and lots of connections. They know powerful people. In your efforts to eradicate a problem, you managed to make it mutate into something worse.

If this seems improbable and exaggerated, that's okay: I don't believe it's likely to happen. My point is that you should always be on the lookout for unintended consequences. It would be nice if, just for once, we asked ourselves where we're going before we get there.

3 comments:

Anonymous said...

This is a good analysis, but I think you're off in a couple of places.

The main one is that it is not necessary to log in over the Internet for a check every few days. Instead, remote attestation is used at the time you download the activation key and/or the encrypted software, which is then sealed to your TPM. From then on, the TPM only unseals it and lets the code run if you are in a secure configuration, i.e. no hacked debuggers running, etc. So there is no need for special logins. (However, I don't really think this is as big a deal as it was made out to be; after all, we are online almost all the time anyway.)
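Roughly, in toy Python, with XOR and hashes standing in for the TPM's real sealing primitives and all names made up:

```python
import hashlib

def derive(measurement: bytes) -> bytes:
    return hashlib.sha256(b"seal" + measurement).digest()

def seal(secret: bytes, measurement: bytes) -> bytes:
    return bytes(s ^ k for s, k in zip(secret, derive(measurement)))

unseal = seal  # XOR: the same measurement inverts the sealing

KNOWN_GOOD = hashlib.sha256(b"clean OS, no hacked debugger").digest()

def activate(activation_key: bytes, attested: bytes) -> bytes:
    # One-time, online: the vendor checks the attestation, then the key
    # is sealed to the clean configuration and stored locally.
    if attested != KNOWN_GOOD:
        raise PermissionError("attestation failed")
    return seal(activation_key, attested)

def launch(sealed_key: bytes, current: bytes) -> bytes:
    # Every later launch, offline: unsealing against a changed platform
    # yields garbage, so the game never sees a working key.
    return unseal(sealed_key, current)

blob = activate(b"the-32-byte-game-activation-key!", KNOWN_GOOD)
print(launch(blob, KNOWN_GOOD))                                     # real key
print(launch(blob, hashlib.sha256(b"debugger attached").digest()))  # noise
```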

However, you are right that the TPM protections can at present be easily defeated with a bit of hardware hacking, which will allow extracting keys and distributing cheat codes. It appears to be very difficult, technically, to defend against hardware hacks. Intel is adding TPMs to their I/O controller chipsets and that may make it harder, but we will have to wait and see the details.

The one other place I disagree is the claim that if games are made more secure, prices will rise. I would expect prices to fall, since fewer people will pirate games and therefore the market is larger, allowing more volume to recoup game development expenses. The main thing keeping prices down is not competition from pirates, but competition from competitors. After all, in the real world there is no equivalent of "piracy" for physical products (i.e. you can't get them for free), yet they are not infinitely expensive. A business can't sell Froot Loops for too much or you will just buy Lucky Charms.

This notion that consumers are at the mercy of businesses where pricing is concerned is probably the biggest myth in this whole field of DRM. If you have ever run a business you should have quickly learned that from the other side of the table, quite the opposite is the case. Businesses take tremendous risks in setting their prices and are terrified of being undercut.

Unknown said...

Hi, Hal.

First of all, thanks for taking the time to read and to comment.

You're right that remote attestation, as a concept, does not require periodic logins. It wasn't really my intention to make it sound like that.

As a matter of fact, using the TPM to protect a one-time activation process is a far more elegant and intelligent protection than what BioWare attempted to implement. On the other hand, preventing piracy through multiple activations would require users to bind their identities to their endorsement keys, which leads to the loss-of-privacy discussion.

I still believe that there will generally be no need for hardware hacking. If you manage to subvert the OS, you can propagate that subversion upwards, just as trust is propagated upwards.

As for the prices, you are quite right about them being kept down by competitors. But I disagree on the effects of piracy on the prices. Before I go on, let me offer a disclaimer: in the game development startup that I co-own, I'm the least commercially savvy person ;-)

There are two points on which I disagree with you. One is that eliminating piracy would cause the market to grow substantially. I truly believe that people who play lots of pirated games wouldn't buy more games if piracy didn't exist. There are different motivators for playing pirated games, but I believe that the most important one is being unable to afford all the games you want.

If anything, eliminating piracy might cause the market to shrink a little: I know lots of people who play pirated games and then buy the ones they appreciated the most, as far as their budget allows.

The second point on which I disagree is the effect of (successful) DRM on prices, although here I disagree only partially. I believe that trusted computing would allow businesses to dictate complex content usage policies to such a degree that, even though the prices might seem low, the cost of consuming content could get a lot higher.

Right now, I rip songs from CDs I own, copy them to my iPod and listen to them as often as I want. If someone restricted me, for example, to listening to each one 10 times before I had to "rent" it again, this would cost me not only money, but also the time to choose which songs I'd be likely to "rent" this month and how I'd fit that into my budget. It might sound far-fetched, but it's just an idea.

However, I believe that supply and demand would sort out that kind of stuff quite quickly, which is why I said I only disagree partially ;)

JC said...

Hello,

First of all, very good article indeed.

As a computer security hobbyist, I must say you are absolutely right about the simple fact that there is no unhackable protection.

When it comes to a program's copy protection, the argument behind it is simple: whatever gives the code its value (its "real function") is not, by definition, intrinsically linked to the "security" features one tries to add... [Unless you are programming a security framework]. So those "aspects" can be separated, and the code will still fulfill its "primary function".

I agree with the old cliché that "security is not a feature" or, to put it at greater length, "you cannot secure a system after its initial design". Meaning: security must be a primary design goal for the result to be anywhere near effective. Consider that most standard hardware platforms have grown their security features over time, as add-ons...

Also, the central concept in modern-day security (including cryptanalysis) is COST or, to use a more classic word, FEASIBILITY.

All the practical mainstream cryptography used in computers today is, in absolute terms, breakable, and the same applies to security practices; the race revolves around making the algorithm as infeasible or as expensive to break as possible. Good security practice follows the same design principle: make the system as cheap as possible, but make a successful break-in as expensive as possible.
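To put a number on that cost, here's a quick back-of-the-envelope for exhaustive search of a 128-bit keyspace (the keys-per-second rate is just an assumed figure):

```python
keys = 2 ** 128                 # size of a 128-bit keyspace
rate = 10 ** 12                 # assume a (generous) trillion keys per second
seconds_per_year = 60 * 60 * 24 * 365
years = keys / 2 / rate / seconds_per_year   # expect success halfway through
print(f"~{years:.1e} years")                 # ~5.4e+18 years
```

Breakable in absolute terms, yes; feasible, not remotely.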

In the past years, the position of the hacker community has been that, instead of challenging the system as such, they look for implementation or design flaws to turn into "exploits", since a chain is only as strong as its weakest link.

There are practical ways to execute code securely, using public key cryptography and a processor with a unique key pair (a processor ID), so that code can be compiled and ciphered specifically for a particular processor and will run only on the processor it was ciphered for. Since the deciphering occurs inside the same chip that executes the ciphered code, there is little possibility of tapping into the deciphered data. Some military platforms use this idea today but, fortunately, for the publicly known architectures (including the ubiquitous x86) there is broad consensus that the cost of this solution is still too great for it to be commercially viable (things like performance, distribution, usability, portability and some legal issues stand in the way).
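A toy sketch of that idea, with a symmetric stand-in for the real per-processor key pair (in reality the vendor ciphers to the chip's public key and only the chip holds the private half; all names here are made up):

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    stream, ctr = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(d ^ s for d, s in zip(data, stream))

class Processor:
    """Toy CPU with a burned-in, per-chip secret (standing in for the
    private half of a processor-unique key pair)."""
    def __init__(self, chip_secret: bytes):
        self._secret = chip_secret             # never leaves the die

    def run_ciphered(self, blob: bytes) -> None:
        code = toy_cipher(self._secret, blob)  # deciphered inside the chip,
        exec(code.decode())                    # plaintext never crosses the bus

# Vendor side: compile and cipher for one specific processor.
secret = b"chip-0001-fused-secret"
blob = toy_cipher(secret, b"print('hello from ciphered code')")
Processor(secret).run_ciphered(blob)  # runs; any other chip would get noise
```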

With the cost of hardware and software continually declining, I'm afraid that consensus will not last as many years as I would like.

In today's market you may even find some security frameworks that have had some success. For many of them there is no public hack, so for practical purposes they are [still] secure.

The code to be executed doesn't even have to be ciphered; on the Xbox 360 platform, for instance, it just has to be properly signed to be executed. [There is a kernel exploit that gives access to hypervisor mode, but it only affects specific old versions of the kernel, so very few systems can be compromised and the exploit is practically a curiosity.]
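A minimal sketch of such a signed-code loader; HMAC stands in for the real public-key signature, and in reality the console would hold only the vendor's public verification key, never the signing key:

```python
import hashlib, hmac

VENDOR_KEY = b"console-vendor-signing-key"  # made-up; see caveat above

def sign(code: bytes) -> bytes:
    """Vendor side: run once, at publishing time."""
    return hmac.new(VENDOR_KEY, code, hashlib.sha256).digest()

def load(code: bytes, signature: bytes) -> None:
    """Console side: refuse to run anything the vendor didn't sign."""
    expected = hmac.new(VENDOR_KEY, code, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected):
        raise PermissionError("unsigned code; refusing to execute")
    exec(code.decode())

code = b"print('running: the vendor signed this')"
load(code, sign(code))  # runs; a single flipped bit in `code` would not
```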

Another fine example is some satellite TV security systems (like NDS VideoGuard [used by DirecTV]), which use public key cryptography to secure the key distribution system.

See ya!