Jonathan Zittrain would like to continue his conversation with Groklaw. He says your comments so far have been incredibly helpful to him. He's written a new FAQ just for Groklaw, explaining what he wrote so that those of you who had trouble understanding his article, or who had style issues, can give him your input without any barriers. He also answers some of the questions you asked him. He really appreciates your comments and emails and would like more; he tells me he only today finished answering the last email from you, and that's saying something. So please take a look and give him any additional thoughts.
Just as a reminder, Zittrain is Professor of Internet Governance and Regulation at Oxford University and Jack N. & Lillian R. Berkman Visiting Professor of Entrepreneurial Legal Studies at Harvard Law School.
If there are other lawyers who would like input from the tech community here at Groklaw, just let us know. One of the things Groklaw is aiming for is to build a bridge between lawyers and the tech community. We understand how important it is for lawyers to understand the technical aspects of intellectual property law cases, and we're happy to help. Of course, you need to really want to know what readers think, because Groklaw folks will tell you. But if you're like Jonathan Zittrain and you can take a licking and keep on ticking, you will surely get helpful information. We can provide input either privately or, as here, in public, for the stout of heart.
And with that, I'll let Jonathan speak. He answers some of your main criticisms, and he defines what he meant by "generative," because so many of you asked about that. And now that he clarifies, you'll likely be interested in what he writes about TiVo, which relates to our other main topic of the week.
Dear Groklaw readers,
Thank you for checking out my paper. The responses were overwhelming, and extremely helpful, both on style and substance. Here's a FAQ I've been working on to collect and respond to what I've seen -- and to try to state the thesis succinctly for those who didn't make it much past the abstract. You'll see that I'm essentially sticking to my guns, but the debate is helping me to think much more clearly about just what I mean to say. I'm hoping the book can be that much better than the paper thanks to this. I look forward to continuing the discussion.
Groklaw Generative Internet FAQ 1.0
31 July 2006
1. Your paper is densely written and to many people it's, uh, unreadable. Can you explain its point in a straightforward paragraph?
I am worried about the "appliancization" of the Internet. I see a possibility that the physical devices that mainstream Internet users commonly use to access the network will be much more limited in the outside code that they can run, and more directive to users about what to do or where to go online. In other words, the Internet will become as boring as television, and as limited in the audiences who can contribute to it. This concern connects with free software issues, but it is not identical to them. This is because, for example, a box can be built using free software that is not readily modifiable by mainstream users (TiVo is a good example), while PCs running proprietary operating systems can be nearly completely reprogrammed and repurposed with a click or two, or a CD-ROM. The more that mainstream users access the network using information appliances, the fewer opportunities there will be to easily deploy innovative new applications, especially those whose value increases as more people use them (e.g. Internet telephony or filesharing networks). I see reasons why regulators might want to push information appliances, since they are more regulable than open PCs (consider the way that TiVo can set up its box so that commercials can't be automatically skipped, or certain flagged shows can last only a short period of time).
2. PC sales seem to be doing quite well, and TiVos seem like extra appliances in a house, not substitutes. WebTV was a joke. So is this a real problem?
It could be. I see more and more TiVo-like devices out there -- Xboxes, mobile phones, Blackberries, and iPods. None is easily reprogrammed by its users, even if it's only a matter of their wanting to run someone else's code rather than write it themselves. Further, PCs themselves are heading in a more locked-down direction. The paper argues that a set of worries loosely grouped under "security" will cause users to favor such lockdown, even against their own interests. I see the virus problem as part of this phenomenon, and poor OS design can make PCs more vulnerable to viruses than they should be. But there's a more fundamental trade-off at work here: if users are to be in control of their machines, they need to make decisions about what should and shouldn't run on them, and many users don't have enough information or patience to make that decision. Interventions having to do with secure application environments, access control lists, or a better idea of what scope of privileges a particular piece of code is supposed to have, can be helpful, but they don't solve this bigger problem. If we limit what outside code can do, in order to minimize the harm it can wreak as a virus, we are also raising barriers to how easily *good* code can come to run comfortably on a user's machine. The magic of the Web is that you can go to a page and click to download and run something immediately -- it's a piece of functionality that comes at the cost, today, of users potentially making the wrong choices. Log them in as mere power users rather than admins, on OSs that easily support such functionality, and the problem is mitigated, but not completely solved.
Asking that a user log out and back in as a superuser in order to approve certain code can be a good idea, but it also greatly complicates the steps a user needs to take to make new code work, especially new code that makes big changes to the way the machine operates, or that seeks to change the way another piece of software (and its data) works on the machine. The problem is analogous to that of firewalls: they're helpful against some kinds of attacks, but at the end of the day either (1) the user must decide whether to let a particular program open a port, or (2) the user must be disempowered from making such decisions so he or she doesn't make any mistakes, moving the firewall upstream -- and limiting user freedom. Bad programs might themselves still find ways around blocked ports or NAT, and in the meantime non-bad programs such as Skype have to go to greater lengths to interoperate, making for more effort and investment to build them.
3. What does "generative" mean?
I wanted a label to capture the idea of a general-purpose platform in someone's hands that can be easily repurposed by that person, using code written by others. I chose the word generative, since it gets at the idea of something that's able to produce new things. To me, both the Internet and the PCs traditionally hooked up to it are that way, in particular because they allow people to build and distribute new code (and have that code itself use the network) with no major barriers. There are lots of academics in the cyberlaw space worried about keeping the Internet open, and I'm more or less in that camp. But no one has focused on whether the PC itself will stay open. Without an open PC, the ability of the Internet to generate new things is severely limited. I know that some people think that so long as any device has a browser and can get to anywhere on the Web, there's little need to reprogram the device, but I think that's short-sighted. The browser is just one way to use the Internet.
4. Why aren't you harsher on Microsoft, or more focused on the difference between Windows and those OSs that don't share its vulnerabilities?
Because I see the real problem as how to empower users to run the code they want, allowing that code to change as much as it cares to on the users' machines, without having bad code get in and do bad things to the machine. Choice of OS can help with some security vulnerabilities -- such as how often the user might unknowingly encounter self-executing content that then slips its leash and does more to the machine or its data than the OS designer intended -- but it does not alone solve this larger problem. (For those who asked: I run Windows, OS X, and Debian GNU/Linux.) For the purposes of my argument, which seeks to shed light on a way of thinking about these problems that isn't the same as the standard free vs. proprietary debates (on which I've also weighed in, at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=529862), I want to highlight that *today's* MS system still allows nearly any plank of functionality to be tweaked or rewritten without gatekeeping. Whether that will be true for Vista or future OSs is exactly the question I want to raise. I'm worried that the next MS PC OS will be a la Xbox, with only licensed coders being able to distribute their apps, and the public not gravitating towards alternatives -- or even knowing that they exist.
4.5. Isn't OS monoculture really the problem?
I think monoculture is a serious problem, but monoculture can transcend any given OS -- consider the ASN.1 or Kerberos bugs that affect multiple systems since they inhere in the points of interoperability. In that sense the Net itself is a monoculture. This doesn't take away from the monoculture point, which I think is powerful, but the problem I point to is a more fundamental one. Aside from the bugs that are magnified when the buggy system is run so widely, we face the problem that e2e suggests users ought to get to make the call as to what to run on their powerful machines -- and then make the wrong call.
5. Suppose we assume you're right about the general problem. What do you say should be done?
I suggest a few things in the paper.
a. I'd like to get ISPs to take action against obviously compromised machines on their network (more on that in #8 below).
b. I think that virtual machine environments -- simplified in the draft to a "red/green" architecture -- may offer some technical help, since users could then run "risky" software and not find it catastrophic if the software turned out to be destructive. That's because data from important apps would be in the "green zone," and the red zone would have a quick "reinstall me to my original pristine state" switch. This has some of the drawbacks I discuss in #2 above about secure application environments, and it raises the need for a Checkpoint Charlie to decide when to move data from one zone to the other. Plus there's the problem of deciding what belongs in what zone, and deciding who makes that decision. The comments so far here have been very helpful to my thinking about this solution.
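For the technically inclined, the red/green idea above can be sketched as a toy model. This is purely illustrative -- a real implementation would involve virtual machines or OS-level sandboxing, and every name here is invented -- but it shows the two properties the paper cares about: wiping the red zone costs nothing in the green zone, and moving data between zones requires an explicit decision.

```python
# Toy model of the "red/green" architecture described above: risky code runs
# in a "red" zone that can be wiped back to a pristine snapshot at any time,
# while trusted data lives in a separate "green" zone the reset never touches.
# All class and method names are illustrative, not any real VM or sandbox API.

import copy

class RedGreenMachine:
    def __init__(self):
        self.green = {}                                # trusted: survives resets
        self.red = {}                                  # risky: disposable
        self._pristine_red = copy.deepcopy(self.red)   # snapshot of clean state

    def install_risky(self, name, files):
        """Install an untrusted app entirely inside the red zone."""
        self.red[name] = files

    def save_trusted(self, key, value):
        """Important data goes to the green zone only."""
        self.green[key] = value

    def reset_red(self):
        """The 'reinstall me to my original pristine state' switch."""
        self.red = copy.deepcopy(self._pristine_red)

    def promote(self, name):
        """Moving data red -> green is the 'Checkpoint Charlie' moment:
        someone (the user? a gatekeeper?) must judge the data safe."""
        self.green[name] = self.red.pop(name)

m = RedGreenMachine()
m.save_trusted("tax_records", ["2005.pdf"])
m.install_risky("shiny_new_app", ["app.exe", "cache.dat"])
m.reset_red()                                          # cleanup: red zone wiped
assert m.red == {}                                     # risky install is gone
assert m.green == {"tax_records": ["2005.pdf"]}        # green data intact
```

The sketch also makes the open question concrete: `promote` is trivial to write but hard to govern, which is exactly the "who makes that decision" problem.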
c. Solving the sorts of problems that get the regulators up in arms -- "they're pirating all my movies" -- will reduce pressure to produce locked-down PCs. I don't know how many people here followed the Grokster case in the U.S.; I think that case is a prelude to a larger question the regulators will soon ask themselves: if people can build software that automatically updates itself, why shouldn't we force them to, and that way we can make them update software that turns out to have bad uses or effects? I'm against this, but I think it will be very tempting. Also, one might see regulators asking OS or anti-virus makers to try to "cancel" bad applications, treating them as if they were spyware/viruses/etc. If this were done to wildly popular software there'd be an obvious backlash -- people would rush to uninstall the antivirus software that was commandeered that way -- but in many environments like libraries, schools, and corporations, that choice may not exist for the end-user.
d. I'm interested in distributed solutions to the badware problem. I propose a distributed application, not unlike SETI@home, that users could download and that would monitor certain innocuous demographics -- number of crashes, restarts, hours in use, code running on the machine. It might sometimes also ask people how happy, in general, they are with the way their machines are behaving. With n large enough, we can say statistically that, all else equal, a particular piece of code tends to make the machine (or its user) happier or not. (I realize it might find that after TurboTax is installed people get unhappy for reasons not having to do with the quality of the code.) Users could, before running certain code, query the system about how long the code had been in the wild, and how many machines were currently running it. Risk-averse users might wait before loading a program that didn't exist last week but now seems to be all the rage.
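The aggregation step in (d) can be sketched in a few lines. This is a minimal illustration, assuming invented field names and thresholds -- a real system would need privacy protections, spoofing resistance, and far richer signals -- but it shows how pooled reports let a risk-averse user ask "is this program seasoned and stable?" before running it.

```python
# Illustrative sketch of the distributed "machine happiness" idea above:
# each participating machine reports which programs it runs plus simple
# health signals (here, just weekly crash counts and how long the program
# has been seen in the wild). Aggregated over many machines, a program's
# stability and popularity become queryable before a user decides to run it.
# All program names, fields, and thresholds are invented for illustration.

from collections import defaultdict

reports = [
    # (machine_id, program, crashes_this_week, days_seen_in_wild)
    ("m1", "oldsolid.exe", 0, 400),
    ("m2", "oldsolid.exe", 1, 400),
    ("m3", "newrage.exe", 5, 6),
    ("m4", "newrage.exe", 4, 6),
]

stats = defaultdict(lambda: {"machines": 0, "crashes": 0, "age": 0})
for _mid, prog, crashes, age in reports:
    s = stats[prog]
    s["machines"] += 1
    s["crashes"] += crashes
    s["age"] = max(s["age"], age)

def advise(prog, max_avg_crashes=1.0, min_age_days=30, min_machines=2):
    """A risk-averse user's query: run now, or wait and see?"""
    s = stats[prog]
    avg_crashes = s["crashes"] / s["machines"]
    seasoned = (avg_crashes <= max_avg_crashes
                and s["age"] >= min_age_days
                and s["machines"] >= min_machines)
    return "run" if seasoned else "wait"

assert advise("oldsolid.exe") == "run"    # old, widely run, rarely crashes
assert advise("newrage.exe") == "wait"    # brand-new and crashy: hold off
```

The hard parts, of course, are not in this arithmetic: they lie in collecting reports honestly and anonymously, and in the TurboTax caveat above -- correlation between a program and unhappiness need not mean the code is bad.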
6. Are you in favor of DRM?
No, I'm not. I think DRM gets in the way of interoperability, and worse, it embeds assumptions about how people will or won't want to use the material so protected. To me, the generative qualities of the Net and PC have shown again and again that unexpected uses from unexpected corners can be socially valuable and commercially significant too. I think we're starting to see that in the tired debate over filesharing and copyrighted music. The really important functionality to preserve isn't about copying songs wholesale; it's about allowing people to create mash-ups and other derivative works that are valuable unto themselves and that can enhance the value of the original. Systems like iTunes are convenient but limited. They assume that what people want is a big remote control to watch or listen to what professionals produce. (The inclusion of podcasts in iTunes is a happy feature in the opposite direction, but Apple reserves the right, and will no doubt use it, to determine which podcasts to promote and which to exclude entirely.)
7. Why didn't you talk about IPv6?
I'm not sure I see how the introduction of IPv6 changes any of the dynamics I'm concerned about. I'm not even sure IPv6 will take hold, since enough hacks have been made to keep IPv4 ticking along. I'd like to know more from those who think it makes a difference here. Same for Internet2. I'm on Internet2 myself some of the time, depending on where I'm connecting, but I don't see it as affecting the PC/Internet appliance situation.
8. Where are you on net neutrality? Your paper seems to be against it.
No, not exactly. I'm sympathetic to worries that ISPs, not just wanting to be commodity carriers of bits, will try to privilege some bits over others, and that may end up squeezing out the little guy who has bits to offer but isn't worth the ISP's trouble to deal with. I do think that ISPs could help in the medium term with the problem of bad code -- for example, by disconnecting spam- or virus-spewing machines from the networks, and helping their users to fix them. If ISPs did this, it would not only help with the spam and virus problem, but it would make non-locked-down PCs a more attractive choice for users. I see "shallow" fixes like these as better for maintaining generativity than some of the possible alternatives, like locking down PCs. A locked-down PC, or one that listens to a gatekeeper like a virus-detection company to decide what to allow and what not, is topologically in the "middle" of the network rather than an endpoint for these purposes, since the user can't easily affect how it operates.
9. Are you "of an academic neo-con bent who thinks the whole 'net should be controlled by elitists, and not left to the vulgar techies who just got lucky this one time"?
No. I think the "vulgar techies" have a continuing role to play. I'm concerned that many come up with solutions for themselves and fellow "vulgar techies", without those solutions being easily adopted by Internet masses. In that sense, I'm anti-elitist. I think we need an open Net for everyone, not just for the tech types.
10. What's with the shorl links? Shorl is stupid.
I will never use shorl again. And if I do, which I won't, I will encapsulate the right url into it.
Jonathan Zittrain
Professor of Internet Governance and Regulation, Oxford University
Jack N. & Lillian R. Berkman Visiting Professor of Entrepreneurial Legal Studies
Harvard Law School