Blown to Bits
Your Life, Liberty, and Happiness After the Digital Explosion
Hal Abelson • Ken Ledeen • Harry Lewis
Upper Saddle River, NJ • Boston • Indianapolis • San Francisco • New York • Toronto • Montreal • London • Munich • Paris • Madrid • Cape Town • Sydney • Tokyo • Singapore • Mexico City
Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals.
The authors and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.
The publisher offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales, which may include electronic versions and/or custom covers and content particular to your business, training goals, marketing focus, and branding interests. For more information, please contact:
U.S. Corporate and Government Sales (800) 382-3419 corpsales@pearsontechgroup.com
For sales outside the United States, please contact:
International Sales international@pearson.com
Visit us on the Web: www.informit.com/aw
Library of Congress Cataloging-in-Publication Data:
Abelson, Harold.
Blown to bits : your life, liberty, and happiness after the digital explosion / Hal Abelson, Ken Ledeen, Harry Lewis.
p. cm.
ISBN 0-13-713559-9 (hardback : alk. paper) 1. Computers and civilization. 2. Information technology—Technological innovations. 3. Digital media. I. Ledeen, Ken, 1946- II. Lewis, Harry R. III. Title.
QA76.9.C66A245 2008 303.48’33—dc22 2008005910
Copyright © 2008 Hal Abelson, Ken Ledeen, and Harry Lewis
This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike
3.0 United States License. To view a copy of this license visit http://creativecommons.org/licenses/by-nc-sa/3.0/us/ or send a letter to Creative Commons 171 Second Street, Suite 300, San Francisco, California, 94105, USA.
For information regarding permissions, write to: Pearson Education, Inc.
Rights and Contracts Department 501 Boylston Street, Suite 900
Boston, MA 02116
Fax (617) 671 3447
ISBN-13: 978-0-13-713559-2
ISBN-10: 0-13-713559-9
Text printed in the United States on recycled paper at RR Donnelley in Crawfordsville, Indiana. Third printing December 2008
This Book Is Safari Enabled
The Safari® Enabled icon on the cover of your favorite technology book means the book is available through Safari Bookshelf. When you buy this book, you get free access to the online edition for 45 days.
Safari Bookshelf is an electronic reference library that lets you easily search thousands of technical books, find code samples, download chapters, and access technical information whenever and wherever you need it.
To gain 45-day Safari Enabled access to this book:
• Go to http://www.informit.com/onlineedition
• Complete the brief registration form
• Enter the coupon code 9SD6-IQLD-ZDNI-AGEC-AG6L
If you have difficulty registering on Safari Bookshelf or accessing the online edition, please e-mail customer-service@safaribooksonline.com.
Editor in Chief
Mark Taub
Acquisitions Editor
Greg Doench
Development Editor
Michael Thurston
Managing Editor
Gina Kanouse
Senior Project Editor
Kristy Hart
Copy Editor
Water Crest Publishing, Inc.
Indexer
Erika Millen
Proofreader
Williams Woods Publishing Services
Publishing Coordinator
Michelle Housley
Interior Designer and Composition
Nonie Ratcliff
Cover Designer
Chuti Prasertsith
CHAPTER 2
Naked in the Sunlight
Privacy Lost, Privacy Abandoned
1984 Is Here, and We Like It
On July 7, 2005, London was shaken as suicide bombers detonated four explosions, three on subways and one on a double-decker bus. The attack on the transit system was carefully timed to occur at rush hour, maximizing its destructive impact. Fifty-two people died, and 700 more were injured.
Security in London had already been tight. The city was hosting the G8 Summit, and the trial of fundamentalist cleric Abu Hamza al-Masri had just begun. Hundreds of thousands of surveillance cameras hadn’t deterred the terrorist act, but the perpetrators were caught on camera. Their pictures were sent around the world instantly. Working from 80,000 seized tapes, police were able to reconstruct a reconnaissance trip the bombers had made two weeks earlier.
George Orwell’s 1984 was published in 1948. Over the subsequent years, the book became synonymous with a world of permanent surveillance, a society devoid of both privacy and freedom:
…there seemed to be no color in anything except the posters that were plastered everywhere. The black-mustachio’d face gazed down from every commanding corner. There was one on the house front immediately opposite. BIG BROTHER IS WATCHING YOU …
The real 1984 came and went nearly a quarter century ago. Today, Big Brother’s two-way telescreens would be amateurish toys. Orwell’s imagined London had cameras everywhere. His actual city now has at least half a million. Across the UK, there is one surveillance camera for every dozen people. The average Londoner is photographed hundreds of times a day by electronic eyes on the sides of buildings and on utility poles.
Yet there is much about the digital world that Orwell did not imagine. He did not anticipate that cameras are far from the most pervasive of today’s tracking technologies. There are dozens of other kinds of data sources, and the data they produce is retained and analyzed. Cell phone companies know not only what numbers you call, but where you have carried your phone. Credit card companies know not only where you spent your money, but what you spent it on. Your friendly bank keeps electronic records of your transactions not only to keep your balance right, but because it has to tell the government if you make huge withdrawals. The digital explosion has scattered the bits of our lives everywhere: records of the clothes we wear, the soaps we wash with, the streets we walk, and the cars we drive and where we drive them. And although Orwell’s Big Brother had his cameras, he didn’t have search engines to piece the bits together, to find the needles in the haystacks. Wherever we go, we leave digital footprints, while computers of staggering capacity reconstruct our movements from the tracks. Computers re-assemble the clues to form a comprehensive image of who we are, what we do, where we are doing it, and whom we are discussing it with.
Perhaps none of this would have surprised Orwell. Had he known about electronic miniaturization, he might have guessed that we would develop an astonishing array of tracking technologies. Yet there is something more fundamental that distinguishes the world of 1984 from the actual world of today. We have fallen in love with this always-on world. We accept our loss of privacy in exchange for efficiency, convenience, and small price discounts. According to a 2007 Pew/Internet Project report, “60% of Internet users say they are not worried about how much information is available about them online.” Many of us publish and broadcast the most intimate moments of our lives for all the world to see, even when no one requires or even asks us to do so. Among teenagers, 55% have created profiles on social networking web sites, as have 20% of adults. A third of the teens with profiles, and half the adults, place no restrictions on who can see them.
In Orwell’s imagined London, only O’Brien and other members of the Inner Party could escape the gaze of the telescreen. For the rest, the constant gaze was a source of angst and anxiety. Today, we willingly accept the gaze. We either don’t think about it, don’t know about it, or feel helpless to avoid it except by becoming hermits. We may even judge its benefits to outweigh its risks. In Orwell’s imagined London, like Stalin’s actual Moscow, citizens spied on their fellow citizens. Today, we can all be Little Brothers, using our search engines to check up on our children, our spouses, our neighbors, our colleagues, our enemies, and our friends. More than half of all adult Internet users have done exactly that. The explosive growth in digital technologies has radically altered our expectations about what will be private and shifted our thinking about what should be private. Ironically, the notion of privacy has become fuzzier at the same time as the secrecy-enhancing technology of encryption has become widespread.
Indeed, it is remarkable that we no longer blink at intrusions that a decade ago would have seemed shocking. Unlike the story of secrecy, there was no single technological event that caused the change, no privacy-shattering breakthrough—only a steady advance on several technological fronts that ultimately passed a tipping point. Many devices got cheaper, better, and smaller. Once they became useful consumer goods, we stopped worrying about their uses as surveillance devices. For example, if the police were the only ones who had cameras in their cell phones, we would be alarmed. But as long as we have them too, so we can send our friends funny pictures from parties, we don’t mind so much that others are taking pictures of us. The social evolution that was supported by consumer technologies in turn made us more accepting of new enabling technologies; the social and technological evolutions have proceeded hand in hand. Meanwhile, international terrorism has made the public in most democracies more sympathetic to intrusive measures intended to protect our security. With corporations trying to make money from us and the government trying to protect us, civil libertarians are a weak third voice when they warn that we may not want others to know so much about us.
So we tell the story of privacy in stages. First, we detail the enabling technologies, the devices and computational processes that have made it easy and convenient for us to lose our privacy—some of them familiar technologies, and some a bit more mysterious. We then turn to an analysis of how we have lost our privacy, or simply abandoned it. Many privacy-shattering things have happened to us, some with our cooperation and some not. As a result, the sense of personal privacy is very different today than it was two decades ago. Next, we discuss the social changes that have occurred: cultural shifts that were facilitated by the technological diffusion, and that in turn made new technologies easier to deploy. And finally we turn to the big question: What does privacy even mean in the digitally exploded world? Is there any hope of keeping anything private when everything is bits, and the bits are stored, copied, and moved around the world in an instant? And if we can’t—or won’t—keep our personal information to ourselves anymore, how can we make ourselves less vulnerable to the downsides of living in such an exposed world? Standing naked in the sunlight, is it still possible to protect ourselves against ills and evils from which our privacy used to protect us?
Footprints and Fingerprints
As we do our daily business and lead our private lives, we leave footprints and fingerprints. We can see our footprints in mud on the floor and in the sand and snow outdoors. We would not be surprised that anyone who went to the trouble to match our shoes to our footprints could determine, or guess, where we had been. Fingerprints are different. It doesn’t even occur to us that we are leaving them as we open doors and drink out of tumblers. Those who have guilty consciences may think about fingerprints and worry about where they are leaving them, but the rest of us don’t.
In the digital world, we all leave both electronic footprints and electronic fingerprints: data trails we leave intentionally, and data trails of which we are unaware or unconscious. The identifying data may be useful for forensic purposes. Because most of us don’t consider ourselves criminals, however, we tend not to worry about that. What we don’t think about is that the various small smudges we leave on the digital landscape may be useful to someone else—someone who wants to use the data we left behind to make money or to get something from us. It is therefore important to understand how and where we leave these digital footprints and fingerprints.
Smile While We Snap!
Big Brother had his legions of cameras, and the City of London has theirs today. But for sheer photographic pervasiveness, nothing beats the cameras in the cell phones in the hands of the world’s teenagers. Consider the alleged misjudgment of Jeffrey Berman. In early December 2007, a man about 60 years old committed a series of assaults on the Boston public transit system, groping girls and exposing himself. After one of the assaults, a victim took out her cell phone. Click! Within hours, a good head shot was up on the Web and was shown on all the Boston area television stations. Within a day, Berman was under arrest and charged with several crimes. “Obviously we, from time to time, have plainclothes officers on the trolley, but that’s a very difficult job to do,” said the chief of the Transit Police. “The fact that this girl had the wherewithal to snap a picture to identify him was invaluable.”
That is, it would seem, a story with a happy ending, for the victim at least. But the massive dissemination of cheap cameras coupled with universal access to the Web also enables a kind of vigilante justice—a ubiquitous Little-Brotherism, in which we can all be detectives, judges, and corrections officers. Mr. Berman claims he is innocent; perhaps the speed at which the teenager’s snapshot was disseminated unfairly created a presumption of his guilt. Bloggers can bring global disgrace to ordinary citizens.
In June 2005, a woman allowed her dog to relieve himself on a Korean subway, and subsequently refused to clean up his mess, despite offers from others to help. The incident was captured by a fellow passenger and posted online. She soon became known as “gae-ttong-nyue” (Korean for “puppy poo girl”). She was identified along with her family, was shamed, and quit school. There is now a Wikipedia entry about the incident. Before the digital explosion, before bits made it possible to convey information instantaneously and everywhere, her actions would have been embarrassing, but known only to those who were there at the time. It is unlikely that the story would have made it around the world, or that it would have achieved such notoriety and permanence.
Still, in these cases, at least someone thought someone did something wrong. The camera just happened to be in the right hands at just the right moment. But looking at images on the Web is now a leisure activity that anyone can do at any time, anywhere in the world. Using Google Street View, you can sit in a café in Tajikistan and identify a car that was parked in my driveway when Google’s camera came by (perhaps months ago). From Seoul, you can see what’s happening right now, updated every few seconds, in Piccadilly Circus or on the strip in Las Vegas. These views were always available to the public, but cameras plus the Web changed the meaning of “public.”
And an electronic camera is not just a camera. Harry Potter and the Deathly Hallows is, as far as anyone knows, the last book in the Harry Potter series. Its arrival was eagerly awaited, with lines of anxious Harry fans stretching around the block at bookstores everywhere. One fan got a pre-release copy, painstakingly photographed every page, and posted the entire book online before the official release. A labor of love, no doubt, but a blatant copyright violation as well. He doubtless figured he was just posting the pixels, which could not be traced back to him. If that was his presumption, he was wrong. His digital fingerprints were all over the images.
Digital cameras encode metadata along with the image. This data, known as the Exchangeable Image File Format (EXIF), includes camera settings (shutter speed, aperture, compression, make, model, orientation), date and time, and, in the case of our Harry Potter fan, the make, model, and serial number of his camera (a Canon Rebel 350D, serial number 560151117). If he registered his camera, bought it with a credit card, or sent it in for service, his identity could be known as well.
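This metadata is easy to inspect programmatically. The following sketch uses the Pillow imaging library for Python; the file name and tag values are invented for illustration, but the tag numbers (271 for Make, 272 for Model, 306 for DateTime) are standard EXIF tags.

```python
# Sketch: write a tiny JPEG carrying EXIF metadata, then read it back
# with the Pillow library. File name and values are illustrative only.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path):
    """Return a dict mapping human-readable EXIF tag names to values."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Create a sample image carrying camera metadata, as a camera would.
exif = Image.Exif()
exif[271] = "Canon"                 # tag 271 = Make
exif[272] = "Canon EOS 350D"        # tag 272 = Model
exif[306] = "2007:07:20 01:14:00"   # tag 306 = DateTime
Image.new("RGB", (8, 8)).save("sample.jpg", exif=exif)

metadata = read_exif("sample.jpg")
print(metadata["Make"], metadata["Model"], metadata["DateTime"])
```

Any photo posted online carries such tags unless the uploader, or the hosting site, strips them first.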
Knowing Where You Are
Global Positioning System (GPS) receivers have improved the marital lives of countless males too stubborn to ask directions. Put a Garmin or a TomTom in a car, and it will listen to precisely timed signals from satellites reporting their positions in space. The GPS calculates its own location from the satellites’ locations and the times their signals are received. The 24 satellites spinning 12,500 miles above the earth enable your car to locate itself within 25 feet, at a price that makes these systems popular birthday presents.
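The receiver's calculation can be illustrated with a toy example. A real GPS solves in three dimensions, with a fourth unknown for the receiver's clock error, which is why at least four satellites are needed; the sketch below flattens the problem to a plane, with made-up coordinates and exact ranges.

```python
import math

# Toy two-dimensional version of a GPS position fix. Satellite
# positions and the receiver position are invented for illustration.
def locate(sats, dists):
    """Find (x, y) given transmitter positions and measured ranges.

    Subtracting the first range equation (x-xi)^2 + (y-yi)^2 = di^2
    from the others cancels the squared terms, leaving a linear
    2x2 system solved here by Cramer's rule.
    """
    (x1, y1), d1 = sats[0], dists[0]
    rows = []
    for (xi, yi), di in zip(sats[1:], dists[1:]):
        a = 2 * (x1 - xi)
        b = 2 * (y1 - yi)
        c = di**2 - d1**2 - (xi**2 + yi**2) + (x1**2 + y1**2)
        rows.append((a, b, c))
    (a1, b1, c1), (a2, b2, c2) = rows
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

sats = [(0, 0), (10, 0), (0, 10)]               # known transmitter positions
true_pos = (3, 4)                               # where the receiver really is
dists = [math.dist(true_pos, s) for s in sats]  # ranges from signal timing
x, y = locate(sats, dists)
print(round(x, 6), round(y, 6))                 # prints: 3.0 4.0
```

The receiver never reports its position to the satellites; the broadcast is one-way, which is why a bare GPS can tell you where you are without telling anyone else.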
If you carry a GPS-enabled cell phone, your friends can find you, if that is what you want. If your GPS-enabled rental car has a radio transmitter, you can be found whether you want it or not. In 2004, Ron Lee rented a car from Payless in San Francisco. He headed east to Las Vegas, then back to Los Angeles, and finally home. He was expecting to pay $150 for his little vacation, but Payless made him pay more—$1,400, to be precise. Mr. Lee forgot to read the fine print in his rental contract. He had not gone too far; his contract was for unlimited mileage. He had missed the fine print that said, “Don’t leave California.” When he went out of state, the unlimited mileage clause was invalidated. The fine print said that Payless would charge him $1 per Nevada mile, and that is exactly what the company did. They knew where he was, every minute he was on the road.
A GPS will locate you anywhere on earth; that is why mountain climbers carry them. They will locate you not just on the map but in three dimensions, telling you how high up the mountain you are. But even an ordinary cell phone will serve as a rudimentary positioning system. If you are traveling in settled territory—any place where you can get cell phone coverage—the signals from the cell phone towers can be used to locate you. That is how Tanya Rider was found (see Chapter 1 for details). The location is not as precise as that supplied by a GPS—only within ten city blocks or so—but the fact that it is possible at all means that photos can be stamped with identifying information about where they were shot, as well as when and with what camera.
Knowing Even Where Your Shoes Are
A Radio Frequency Identification tag—RFID, for short—can be read from a distance of a few feet. Radio Frequency Identification is like a more elaborate version of the familiar bar codes that identify products. Bar codes typically identify what kind of thing an item is—the make and model, as it were. Because RFID tags have the capacity for much larger numbers, they can provide a unique serial number for each item: not just “Coke, 12 oz. can” but “Coke can #12345123514002.” And because RFID data is transferred by radio waves rather than visible light, the tags need not be visible to be read, and the sensor need not be visible to do the reading.
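The capacity gap is easy to quantify. Assuming a 96-bit tag, a common RFID tag size, against a 12-digit bar code:

```python
# How many distinct items each scheme can label: a 12-digit product
# bar code versus a 96-bit RFID tag (96 bits is one common tag size).
barcode_ids = 10**12   # every 12-digit number
rfid_ids = 2**96       # every 96-bit pattern

print(f"bar code: {barcode_ids:.1e} identifiers")   # about 1.0e+12
print(f"RFID tag: {rfid_ids:.1e} identifiers")      # about 7.9e+28
print(f"serial numbers per bar code value: {rfid_ids // barcode_ids:,}")
```

That is why a bar code can only say "a can of Coke" while an RFID tag can name every can ever made, individually, with room to spare.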
RFIDs are silicon chips, typically embedded in plastic. They can be used to tag almost anything (see Figure 2.1). “Prox cards,” which you wave near a sensor to open a door, are RFID tags; a few bits of information identifying you are transmitted from the card to the sensor. Mobil’s “Speedpass” is a little RFID on a keychain; wave it near a gas pump and the pump knows whom to charge for the gasoline. For a decade, cattle have had RFIDs implanted in their flesh, so individual animals can be tracked. Modern dairy farms log the milk production of individual cows, automatically relating the cow’s identity to its daily milk output. Pets are commonly RFID-tagged so they can be reunited with their owners if the animals go missing for some reason. The possibility of tagging humans is obvious, and has been proposed for certain high-security applications, such as controlling access to nuclear plants.
But the interesting part of the RFID story is more mundane—putting tags in shoes, for example. RFID can be the basis for powerful inventory tracking systems.
RFID tags are simple devices. They store a few dozen bits of information, usually unique to a particular tag. Most are passive devices, with no batteries, and are quite small. The RFID includes a tiny electronic chip and a small coil, which acts as a two-way antenna. A weak current flows through the coil when the RFID passes through an electromagnetic field—for example, from a scanner in the frame of a store, under the carpet, or in someone’s hand. This feeble current is just strong enough to power the chip and induce it to transmit the identifying information. Because RFIDs are tiny and require no connected power source, they are easily hidden. We see them often as labels affixed to products; the one in Figure 2.1 was between the pages of a book bought from a bookstore. They can be almost undetectable.
FIGURE 2.1 An RFID found between the pages of a book. A bookstore receiving a box of RFID-tagged books can check the incoming shipment against the order without opening the carton. If the books and shelves are scanned during stocking, the cash register can identify the section of the store from which each purchased copy was sold.
RFIDs are generally used to improve record-keeping, not for snooping. Manufacturers and merchants want to get more information, more reliably, so they naturally think of tagging merchandise. But only a little imagination is required to come up with some disturbing scenarios. Suppose, for example, that you buy a pair of red shoes at a chain store in New York City, and the shoes have an embedded RFID. If you pay with a credit card, the store knows your name, and a good deal more about you from your purchasing history. If you wear those shoes when you walk into a branch store in Los Angeles a month later, and that branch has an RFID reader under the rug at the entrance, the clerk could greet you by name. She might offer you a scarf to match the shoes—or to match anything else you bought recently from any other branch of the store. On the other hand, the store might know that you have a habit of returning almost everything you buy—in that case, you might find yourself having trouble finding anyone to wait on you!
The technology is there to do it. We know of no store that has gone quite this far, but in September 2007, the Galeria Kaufhof in Essen, Germany, equipped the dressing rooms in the men’s clothing department with RFID readers. When a customer tries on garments, a screen informs him of available sizes and colors. The system may be improved to offer suggestions about accessories. The store keeps track of what items are tried on together and what combinations turn into purchases. The store will remove the RFID tags from the clothes after they are purchased—if the customer asks; otherwise, they remain, unobtrusively, and could be scanned if the garment is returned to the store. Creative retailers everywhere dream of such ways to use devices to make money, to save money, and to give them small advantages over their competitors. Though Galeria Kaufhof is open about its high-tech men’s department, the fear that customers won’t like their clever ideas sometimes holds back retailers—and sometimes simply causes them to keep quiet about what they are doing.
Black Boxes Are Not Just for Airplanes Anymore
On April 12, 2007, Jon Corzine, Governor of New Jersey, was heading back to the governor’s mansion in Princeton to mediate a discussion between Don Imus, the controversial radio personality, and the Rutgers University women’s basketball team.
His driver, 34-year-old state trooper Robert Rasinski, headed north on the Garden State Parkway. He swerved to avoid another car and flipped the Governor’s Chevy Suburban. Governor Corzine had not fastened his seatbelt, and broke 12 ribs, a femur, his collarbone, and his sternum. The details of exactly what happened were unclear. When questioned, Trooper Rasinski said he was not sure how fast they were going—but we do know. He was going 91 in a 65 mile per hour zone. There were no police with radar guns around; no human being tracked his speed. We know his exact speed at the moment of impact because his car, like 30 million cars in America, had a black box—an “event data recorder” (EDR) that captured every detail about what was going on just before the crash. An EDR is an automotive “black box” like the ones recovered from airplane crashes. EDRs started appearing in cars around 1995. By federal law, they will be mandatory in the United States beginning in 2011. If you are driving a new GM, Ford, Isuzu, Mazda, Mitsubishi, or Subaru, your car has one—whether anyone told you that or not. So do about half of new Toyotas. Your insurance company is probably entitled to its data if you have an accident. Yet most people do not realize that they exist.
EDRs capture information about speed, braking time, turn signal status, seat belts: things needed for accident reconstruction, to establish responsibility, or to prove innocence. CSX Railroad was exonerated of all liability in the death of the occupants of a car when its EDR showed that the car was stopped on the train tracks when it was hit. Police generally obtain search warrants before downloading EDR data, but not always; in some cases, they do not have to. When Robert Christmann struck and killed a pedestrian on October 18, 2003, Trooper Robert Frost of the New York State Police downloaded data from the car at the accident scene. The EDR revealed that Christmann had been going 38 MPH in an area where the speed limit was 30. When the data was introduced at trial, Christmann claimed that the state had violated his Fourth Amendment rights against unreasonable searches and seizures, because it had not asked his permission or obtained a search warrant before retrieving the data. That was not necessary, ruled a New York court. Taking bits from the car was not like taking something out of a house, and no search warrant was necessary.
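Actual EDR formats are proprietary and differ by manufacturer, but decoding such a record amounts to unpacking a few fixed-width binary fields. The field layout below is invented purely for illustration:

```python
import struct

# A hypothetical sketch of decoding an EDR crash record. Real formats
# are proprietary and vary by carmaker; this layout is invented:
# three 32-bit floats followed by two boolean flags, little-endian.
RECORD = struct.Struct("<fff??")  # speed, throttle %, brake %, belt, airbag

def decode(record_bytes):
    """Unpack one crash record into labeled fields."""
    speed, throttle, brake, belt_on, airbag = RECORD.unpack(record_bytes)
    return {
        "speed_mph": speed,
        "throttle_pct": throttle,
        "brake_pct": brake,
        "seatbelt_fastened": belt_on,
        "airbag_deployed": airbag,
    }

# Simulate a record like the one from the Corzine crash: 91 mph, no belt.
raw = RECORD.pack(91.0, 62.5, 0.0, False, True)
record = decode(raw)
print(record)
```

A few dozen bytes like these, captured in the seconds before impact, are what settled the question of Trooper Rasinski's speed.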
Bits mediate our daily lives. It is almost as hard to avoid leaving digital footprints as it is to avoid touching the ground when we walk. And even where we manage to leave no footprints, we are unsuspectingly leaving fingerprints anyway.
Some of the intrusions into our privacy come because of the unexpected, unseen side effects of things we do quite voluntarily. We painted the hypothetical picture of the shopper with the RFID-tagged shoes, who is either welcomed or shunned on her subsequent visits to the store, depending on her shopping history. Similar surprises can lurk almost anywhere that bits are exchanged. That is, for practical purposes, pretty much everywhere in daily life.
Tracing Paper
If I send an email or download a web page, it should come as no surprise that I’ve left some digital footprints. After all, the bits have to get to me, so some part of the system knows where I am. In the old days, if I wanted to be anonymous, I could write a note, but my handwriting might be recognizable, and I might leave fingerprints (the oily kind) on the paper. I might have typed, but Perry Mason regularly solved crimes by matching a typewritten note with the unique signature of the suspect’s typewriter. More fingerprints.
So, today I would laserprint the letter and wear gloves. But even that may not suffice to disguise me. Researchers at Purdue have developed techniques for matching laser-printed output to a particular printer. They analyze printed sheets and detect unique characteristics of each manufacturer and each individual printer—fingerprints that can be used, like the smudges of old typewriter hammers, to match output with source. It may be unnecessary to put the microscope on individual letters to identify what printer produced a page. The Electronic Frontier Foundation has demonstrated that many color printers secretly encode the printer serial number, date, and time on every page that they print (see Figure 2.2). Therefore, when you print a report, you should not assume that no one can tell who printed it.
Source: Laser fingerprint. Electronic Frontier Foundation. http://w.2.eff.org/Privacy/printers/docucolor/.
FIGURE 2.2 Fingerprint left by a Xerox DocuColor 12 color laser printer. The dots are very hard to see with the naked eye; the photograph was taken under blue light. The dot pattern encodes the date (2005-05-21), time (12:50), and the serial number of the printer (21052857).
There was a sensible rationale behind this technology. The government wanted to make sure that office printers could not be used to turn out sets of hundred dollar bills. The technology that was intended to frustrate counterfeiters makes it possible to trace every page printed on color laser printers back to the source. Useful technologies often have unintended consequences.
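The actual DocuColor scheme is more elaborate than this, but the underlying idea, writing numbers as a pattern of faint dots, can be sketched in a few lines. The field layout here is invented for illustration, though the values match those recovered in Figure 2.2:

```python
# Illustrative (not the real DocuColor) dot steganography: each field
# is written as a row of binary digits, where '*' marks a printed dot
# and '.' a blank position. Field widths here are invented.
FIELDS = [("serial", 21052857, 32), ("year", 2005, 12),
          ("month", 5, 4), ("day", 21, 5),
          ("hour", 12, 5), ("minute", 50, 6)]

def to_dot_rows(fields):
    """Render each field as a dot row of its fixed bit width."""
    return {name: format(value, f"0{bits}b").replace("1", "*").replace("0", ".")
            for name, value, bits in fields}

def from_dot_row(row):
    """Recover the number encoded by a row of dots."""
    return int(row.replace("*", "1").replace(".", "0"), 2)

grid = to_dot_rows(FIELDS)
print(grid["minute"])            # 50 rendered as dots: **..*.
print(from_dot_row(grid["serial"]))  # recovers 21052857
```

Printed in yellow at well-spaced intervals, a pattern like this is invisible to the casual eye yet trivial for a machine, or a researcher with a blue light and a loupe, to read back.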
Many people, for perfectly legal and valid reasons, would like to protect their anonymity. They may be whistleblowers or dissidents. Perhaps they are merely railing against injustice in their workplace. Will technologies that undermine anonymity in political discourse also stifle free expression? A measure of anonymity is essential in a healthy democracy—and in the U.S., has been a weapon used to advance free speech since the time of the Revolution. We may regret a complete abandonment of anonymity in favor of communication technologies that leave fingerprints. The problem is not just the existence of fingerprints, but that no one told us that we are creating them.
The Parking Garage Knows More Than You Think
One day in the spring of 2006, Anthony and his wife drove to Logan Airport to pick up some friends. They took two cars, which they parked in the garage. Later in the evening, they paid at the kiosk inside the terminal, and left, or tried to. One car got out of the garage without a problem, but Anthony’s was held up for more than an hour, in the middle of the night, and was not allowed to leave. Why? Because his ticket did not match his license plate.
It turns out that every car entering the airport garage has its license plate photographed at the same time as the ticket is being taken. Anthony had held both tickets while he and his wife were waiting for their friends, and then he gave her back one—the “wrong” one, as it turned out. It was the one he had taken when he drove in. When he tried to leave, he had the ticket that matched his wife’s license plate number. A no-no.
Who knew that if two cars arrive and try to leave at the same time, they may not be able to exit if the tickets are swapped? In fact, who knew that every license plate is photographed as it enters the garage?
There is a perfectly sensible explanation. People with big parking bills sometimes try to duck them by picking up a second ticket at the end of their trip. When they drive out, they try to turn in the one for which they would have to pay only a small fee. Auto thieves sometimes try the same trick. So the system makes sense, but it raises many questions. Who else gets access to the license plate numbers? If the police are looking for a particular car, can they search the scanned license plate numbers of the cars in the garage? How long is the data retained? Does it say anywhere, even in the fine print, that your visit to the garage is not at all anonymous?
All in Your Pocket
The number of new data sources—and the proliferation and interconnection of old data sources—is part of the story of how the digital explosion shattered privacy. But the other part of the technology story is about how all that data is put together.
On October 18, 2007, a junior staff member at the British national tax agency sent a small package to the government’s auditing agency via TNT, a private delivery service. Three weeks later, it had not arrived at its destination and was reported missing. Because the sender had not used TNT’s “registered mail” option, it couldn’t be traced, and as of this writing has not been found. Perhaps it was discarded by mistake and never made it out of the mailroom; perhaps it is in the hands of criminals.
The mishap rocked the nation. As a result of the data loss, every bank and millions of individuals checked account activity for signs of fraud or identity theft. On November 20, the head of the tax agency resigned. Prime Minister Gordon Brown apologized to the nation, and the opposition party accused the Brown administration of having “failed in its first duty to protect the public.”
The package contained two computer disks. The data on the disks included names, addresses, birth dates, national insurance numbers (the British equivalent of U.S. Social Security Numbers), and bank account numbers of 25 million people—nearly 40% of the British population, and almost every child in the land. The tax office had all this data because every British child receives weekly government payments, and most families have the money deposited directly into bank accounts. Ten years ago, that much data would have required a truck to transport, not two small disks. Fifty years ago, it would have filled a building.
This was a preventable catastrophe. Many mistakes were made; quite ordinary mistakes. The package should have been registered. The disks should have been encrypted. It should not have taken three weeks for someone to speak up. But those are all age-old mistakes. Offices have been sending packages for centuries, and even Julius Caesar knew enough to encrypt information if he had to use intermediaries to deliver it. What happened in 2007 that could not have happened in 1984 was the assembly of such a massive database in a form that allowed it to be easily searched, processed, analyzed, connected to other databases, transported, and “lost.”
Exponential growth in storage size, processing speed, and communication speed has changed the same old thing into something new. Blundering, stupidity, curiosity, malice, and thievery are not new. The fact that sensitive data about everyone in a nation could fit on a laptop is new. The ability to search for a needle in the haystack of the Internet is new. Easily connecting “public” data sources that used to be stored in file drawers in Albuquerque and Atlanta, but are now both electronically accessible from Algeria—that is new too.
Training, laws, and software all can help. But the truth of the matter is that as a society, we don’t really know how to deal with these consequences of the digital explosion. The technology revolution is outstripping society’s capacity to adjust to the changes in what can be taken for granted. The Prime Minister had to apologize to the British nation because among the things that have been blown to bits is the presumption that no junior staffer could do that much damage by mailing a small parcel.
Connecting the Dots

The way we leave fingerprints and footprints is only part of what is new. We have always left a trail of information behind us, in our tax records, hotel reservations, and long distance telephone bills. True, the footprints are far clearer and more complete today than ever before. But something else has changed—the harnessing of computing power to correlate data, to connect the dots, to put pieces together, and to create cohesive, detailed pictures from what would otherwise have been meaningless fragments. The digital explosion does not just blow things apart. Like the explosion at the core of an atomic bomb, it blows things together as well. Gather up the details, connect the dots, assemble the parts of the puzzle, and a clear picture will emerge.
Computers can sort through databases too massive and too boring to be examined with human eyes. They can assemble colorful pointillist paintings out of millions of tiny dots, when any few dots would reveal nothing. When a federal court released half a million Enron emails obtained during the corruption trial, computer scientists quickly identified the subcommunities, and perhaps conspiracies, among Enron employees, using no data other than the pattern of who was emailing whom (see Figure 2.3). The same kinds of clustering algorithms work on patterns of telephone calls. You can learn a lot by knowing who is calling or emailing whom, even if you don’t know what they are saying to each other—especially if you know the time of the communications and can correlate them with the time of other events.
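The idea behind such clustering can be sketched in a few lines of Python. This is only a toy illustration, not the researchers’ actual method; the names, message log, and traffic threshold are all invented:

```python
from collections import Counter, defaultdict

# Hypothetical message log of (sender, recipient) pairs; all names invented.
messages = [
    ("alice", "bob"), ("bob", "alice"), ("alice", "bob"),
    ("carol", "dave"), ("dave", "carol"), ("carol", "dave"),
    ("alice", "erin"),  # one stray message, below the threshold
]

# Count traffic per unordered pair and keep only the "heavy" edges.
traffic = Counter(frozenset(pair) for pair in messages)
THRESHOLD = 2
graph = defaultdict(set)
for pair, count in traffic.items():
    if count >= THRESHOLD:
        a, b = pair
        graph[a].add(b)
        graph[b].add(a)

def components(graph):
    """Connected components of the heavy-traffic graph: candidate subcommunities."""
    seen, groups = set(), []
    for node in list(graph):
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:
            current = stack.pop()
            if current in seen:
                continue
            seen.add(current)
            group.add(current)
            stack.extend(graph[current])
        groups.append(group)
    return groups

print(components(graph))  # two clusters: alice-bob and carol-dave
```

Even in this tiny example, the single stray message is filtered out, and the two “cliques” emerge from nothing but who wrote to whom.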
Sometimes even public information is revealing. In Massachusetts, the Group Insurance Commission (GIC) is responsible for purchasing health insurance for state employees. When the premiums it was paying jumped one year, the GIC asked for detailed information on every patient encounter. And for good reason: All kinds of health care costs had been growing at prodigious rates. In the public interest, the state had a responsibility to understand how it was spending taxpayer money. The GIC did not want to know patients’ names; it did not want to track individuals, and it did not want people to think they were being tracked. Indeed, tracking the medical visits of individuals would have been illegal.
Source: Enron, Jeffrey Heer. Figure 3 from http://jheer.org/enron/v1/.
FIGURE 2.3 Diagram showing clusters of Enron emailers, indicating which employees carried on heavy correspondence with which others. The evident “blobs” may be the outlines of conspiratorial cliques.
So, the GIC data had no names, no addresses, no Social Security Numbers, no telephone numbers—nothing that would be a “unique identifier” enabling a mischievous junior staffer in the GIC office to see who exactly had a particular ailment or complaint. To use the official lingo, the data was “de-identified”; that is, stripped of identifying information. The data did include the gender, birth date, zip code, and similar facts about individuals making medical claims, along with some information about why they had sought medical attention. That information was gathered not to challenge any particular person, but to learn about patterns—if the truckers in Worcester are having lots of back injuries, for example, maybe workers in that region need better training on how to lift heavy items. Most states do pretty much the same kind of analysis of de-identified data about state workers.
Now this was a valuable data set not just for the Insurance Commission, but for others studying public health and the medical industry in Massachusetts. Academic researchers, for example, could use such a large inventory of medical data for epidemiological studies. Because it was all de-identified, there was no harm in letting others see it, the GIC figured. In fact, it was such good data that private industry—for example, businesses in the health management sector—might pay money for it. And so the GIC sold the data to businesses. The taxpayers might even benefit doubly from this decision: The data sale would provide a new revenue source to the state, and in the long run, a more informed health care industry might run more efficiently. But how de-identified really was the material?
Latanya Sweeney was at the time a researcher at MIT (she went on to become a computer science professor at Carnegie Mellon University). She wondered how hard it would be for those who had received the deidentified data to “re-identify” the records and learn the medical problems of a particular state employee—for example, the governor of the Commonwealth.
Governor Weld lived, at that time, in Cambridge, Massachusetts. Cambridge, like many municipalities, makes its voter lists publicly available, for a charge of $15, and free for candidates and political organizations. If you know the precinct, they are available for only $0.75. Sweeney spent a few dollars and got the voter lists for Cambridge. Anyone could have done the same. According to the Cambridge voter registration list, there were only six people in Cambridge with Governor Weld’s birth date, only three of those were men, and only one of those lived in Governor Weld’s five-digit zip code. Sweeney could use that combination of factors (birth date, gender, and zip code) to recover the Governor’s medical records—and also those for members of his family, since the data was organized by employee. This type of re-identification is straightforward. In Cambridge, in fact, birth date alone was sufficient to identify more than 10% of the population. Nationally, gender, zip code, and date of birth are all it takes to identify 87% of the U.S. population uniquely.
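Computationally, Sweeney’s linkage attack is just a join of two tables on the shared quasi-identifiers. A minimal Python sketch, using entirely invented names and records:

```python
# Invented "de-identified" medical claims: no names, but quasi-identifiers remain.
claims = [
    {"birth_date": "1945-07-31", "gender": "M", "zip": "02138", "diagnosis": "back injury"},
    {"birth_date": "1960-01-15", "gender": "F", "zip": "02139", "diagnosis": "flu"},
]

# Invented public voter roll: names plus the same quasi-identifiers.
voters = [
    {"name": "Pat Quimby", "birth_date": "1945-07-31", "gender": "M", "zip": "02138"},
    {"name": "Lee Vasquez", "birth_date": "1960-01-15", "gender": "F", "zip": "02139"},
]

KEYS = ("birth_date", "gender", "zip")

def reidentify(claims, voters):
    # Index voters by their quasi-identifier tuple, then join claims against it.
    index = {tuple(v[k] for k in KEYS): v["name"] for v in voters}
    return [(index.get(tuple(c[k] for k in KEYS)), c["diagnosis"]) for c in claims]

print(reidentify(claims, voters))
# [('Pat Quimby', 'back injury'), ('Lee Vasquez', 'flu')]
```

Whenever a quasi-identifier combination matches exactly one voter, the “anonymous” medical record acquires a name.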
The data set contained far more than gender, zip code, and birth date. In fact, any of the 58 individuals who received the data in 1997 could have identified any of the 135,000 people in the database. “There is no patient confidentiality,” said Dr. Joseph Heyman, president of the Massachusetts Medical Society. “It’s gone.”

It is easy to read a story like this and scream, “Heads should roll!” But it is actually quite hard to figure out who, if anyone, made a mistake. Certainly collecting the information was the right thing to do, given that health costs are a major expense for all businesses and institutions. The GIC made an honest effort to de-identify the data before releasing it. Arguably the GIC might not have released the data to other state agencies, but that would be like saying that every department of government should acquire its heating oil independently. Data is a valuable resource, and once someone has collected it, the government is entirely correct in wanting it used for the public good. Some might object to selling the data to an outside business, but only in retrospect; had the data really been better de-identified, whoever made the decision to sell the data might well have been rewarded for helping to hold down the cost of government.
Perhaps the mistake was the ease with which voter lists can be obtained. However, it is a tradition deeply engrained in our system of open elections that the public may know who is eligible to vote, and indeed who has voted. And voter lists are only one source of public data about the U.S. population. How many 21-year-old male Native Hawaiians live in Middlesex County, Massachusetts? In the year 2000, there were four. Anyone can browse the U.S. Census data, and sometimes it can help fill in pieces of a personal picture: Just go to factfinder.census.gov.
The mistake was thinking that the GIC data was truly de-identified, when it was not. But with so many data sources available, and so much computing power that could be put to work connecting the dots, it is very hard to know just how much information has to be discarded from a database to make it truly anonymous. Aggregating data into larger units certainly helps: releasing data by five-digit zip codes reveals less than releasing it by nine-digit zip codes. But the coarser the data, the less it reveals; that includes the valuable information for which it was made available in the first place. How can we solve a problem that results from many developments, no one of which is really a problem in itself?
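The trade-off can be made concrete by measuring, for each combination of quasi-identifiers, how many records share it; the smallest such group size is what privacy researchers call the data’s k-anonymity. A toy sketch with invented records:

```python
from collections import Counter

# Invented records: (birth_year, gender, nine-digit zip).
records = [
    ("1945", "M", "021381234"),
    ("1945", "M", "021385678"),
    ("1960", "F", "021391111"),
    ("1960", "F", "021392222"),
]

def smallest_group(records, zip_digits):
    """Size of the smallest (most identifiable) group sharing a quasi-identifier combo."""
    combos = Counter((year, gender, zipcode[:zip_digits])
                     for year, gender, zipcode in records)
    return min(combos.values())

print(smallest_group(records, 9))  # every 9-digit combo is unique: 1
print(smallest_group(records, 5))  # coarser zips merge records: 2
```

Truncating to five digits doubles the smallest group here, which is exactly the point: each record now hides among others, but any analysis that needed the fine-grained zip code has lost it.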
Why We Lost Our Privacy, or Gave It Away

Information technology did not cause the end of privacy, any more than automotive technology caused teen sex. Technology creates opportunities and risks, and people, as individuals and as societies, decide how to live in the changed landscape of new possibilities. To understand why we have less privacy today than in the past, we must look not just at the gadgets. To be sure, we should be wary of spies and thieves, but we should also look at those who protect us and help us, and we should take a good look in the mirror.
We are most conscious of our personal information winding up in the hands of strangers when we think about data loss or theft. Reports like the one about the British tax office have become fairly common. The theft of information about 45 million customers of TJX stores, described in Chapter 5, “Secret Bits,” was even larger than the British catastrophe. In 2003, Scott Levine, owner of a mass email business named Snipermail, stole more than a billion personal information records from Acxiom. Millions of Americans are victimized by identity theft every year, at a total cost in the tens of billions of dollars annually. Many more of us harbor daily fears that just “a little bit” of our financial information has leaked out, and could be a personal time bomb if it falls into the wrong hands.
Why can’t we just keep our personal information to ourselves? Why do so many other people have it in the first place, so that there is an opportunity for it to go astray, and an incentive for creative crooks to try to steal it?
We lose control of our personal information because of things we do to ourselves, and things others do to us. Of things we do to be ahead of the curve, and things we do because everyone else is doing them. Of things we do to save money, and things we do to save time. Of things we do to be safe from our enemies, and things we do because we feel invulnerable. Our loss of privacy is a problem, but there is no one answer to it, because there is no one reason why it is happening. It is a messy problem, and we first have to think about it one piece at a time.

We give away information about ourselves—voluntarily leave visible footprints of our daily lives—because we judge, perhaps without thinking about it very much, that the benefits outweigh the costs. To be sure, the benefits are many.
Saving Time

For commuters who use toll roads or bridges, the risk-reward calculation is not even close. Time is money, and time spent waiting in a car is also anxiety and frustration. If there is an option to get a toll booth transponder, many commuters will get one, even if the device costs a few dollars up front. Cruising past the cars waiting to pay with dollar bills is not just a relief; it actually brings the driver a certain satisfied glow.

The transponder, which the driver attaches to the windshield from inside the car, is an RFID, powered with a battery so identifying information can be sent to the sensor several feet away as the driver whizzes past. The sensor can be mounted in a constricted travel lane, where a toll booth for a human toll-taker might have been. Or it can be mounted on a boom above traffic, so the driver doesn’t even need to change lanes or slow down.

And what is the possible harm? Of course, the state is recording the fact that the car has passed the sensor; that is how the proper account balance can be debited to pay the toll. When the balance gets too low, the driver’s credit card may get billed automatically to replenish the balance. All that only makes the system better—no fumbling for change or doing anything else to pay for your travels.
The monthly bill—for the Massachusetts Fast Lane, for example—shows where and when you got on the highway—when, accurate to the second. It also shows where you got off and how far you went. Informing you of the mileage is another useful service, because Massachusetts drivers can get a refund on certain fuel taxes, if the fuel was used on the state toll road. Of course, you do not need a PhD to figure out that the state also knows when you got off the road, to the second, and that with one subtraction and one division, its computers could figure out if you were speeding. Technically, in fact, it would be trivial for the state to print the appropriate speeding fine at the bottom of the statement, and to bill your credit card for that amount at the same time as it was charging for tolls. That would be taking convenience a bit too far, and no state does it, yet.
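The arithmetic the state would need really is one subtraction and one division. A sketch with invented entry and exit records:

```python
from datetime import datetime

# Hypothetical transponder records: entry and exit timestamps, and the
# distance between the two interchanges in miles. All values invented.
entered = datetime(2008, 3, 1, 14, 2, 11)
exited = datetime(2008, 3, 1, 14, 47, 41)
miles = 62.0

hours = (exited - entered).total_seconds() / 3600  # the subtraction
average_speed = miles / hours                      # the division

print(round(average_speed))  # average mph over the stretch: 82
```

If the limit on that stretch were 65 mph, the statement printer would have everything it needs; which is exactly why it is a policy choice, not a technical obstacle, that no state does this.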
What does happen right now, however, is that toll transponder records are introduced into divorce and child custody cases. You’ve never been within five miles of that lady’s house? Really? Why have you gotten off the highway at the exit near it so many times? You say you can be the better custodial parent for your children, but the facts suggest otherwise. As one lawyer put it, “When a guy says, ‘Oh, I’m home every day at five and I have dinner with my kids every single night,’ you subpoena his E-ZPass and you find out he’s crossing that bridge every night at 8:30. Oops!” These records can be subpoenaed, and have been, hundreds of times, in family law cases. They have also been used in employment cases, to prove that the car of a worker who said he was working was actually far from the workplace. But most of us aren’t planning to cheat on our spouses or our bosses, so the loss of privacy seems like no loss at all, at least compared to the time saved. Of course, if we actually were cheating, we would be in a big hurry, and might take some risks to save a few minutes!
Saving Money

Sometimes it’s money, not time, that motivates us to leave footprints. Such is the case with supermarket loyalty cards. If you do not want Safeway to keep track of the fact that you bought the 12-pack of Yodels despite your recent cholesterol results, you can make sure it doesn’t know. You simply pay the “privacy tax”—the surcharge for customers not presenting a loyalty card. The purpose of loyalty cards is to enable merchants to track individual item purchases. (Item-level transactions are typically not tracked by credit card companies, which do not care if you bought Yodels instead of granola, so long as you pay the bill.) With loyalty cards, stores can capture details of cash transactions as well. They can process all the transaction data, and draw inferences about shoppers’ habits. Then, if a lot of people who buy Yodels also buy Bison Brew Beer, the store’s automated cash register can automatically spit out a discount coupon for Bison Brew as your Yodels are being bagged. A “discount” for you, and more sales for Safeway. Everybody wins. Don’t they?
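The Yodels-and-Bison-Brew inference is simple co-occurrence counting. A toy Python sketch, with invented baskets (the product names are the chapter’s own examples):

```python
from collections import Counter

# Hypothetical loyalty-card baskets, one set of items per shopping trip.
baskets = [
    {"Yodels", "Bison Brew", "milk"},
    {"Yodels", "Bison Brew"},
    {"granola", "milk"},
    {"Yodels", "Bison Brew", "chips"},
]

# For every basket containing Yodels, count what else was in it.
with_yodels = Counter()
for basket in baskets:
    if "Yodels" in basket:
        for item in basket - {"Yodels"}:
            with_yodels[item] += 1

# The most frequent companion item suggests which coupon to print.
print(with_yodels.most_common(1))  # [('Bison Brew', 3)]
```

Real recommendation systems weigh this against baseline popularity and use far more data, but the register printing a Bison Brew coupon needs nothing more sophisticated than this tally.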
As grocery stores expand their web-based business, it is even easier for them to collect personal information about you. Reading the fine print when you sign up is a nuisance, but it is worth doing, so you understand what you are giving and what you are getting in return. Here are a few sentences of Safeway’s privacy policy for customers who use its web site:
Safeway may use personal information to provide you with news- letters, articles, product or service alerts, new product or service announcements, saving awards, event invitations, personally tailored coupons, program and promotional information and offers, and other information, which may be provided to Safeway by other companies.… We may provide personal information to our partners and suppliers for customer support services and processing of personal information on behalf of Safeway. We may also share personal information with our affiliate companies, or in the course of an actual or potential sale, re-organization, consolidation, merger, or amalgamation of our business or businesses.
Dreary reading, but the language gives Safeway lots of leeway. Maybe you don’t care about getting the junk mail. Not everyone thinks it is junk, and the company does let you “opt out” of receiving it (although in general, few people bother to exercise opt-out rights). But Safeway has lots of “affiliates,” and who knows how many companies with which it might be involved in a merger or sale of part of its business. Despite privacy concerns voiced by groups like C.A.S.P.I.A.N. (Consumers Against Supermarket Privacy Invasion and Numbering, www.nocards.org), most shoppers readily agree to have the data collected. The financial incentives are too hard to resist, and most consumers just don’t worry about marketers knowing their purchases. But whenever purchases can be linked to your name, there is a record, somewhere in a huge database, of whether you use regular or super tampons, lubricated or unlubricated condoms, and whether you like regular beer or lite. You have authorized the company to share it, and even if you hadn’t, the company could lose it accidentally, have it stolen, or have it subpoenaed.
Convenience of the Customer

The most obvious reason not to worry about giving information to a company is that you do business with them, and it is in your interest to see that they do their business with you better. You have no interest in whether they make more money from you, but you do have a strong interest in making it easier and faster for you to shop with them, and in cutting down the amount of stuff they may try to sell you that you would have no interest in buying. So your interests and theirs are, to a degree, aligned, not in opposition. Safeway’s privacy policy states this explicitly: “Safeway Club Card information and other information may be used to help make Safeway’s products, services, and programs more useful to its customers.” Fair enough.
No company has been more progressive in trying to sell customers what they might want than the online store Amazon. Amazon suggests products to repeat customers, based on what they have bought before or what they have simply looked at during previous visits to Amazon’s web site. The algorithms are not perfect; Amazon’s computers are drawing inferences from data, not being clairvoyant. But Amazon’s guesses are pretty good, and recommending the wrong book every now and then is a very low-cost mistake. If Amazon does it too often, I might switch to Barnes and Noble, but there is no injury to me. So again: Why should anyone care that Amazon knows so much about me? On the surface, it seems benign. Of course, we don’t want the credit card information to go astray, but who cares about knowing what books I have looked at online?
Our indifference is another marker of the fact that we are living in an exposed world, and that it feels very different to live here. In 1988, when a
HOW SITES KNOW WHO YOU ARE
1. You tell them. Log in to Gmail, Amazon, or eBay, and you are letting them know exactly who you are.
2. They’ve left cookies on one of your previous visits. A cookie is a small text file stored on your local hard drive that contains information that a particular web site wants to have available during your current session (like your shopping cart), or from one session to the next. Cookies give sites persistent information for tracking and personalization. Your browser has a command for showing cookies—you may be surprised how many web sites have left them!
3. They have your IP address. The web server has to know where you are so that it can ship its web pages to you. Your IP address is a number like 66.82.9.88 that locates your computer in the Internet (see the Appendix for details). That address may change from one day to the next. But in a residential setting, your Internet Service Provider (your ISP—typically your phone or cable company) knows who was assigned each IP address at any time. Those records are often subpoenaed in court cases.
If you are curious about who is using a particular IP address, you can check the American Registry of Internet Numbers (www.arin.net). Services such as whatismyip.com, whatismyip.org, and ipchicken.com also allow you to check your own IP address. And www.whois.net allows you to check who owns a domain name such as harvard.com—which turns out to be the Harvard Bookstore, a privately owned bookstore right across the street from the university. Unfortunately, that information won’t reveal who is sending you spam, since spammers routinely forge the source of email they send you.
videotape rental store clerk turned over Robert Bork’s movie rental records to a Washington, DC newspaper during Bork’s Supreme Court confirmation hearings, Congress was so outraged that it quickly passed a tough privacy protection bill, The Video Privacy Protection Act. Videotape stores, if any still exist, can be fined simply for keeping rental records too long. Twenty years later, few seem to care much what Amazon does with its millions upon millions of detailed, fine-grained views into the brains of all its customers.
It’s Just Fun to Be Exposed

Sometimes, there can be no explanation for our willing surrender of our privacy except that we take joy in the very act of exposing ourselves to public view. Exhibitionism is not a new phenomenon. Its practice today, as in the past, tends to be in the province of the young and the drunk, and those wishing to pretend they are one or the other. That correlation is by no means perfect, however. A university president had to apologize when an image of her threatening a Hispanic male with a stick leaked out from her MySpace page, with a caption indicating that she had to “beat off the Mexicans because they were constantly flirting with my daughter.”
And there is a continuum of outrageousness. The less wild of the party photo postings blend seamlessly with the more personal of the blogs, where the bloggers are chatting mostly about their personal feelings. Here there is not exuberance, but some simpler urge for human connectedness. That passion, too, is not new. What is new is that a photo or video or diary entry, once posted, is visible to the entire world, and that there is no taking it back. Bits don’t fade and they don’t yellow. Bits are forever. And we don’t know how to live with that.
For example, a blog selected with no great design begins:
This is the personal web site of Sarah McAuley. … I think sharing my life with strangers is odd and narcissistic, which of course is why I’m addicted to it and have been doing it for several years now. Need more? You can read the “About Me” section, drop me an email, or you know, just read the drivel that I pour out on an almost-daily basis.
No thank you, but be our guest. Or consider that there is a Facebook group just for women who want to upload pictures of themselves uncontrollably drunk. Or the Jennicam, through which Jennifer Kay Ringley opened her life to the world for seven years, setting a standard for exposure that many since have surpassed in explicitness, but few have approached in its endless ordinariness. We are still experimenting, both the voyeurs and viewed.
Because You Can’t Live Any Other Way

Finally, we give up data about ourselves because we don’t have the time, patience, or single-mindedness about privacy that would be required to live our daily lives in another way. In the U.S., the number of credit, debit, and bank cards is in the billions. Every time one is used, an electronic handshake records a few bits of information about who is using it, when, where, and for what. It is now virtually unheard of for people to make large purchases of ordinary consumer goods with cash. Personal checks are going the way of cassette tape drives, rendered irrelevant by newer technologies. Even if you could pay cash for everything you buy, the tax authorities would have you in their databases anyway. There even have been proposals to put RFIDs in currency notes, so that the movement of cash could be tracked.
Only sects such as the Amish still live without electricity. It will soon be almost that unusual to live without Internet connectivity, with all the fingerprints it leaves of your daily searches and logins and downloads. Even the old dumb TV is rapidly disappearing in favor of digital communications. Digital TV will bring the advantages of video on demand—no more trips to rent movies or waits for them to arrive in the mail—at a price: Your television service provider will record what movies you have ordered. It will be so attractive to be able to watch what we want when we want to watch it, that we won’t miss either the inconvenience or the anonymity of the days when all the TV stations washed your house with their airwaves. You couldn’t pick the broadcast times, but at least no one knew which waves you were grabbing out of the air.
Little Brother Is Watching

So far, we have discussed losses of privacy due to things for which we could, in principle anyway, blame ourselves. None of us really needs a loyalty card, we should always read the fine print when we rent a car, and so on. We would all be better off saying “no” a little more often to these privacy-busters, but few of us would choose to live the life of constant vigilance that such resolute denial would entail. And even if we were willing to make those sacrifices, there are plenty of other privacy problems caused by things others do to us.
The snoopy neighbor is a classic American stock figure: the busybody who watches how many liquor bottles are in your trash, or tries to figure out whose Mercedes is regularly parked in your driveway, or always seems to know whose children were disorderly last Saturday night. But in Cyberspace, we are all neighbors. We can all check up on each other, without even opening the curtains a crack.
Public Documents Become VERY Public

Some of the snooping is simply what anyone could have done in the past by paying a visit to the Town Hall. Details that were always public—but inaccessible—are quite accessible now. In 1975, Congress created the Federal Election Commission to administer the Federal Election Campaign Act. Since then, all political contributions have been public information. There is a difference, though, between “public” and “readily accessible.” Making public data available on the Web shattered the veil of privacy that came from inaccessibility. Want to know who gave money to Al Franken for Senate? Lorne Michaels from Saturday Night Live, Leonard Nimoy, Paul Newman, Craig Newmark (the “craig” of craigslist.com), and Ginnie W., who works with us and may not have wanted us to know her political leanings. Paul B. and Henry G., friends of ours, covered their bases by giving to both Obama and Clinton.
The point of the law was to make it easy to look up big donors. But since data is data, what about checking on your next-door neighbors? Ours definitely leaned toward Obama over Clinton, with no one in the Huckabee camp. Or your clients? One of ours gave heartily to Dennis Kucinich. Or your daughter’s boyfriend? You can find out for yourself, at www.fec.gov or fundrace.huffingtonpost.com. We’re not telling about our own.
Hosts of other facts are now available for armchair browsing: facts that in the past were nominally public but required a trip to the Registrar of Deeds. If you want to know what your neighbor paid for their house, or what it’s worth today, many communities put all of their real estate tax rolls online. It was always public; now it’s accessible. It was never wrong that people could get this information, but it feels very different now that people can browse through it from the privacy of their home. If you are curious about someone, you can try to find him or her on Facebook, MySpace, or just using an ordinary search engine. A college would not peek at the stupid Facebook page of an applicant, would it? Absolutely not, says the Brown Dean of Admissions, “unless someone says there’s something we should look at.”
New participatory websites create even bigger opportunities for information-sharing. If you are about to go on a blind date, there are special sites just for that. Take a look at www.dontdatehimgirl.com, a social networking site with a self-explanatory focus. When we checked, this warning about one man had just been posted, along with his name and photograph: “Compulsive womanizer, liar, internet cheater; pathological liar who can’t be trusted as a friend much less a boyfriend. Total creep! Twisted and sick—needs mental help. Keep your daughter away from this guy!” Of course, such information may be worth exactly what we paid for it. There is a similar site, www.platewire.com, for reports about bad drivers. If you are not dating or driving, perhaps you’d like to check out a neighborhood before you move in, or just register a public warning about the obnoxious revelers who live next door to you. If so, www.rottenneighbor.com is the site for you. When we typed in the zip code in which one of us lives, a nice Google map appeared with a house near ours marked in red. When we clicked on it, we got this report on our neighbor:

“you’re a pretty blonde, slim and gorgeous. hey, i’d come on to you if i weren’t gay. you probably have the world handed to you like most pretty women. is that why you think that you are too good to pick up after your dog? you know that you are breaking the law as well as being disrespectful of your neighbors. well, i hope that you step in your own dogs poop on your way to work, or on your way to dinner. i hope that the smell of your self importance follows you all day.”
For a little money, you can get a lot more information. In January 2006, John Aravosis, creator of Americablog.com, purchased the detailed cell phone records of General Wesley Clark. For $89.95, he received a listing of all of Clark's calls for a three-day period. There are dozens of online sources for this kind of information. You might think you'd have to be in the police or the FBI to find out who people are calling on their cell phones, but there are handy services that promise to provide anyone with that kind of information for a modest fee. The Chicago Sun-Times decided to put those claims to a test, so it paid $110 to locatecell.com and asked for a month's worth of cell phone records of one Frank Main, who happened to be one of its own reporters. The Sun-Times did it all with a few keystrokes: it provided the telephone number, the dates, and a credit card number. The request went in on Friday of a long weekend, and on Tuesday morning, a list came back in an email. The list included 78 telephone numbers the reporter had called: sources in law enforcement, people he was writing stories about, and editors at the newspaper. It was a great service for law enforcement—except that criminals can use it too, to find out whom the detectives are calling. These incidents stimulated passage of the Telephone Records and Privacy Protection Act of 2006, but in early 2008, links on locatecell.com were still offering to help "find cell phone records in seconds," and more.
If cell phone records are not enough information, consider doing a proper background check. For $175, you can sign up as an "employer" with ChoicePoint and gain access to reporting services including criminal records, credit history, motor vehicle records, educational verification, employment verification, Interpol, sexual offender registries, and warrant searches—they are all there to be ordered, with à la carte pricing. Before we moved from paper to bits, this information was publicly available, but largely inaccessible. Now, all it takes is an Internet connection and a credit card. This is one of the most important privacy transformations. Information that was previously available only to professionals with specialized access or a legion of local workers is now available to everyone.
Then there is real spying. Beverly O'Brien suspected her husband was having an affair. If not a physical one, at a minimum she thought he was engaging in inappropriate behavior online. So, she installed some monitoring software. Not hard to do on the family computer: these packages are promoted as "parental control software," a tool to monitor your child's activities, along with such other uses as employee monitoring, law enforcement, and helping to "catch a cheating spouse." Beverly installed the software and discovered that her hapless hubby, Kevin, was chatting away while playing Yahoo! Dominoes. She was an instant spy, a domestic wiretapper. The marketing materials for her software neglected to tell her that installing spyware that intercepts communications traffic was a direct violation of Florida's Security of Communications Act, and the trial court refused to admit any of the evidence in their divorce proceeding. The legal system worked, but that didn't change the fact that spying has become a relatively commonplace activity, the domain of spouses and employers, jilted lovers, and business competitors.
Idle Curiosity

There is another form of Little Brother-ism, in which amateurs can sit at a computer connected to the Internet and just look for something interesting, not about their neighbors or husbands, but about anyone at all. With so much data out there, anyone can discover interesting personal facts with the investment of a little time and a little imagination. To take a different kind of example, imagine having your family's medical history re-identified from a paper in an online medical journal.
Figure 2.4 shows a map of the incidence of a disease, let's say syphilis, in a part of Boston. The "syphilis epidemic" in this illustration is actually a simulation. The data was just made up, but maps exactly like this have been common in journals for decades. Because the area depicted is more than 10 square kilometers, there is no way to figure out which house corresponds to a dot, only which neighborhood.
Source: John S. Brownstein, Christopher A. Cassa, and Kenneth D. Mandl, "No place to hide—reverse identification of patients from published maps," New England Journal of Medicine, 355:16, October 19, 2006, 1741-1742.
FIGURE 2.4 Map of part of Boston, as if from a publication in a medical journal, showing where a disease has occurred. (Simulated data.)
At least that was true in the days when journals were only print documents. Now journals are available online, and authors have to submit their figures as high-resolution JPEGs. Figure 2.5 shows what happens if you download the published journal article from the journal’s web site, blow up a small part of the image, and superimpose it on an easily available map of the corresponding city blocks. For each of the seven disease locations, there is only a single house to which it could correspond. Anyone could figure out where the people with syphilis live.
Source: John S. Brownstein, Christopher A. Cassa, and Kenneth D. Mandl, "No place to hide—reverse identification of patients from published maps," New England Journal of Medicine, 355:16, October 19, 2006, 1741-1742.
FIGURE 2.5 Enlargement of Figure 2.4 superimposed on a housing map of a few blocks of the city, showing that individual households can be identified to online readers, who have access to the high-resolution version of the epidemiology map.
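The re-identification step itself is mechanical: once the figure's geographic bounding box is known (often stated in the paper, or easily inferred from landmarks), each dot's pixel position maps linearly to a latitude and longitude that can be looked up in a housing map. A minimal sketch; the function name, bounding box, and image size are all invented for illustration:

```python
# Map a dot's pixel position in a published figure to latitude/longitude.
# Bounding box and image dimensions are invented for illustration.

def pixel_to_latlon(px, py, width, height, north, south, east, west):
    """Linear interpolation from image pixels to geographic coordinates."""
    lon = west + (px / width) * (east - west)
    lat = north - (py / height) * (north - south)  # pixel y grows downward
    return lat, lon

# A dot at the center of a 3000x2000-pixel figure covering a
# hypothetical 0.05-degree box over part of Boston:
lat, lon = pixel_to_latlon(1500, 1000, 3000, 2000,
                           north=42.36, south=42.31,
                           east=-71.05, west=-71.10)
```

The higher the figure's resolution, the finer this interpolation resolves; at the resolutions journals request, a dot narrows down to a single house.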
This is a re-identification problem, like the one Latanya Sweeney noted when she showed how to get Governor Weld's medical records. There are things that can be done to solve this one. Perhaps the journal should not use such high-resolution images (although that could cause a loss of crispness, or even visibility—one of the nice things about online journals is that the visually impaired can magnify them, to produce crisp images at a very large scale). Perhaps the data should be "jittered" or "blurred" so that what appears on the screen for illustrative purposes is intentionally incorrect in its fine details. There are always specific policy responses to specific re-identification scenarios. Every scenario is a little different, however, and it is often hard to articulate sensible principles to describe what should be fixed.
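Jittering is simple to implement: displace each published point by a small random offset, so the printed dot no longer lands on the true household while the neighborhood-level pattern survives. A sketch with invented parameters (the 200-meter radius and the crude degrees-per-meter conversion are illustrative choices, not a published standard):

```python
import random

def jitter(lat, lon, max_offset_m=200, seed=None):
    """Return (lat, lon) displaced by a random offset of up to roughly
    max_offset_m meters in each direction. Illustrative only: it treats
    a degree of longitude like a degree of latitude, which is rough."""
    rng = random.Random(seed)
    max_deg = max_offset_m / 111_000  # ~111 km per degree of latitude
    return (lat + rng.uniform(-max_deg, max_deg),
            lon + rng.uniform(-max_deg, max_deg))

# Publish the jittered point instead of the true one:
true_lat, true_lon = 42.335, -71.075  # invented location
pub_lat, pub_lon = jitter(true_lat, true_lon, seed=1)
```

The design trade-off is exactly the one discussed above: enough jitter to defeat house-level matching degrades the map's scientific usefulness at fine scales.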
In 2001, four MIT students attempted to re-identify Chicago homicide victims for a course project. They had extremely limited resources: no proprietary databases such as the companies that check credit ratings possess, no access to government data, and very limited computing power. Yet they were able to identify nearly 8,000 individuals from a target set of 11,000.
The source of the data was a free download from the Illinois Criminal Justice Authority. The primary reference data source was also free. The Social Security Administration provides a comprehensive death index including name, birth date, Social Security Number, zip code of last residence, date of death, and more. Rather than paying the nominal fee for the data (after all, they were students), these researchers used one of the popular genealogy web sites, RootsWeb.com, as a free source for the Social Security Death Index (SSDI) data. They might also have used municipal birth and death records, which are also publicly available.
The SSDI did not include gender, which was important to completing an accurate match. But more public records came to the rescue. They found a database published by the census bureau that enabled them to infer gender from first names—most people named “Robert” are male, and most named “Susan” are female. That, and some clever data manipulation, was all it took. It is far from clear that it was wrong for any particular part of these data sets to be publicly available, but the combination revealed more than was intended.
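The linkage the students performed can be sketched as a join on the fields the two datasets share, with a first-name table standing in for the census data used to infer gender. All records, field names, and odds below are fabricated for illustration; the real datasets were far larger and messier:

```python
# Toy re-identification: join a de-identified dataset against a public
# index on shared fields, then infer gender from a first-name table.
# All data here is fabricated for illustration.

homicides = [  # de-identified source: no names
    {"death_date": "1999-03-02", "age": 34, "zip": "60614"},
]
ssdi = [       # death-index-style source: names, dates, zips
    {"name": "Robert Smith", "death_date": "1999-03-02",
     "birth_date": "1965-01-20", "zip": "60614"},
]
gender_odds = {"Robert": 0.99, "Susan": 0.01}  # P(male) given first name

def link(homicides, ssdi):
    """Match records that agree on death date and zip code."""
    matches = []
    for h in homicides:
        for s in ssdi:
            if (h["death_date"], h["zip"]) == (s["death_date"], s["zip"]):
                first = s["name"].split()[0]
                sex = "M" if gender_odds.get(first, 0.5) >= 0.5 else "F"
                matches.append({**h, "name": s["name"], "sex": sex})
    return matches

matches = link(homicides, ssdi)
```

No single input here is sensitive on its own; the privacy harm appears only in the join, which is the point of the example.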
The more re-identification problems we see, and the more ad hoc solutions we devise, the deeper our fear that the problems may never end. These problems arise because there is a great deal of public data, no one piece of which is problematic, but which creates privacy violations in combination. It is the opposite of what we know about salt—the component elements, sodium and chlorine, are both toxic, but the compound itself is safe. Here we have toxic compounds arising from the clever combination of harmless components. What can possibly be done about that?
Big Brother, Abroad and in the U.S.

Big Brother really is watching today, and his job has gotten much easier because of the digital explosion. In China, which has a long history of tracking individuals as a mechanism of social control, the millions of residents of Shenzhen are being issued identity cards, which record far more than the bearer's name and address. According to a report in the New York Times, the cards will document the individual's work history, educational background, religion, ethnicity, police record, medical insurance status, landlord's phone number, and reproductive history. Touted as a crime-fighting measure, the new technology—developed by an American company—will come in handy in case of street protests or any individual activity deemed suspicious by the authorities. The sort of record-keeping that used to be the responsibility of local authorities is becoming automated and nationalized as the country prospers and its citizens become increasingly mobile. The technology makes it easier to know where everyone is, and the government is taking advantage of that opportunity. Chinese tracking is far more detailed and pervasive than Britain's ubiquitous surveillance cameras.
You Pay for the Mike, We'll Just Listen In

Planting tiny microphones where they might pick up conversations of underworld figures used to be risky work for federal authorities. There are much safer alternatives now that many people carry their own radio-equipped microphones with them all the time.
Many cell phones can be reprogrammed remotely so that the microphone is always on and the phone is transmitting, even if you think you have powered it off. The FBI used this technique in 2004 to listen to John Tomero's conversations with other members of his organized crime family. A federal court ruled that this "roving bug," installed after due authorization, constituted a legal form of wiretapping. Tomero could have prevented it by removing the battery, and now some nervous business executives routinely do exactly that. The microphone in a General Motors car equipped with the OnStar system can also be activated remotely, a feature that can save lives when OnStar operators contact the driver after receiving a crash signal. OnStar warns, "OnStar will cooperate with official court orders regarding criminal investigations from law enforcement and other agencies," and indeed, the FBI has used this method to eavesdrop on conversations held inside cars. In one case, a federal court ruled against this way of collecting evidence—but not on privacy grounds. The roving bug disabled the normal operation of OnStar, and the court simply thought that the FBI had interfered with the vehicle owner's contractual right to chat with the OnStar operators!
Identifying Citizens—Without ID Cards

In the age of global terrorism, democratic nations are resorting to digital surveillance to protect themselves, creating hotly contested conflicts with traditions of individual liberty. In the United States, the idea of a national identification card causes a furious libertarian reaction from parties not usually outspoken in defense of individual freedom. Under the REAL ID Act of 2005, uniform federal standards are being implemented for state-issued driver's licenses. Although it passed through Congress without debate, the law is opposed by at least 18 states. Resistance pushed back the implementation timetable first to 2009, and then, in early 2008, to 2011. Yet even fully implemented, REAL ID would fall far short of the true national ID preferred by those charged with fighting crime and preventing terrorism.
As the national ID card debate continues in the U.S., the FBI is making it irrelevant by exploiting emerging technologies. There would be no need for anyone to carry an ID card if the government had enough biometric data on Americans—that is, detailed records of their fingerprints, irises, voices, walking gaits, facial features, scars, and the shape of their earlobes. Gather a combination of measurements on individuals walking in public places, consult the databases, connect the dots, and—bingo!—their names pop up on the computer screen. No need for them to carry ID cards; the combination of biometric data would pin them down perfectly.
Well, only imperfectly at this point, but the technology is improving. And the data is already being gathered and deposited in the data vault of the FBI’s Criminal Justice Information Services database in Clarksburg, West Virginia. The database already holds some 55 million sets of fingerprints, and the FBI processes 100,000 requests for matches every day. Any of 900,000 federal, state, and local law enforcement officers can send a set of prints and ask the FBI to identify it. If a match comes up, the individual’s criminal history is there in the database too.
But fingerprint data is hard to gather; mostly it is obtained when people are arrested. The goal of the project is to get identifying information on nearly everyone, and to get it without bothering people too much. For example, a simple notice at airport security could advise travelers that a detailed "snapshot" will be taken as they enter the secure area. The traveler would then know what is happening, and could refuse (and stay home). As an electronic identification researcher puts it, "That's the key. You've chosen it. You have chosen to say, 'Yeah, I want this place to recognize me.'" No REAL ID controversies, goes the theory; all the data being gathered would, in some sense at least, be offered voluntarily.
Friendly Cooperation Between Big Siblings

In fact, there are two Big Brothers, who often work together. And we are, by and large, glad they are watching, if we are aware of it at all. Only occasionally are we alarmed about their partnership.
The first Big Brother is Orwell’s—the government. And the other Big Brother is the industry about which most of us know very little: the business of aggregating, consolidating, analyzing, and reporting on the billions of individual transactions, financial and otherwise, that take place electronically every day. Of course, the commercial data aggregation companies are not in the spying business; none of their data reaches them illicitly. But they do know a lot about us, and what they know can be extremely valuable, both to businesses and to the government.
The new threat to privacy is that computers can extract significant information from billions of apparently uninteresting pieces of data, in the way that mining technology has made it economically feasible to extract precious metals from low-grade ore. Computers can correlate databases on a massive scale, linking governmental data sources together with private and commercial ones, creating comprehensive digital dossiers on millions of people. With their massive data storage and processing power, they can make connections in the data, like the clever connections the MIT students made with the Chicago homicide data, but using brute force rather than ingenuity. And the computers can discern even very faint traces in the data—traces that may help track payments to terrorists, set our insurance rates, or simply help us be sure that our new babysitter is not a sex offender. And so we turn to the story of the government and the aggregators.
Acxiom is the country's biggest customer data company. Its business is to aggregate transaction data from all those swipes of cards in card readers all over the world—in 2004, this amounted to more than a billion transactions a day. The company uses its massive data about financial activity to support the credit card industry, banks, insurers, and other consumers of information about how people spend money. Unsurprisingly, after the War on Terror began, the Pentagon also got interested in Acxiom's data and in the way the company gathers and analyzes it. Tracking how money gets to terrorists might help find the terrorists and prevent some of their attacks.
ChoicePoint is the other major U.S. data aggregator. ChoicePoint has more than 100,000 clients, which call on it for help in screening employment candidates, for example, or determining whether individuals are good insurance risks. Acxiom and ChoicePoint are different from older data analysis operations simply because of the scale of their operations. Quantitative differences have qualitative effects, as we said in Chapter 1; what has changed is not the technology, but rather the existence of rich data sources. Thirty years ago, credit cards had no magnetic stripes. Charging a purchase was a mechanical operation; the raised numerals on the card made an impression through carbon paper so you could have a receipt, while the top copy went to the company that issued the card. Today, if you charge something using your CapitalOne card, the bits go instantly not only to CapitalOne, but to Acxiom or other aggregators. The ability to search through huge commercial data sources—including not just credit card transaction data, but phone call records, travel tickets, and banking transactions, for example—is another illustration that more of the same can create something new.
Privacy laws do exist, of course. For a bank, or a data aggregator, to post your financial data on its web site would be illegal. Yet privacy is still developing as an area of the law, and it is connected to commercial and government interests in uncertain and surprising ways.
A critical development in privacy law was precipitated by the presidency of Richard Nixon. In what is generally agreed to be an egregious abuse of presidential power, Nixon used his authority as president to gather information on those who opposed him—in the words of his White House Counsel at the time, to "use the available federal machinery to screw our political enemies." Among the tactics Nixon used was to have the Internal Revenue Service audit the tax returns of individuals on an "enemies list," which included congressmen, journalists, and major contributors to Democratic causes. Outrageous as it was to use the IRS for this purpose, it was not illegal, so Congress moved to ban it in the future.
The Privacy Act of 1974 established broad guidelines for when and how the Federal Government can assemble dossiers on citizens it is not investigating for crimes. The government has to give public notice about what information it wants to collect and why, and it has to use it only for those reasons. The Privacy Act limits what the government can do to gather information about individuals and what it can do with records it holds. Specifically, it states, "No agency shall disclose any record which is contained in a system of records by any means of communication to any person, or to another agency, except pursuant to a written request by, or with the prior written consent of, the individual to whom the record pertains, unless …." If the government releases information inappropriately, even to another government agency, the affected citizen can sue for damages in civil court. The protections provided by the Privacy Act are sweeping, although not as sweeping as they may seem. Not every government office is in an "agency"; the courts are not, for example. The Act requires agencies to give public notice of the uses to which they will put the information, but the notice can be buried in the Federal Register, where the public probably won't see it unless news media happen to report it. Then there is the "unless" clause, which includes significant exclusions. For example, the law does not apply to disclosures for statistical, archival, or historical purposes, civil or criminal law enforcement activities, Congressional investigations, or valid Freedom of Information Act requests.
In spite of its exclusions, government practices changed significantly because of this law. Then, a quarter century later, came 9/11. "Law enforcement should have seen it all coming" was the constant refrain as investigations revealed how many unconnected dots were in the hands of different government agencies. It all could have been prevented if the investigative fiefdoms had been talking to each other. They should have been able to connect the dots. But they could not, in part because the Privacy Act restricted inter-agency data transfers. A response was badly needed. The Department of Homeland Security was created to ease some of the interagency communication problems, but that government reorganization was only a start.
In January 2002, just a few months after the World Trade Center attack, the Defense Advanced Research Projects Agency (DARPA) established the Information Awareness Office (IAO) with a mission to:
imagine, develop, apply, integrate, demonstrate, and transition information technologies, components and prototype, closed-loop, information systems that will counter asymmetric threats by achieving total information awareness useful for preemption; national security warning; and national security decision making. The most serious asymmetric threat facing the United States is terrorism, a threat characterized by collections of people loosely organized in shadowy networks that are difficult to identify and define. IAO plans to develop technology that will allow understanding of the intent of these networks, their plans, and potentially define opportunities for disrupting or eliminating the threats. To effectively and efficiently carry this out, we must promote sharing, collaborating, and reasoning to convert nebulous data to knowledge and actionable options.
Vice Admiral John Poindexter directed the effort that came to be known as "Total Information Awareness" (TIA). The growth of enormous private data repositories provided a convenient way to avoid many of the prohibitions of the Privacy Act. The Department of Defense can't get data from the Internal Revenue Service because of the 1974 Privacy Act. But they can both buy it from private data aggregators! In a May 2002 email to Adm. Poindexter, Lt. Col. Doug Dyer discussed negotiations with Acxiom.
Acxiom's Jennifer Barrett is a lawyer and chief privacy officer. She's testified before Congress and offered to provide help. One of the key suggestions she made is that people will object to Big Brother, wide-coverage databases, but they don't object to use of relevant data for specific purposes that we can all agree on. Rather than getting all the data for any purpose, we should start with the goal, tracking terrorists to avoid attacks, and then identify the data needed (although we can't define all of this, we can say that our templates and models of terrorists are good places to start). Already, this guidance has shaped my thinking.

Ultimately, the U.S. may need huge databases of commercial transactions that cover the world or certain areas outside the U.S. This information provides economic utility, and thus provides two reasons why foreign countries would be interested. Acxiom could build this mega-scale database.
The New York Times broke the story in October 2002. As Poindexter had explained in speeches, the government had to “break down the stovepipes” separating agencies, and get more sophisticated about how to create a big picture out of a million details, no one of which might be meaningful in itself. The Times story set off a sequence of reactions from the Electronic Privacy Information Center and civil libertarians. Congress defunded the office in 2003. Yet that was not the end of the idea.
The key to TIA was data mining: looking for connections across disparate data repositories, finding patterns, or "signatures," that might identify terrorists or other undesirables. The Government Accountability Office report on data mining (GAO-04-548) surveyed 128 federal departments and described 199 separate data mining efforts, of which 122 used personal information.
Although IAO and TIA went away, Project ADVISE at the Department of Homeland Security continued with large-scale profiling system development. Eventually, Congress demanded that the privacy issues concerning this program be reviewed as well. In his June 2007 report (OIG-07-56), Richard Skinner, the DHS Inspector General, stated that "program managers did not address privacy impacts before implementing three pilot initiatives," and a few weeks later, the project was shut down. But ADVISE was only one of twelve data-mining projects going on in DHS at the time. Similar privacy concerns led to the cancellation of the Pentagon's TALON database project, which sought to compile a database of reports of suspected threats to defense facilities as part of a larger program of domestic counterintelligence. The Transportation Security Administration (TSA), which is responsible for airline passenger screening, proposed a system called CAPPS II, ultimately terminated over privacy concerns, that sought to bring together disparate data sources to determine whether a particular individual might pose a transportation threat. Color-coded assessment tags would determine whether you could board quickly, be subject to further screening, or be denied access to air travel.

The government creates projects, the media and civil liberties groups raise serious privacy concerns, the projects are cancelled, and new ones arise to take their place. The cycle seems to be endless. In spite of Americans' traditional suspicions about government surveillance of their private lives, the cycle seems to be an almost inevitable consequence of Americans' concerns about their security, and of the responsibility that government officials feel to use the best available technologies to protect the nation. Corporate databases often contain the best information on the people about whom the government is curious.
Technology Change and Lifestyle Change

New technologies enable new kinds of social interactions. There were no suburban shopping malls before private automobiles became cheap and widely used. Thirty years ago, many people getting off an airplane reached for cigarettes; today, they reach for cell phones. As Heraclitus is reported to have said 2,500 years ago, "all is flux"—everything keeps changing. The reach-for-your-cell-phone gesture may not last much longer, since airlines are starting to provide onboard cell phone coverage.
The more people use a new technology, the more useful it becomes. (This is called a "network effect"; see Chapter 4, "Needles in the Haystack.") When one of us got the email address lewis@harvard as a second-year graduate student, it was a vainglorious joke; all the people he knew who had email addresses were students in the same office with him. Email culture could not develop until a lot of people had email, but there wasn't much point in having email if no one else did.
Technology changes and social changes reinforce each other. Another way of looking at the technological reasons for our privacy loss is to recognize that the social institutions enabled by the technology are now more important than the practical uses for which the technology was originally conceived. Once a lifestyle change catches on, we don’t even think about what it depends on.
Credit Card Culture

The usefulness of the data aggregated by Acxiom and its kindred data aggregation services rises as the number of people in their databases goes up, and as larger parts of their lives leave traces in those databases. When credit cards were mostly short-term loans taken out for large purchases, the credit card data was mostly useful for determining your creditworthiness. It is still useful for that, but now that many people buy virtually everything with credit cards, from new cars to fast-food hamburgers, the credit card transaction database can be mined for a detailed image of our lifestyles. The information is there, for example, to determine if you usually eat dinner out, how much traveling you do, and how much liquor you tend to consume. Credit card companies do in fact analyze this sort of information, and we are glad they do. If you don't seem to have been outside Montana in your entire life and you turn up buying a diamond bracelet in Rio de Janeiro, the credit card company's computer notices the deviation from the norm, and someone may call to be sure it is really you.
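That "deviation from the norm" check can be as simple as comparing a new charge against the cardholder's history. A toy illustration, with invented fields and thresholds (real fraud-detection systems use far richer models):

```python
# Toy anomaly check on a cardholder's transaction history.
# Fields, data, and the 10x-average threshold are invented for illustration.

def is_anomalous(transaction, history, amount_factor=10):
    """Flag a charge made in a country the cardholder has never used
    the card in, or far above their average spending."""
    usual_countries = {t["country"] for t in history}
    avg = sum(t["amount"] for t in history) / len(history)
    return (transaction["country"] not in usual_countries
            or transaction["amount"] > amount_factor * avg)

history = [{"country": "US", "amount": 40.0},
           {"country": "US", "amount": 25.0}]
is_anomalous({"country": "BR", "amount": 5000.0}, history)  # → True
```

The point of the example is the data dependency: the check is only possible because every swipe, however small, has already been recorded.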
The credit card culture is an economic problem for many Americans, who accept more credit card offers than they need, and accumulate more debt than they should. But it is hard to imagine the end of the little plastic cards, unless even smaller RFID tags replace them. Many people carry almost no cash today, and with every easy swipe, a few more bits go into the databases.
Email Culture

Email is culturally in between telephoning and writing a letter. It is quick, like telephoning (and instant messaging is even quicker). It is permanent, like a letter. And like a letter, it waits for the recipient to read it. Email has, to a great extent, replaced both of the other media for person-to-person communication, because it has advantages of both. But it has the problems that other communication methods have, and some new ones of its own.
Phone calls are not intended to last forever, or to be copied and redistributed to dozens of other people, or to turn up in court cases. When we use email as though it were a telephone, we tend to forget about what else might happen to it, beyond the telephone-style use: that the recipient will read it and throw it away. Even Bill Gates probably wishes that he had written his corporate emails in a less telephonic voice. After he testified in an antitrust lawsuit that he had not contemplated cutting a deal to divide the web browser market with a competitor, the government produced a candid email he had sent that seemed to contradict his denial: "We could even pay them money as part of the deal, buying a piece of them or something."
Email is bits, traveling within an ISP and through the Internet, using email software that may keep copies, filter it for spam, or submit it to any other form of inspection the ISP may choose. If your email service provider is Google, the point of the inspection is to attach some appropriate advertising. If you are working within a financial services corporation, your emails are probably logged, even the ones to your grandmother, because the company has to be able to go back and do a thorough audit if something inappropriate happens.
Email is as public as postcards, unless it is encrypted, which it usually is not. Employers typically reserve the right to read what is sent through company email. Check the policy of your own employer; it may be hard to find, and it may not say what you expect. Here is Harvard's policy, for example:
Employees must have no expectation or right of privacy in anything they create, store, send, or receive on Harvard's computers, networks, or telecommunications systems. … Electronic files, e-mail, data files, images, software, and voice mail may be accessed at any time by management or by other authorized personnel for any business purpose. Access may be requested and arranged through the system(s) user, however, this is not required.
Employers have good reason to retain such sweeping rights; they have to be able to investigate wrongdoing for which the employer would be liable. As a result, such policies are often less important than the good judgment and ethics of those who administer them. Happily, Harvard’s are generally good. But as a general principle, the more people who have the authority to snoop, the more likely it is that someone will succumb to the temptation.
Commercial email sites can retain copies of messages even after they have been deleted. And yet, there is very broad acceptance of public, free email services such as Google’s Gmail, Yahoo! Mail, or Microsoft’s Hotmail. The technology is readily available to make email private: encryption tools, or secure email services such as Hushmail, a free, web-based email service that incorporates PGP-based encryption (see Chapter 5). Usage of these services, though, is an insignificant fraction of the usage of their unencrypted counterparts. Google gives us free, reliable email service and we, in return, give up some space on our computer screen for ads. Convenience and cost trump privacy. By and large, users don’t worry that Google, or its competitors, have all their mail. It’s a bit like letting the post office keep a copy of every letter you send, but we are so used to it, we don’t even think about it.
Web Culture

When we send an email, we think at least a little bit about the impression we are making, because we are sending it to a human being. We may well say things we would not say face-to-face, and live to regret that. Because we can’t see anyone’s eyes or hear anyone’s voice, we are more likely to overreact and be hurtful, angry, or just too smart for our own good. But because email is directed, we don’t send email thinking that no one else will ever read what we say.
The Web is different. Its social sites inherit their communication culture not from the letter or telephone call, but from the wall in the public square, littered with broadsides and scribbled notes, some of them signed and some not. Type a comment on a blog, or post a photo on a photo album, and your action can be as anonymous as you wish it to be—you do not know to whom your message is going. YouTube has millions of personal videos. Photo-archiving sites are the shoeboxes and photo albums of the twenty-first century. Online backup now provides easy access to permanent storage for the contents of our personal computers. We entrust commercial entities with much of our most private information, without apparent concern. The generation that has grown up with the Web has embraced social networking in all its varied forms: MySpace, YouTube, LiveJournal, Facebook, Xanga, Classmates.com, Flickr, dozens more, and blogs of every shape and size. More than being taken, personal privacy has been given away quite freely, because everyone else is doing it—the surrender of privacy is more than a way to social connectedness, it is a social institution in its own right. There are 70 million bloggers sharing everything from mindless blather to intimate personal details. Sites like www.loopt.com let you find your friends, while twitter.com lets you tell the entire world where you are and what you are doing. The Web is a confused, disorganized, chaotic realm, rich in both gold and garbage.
The “old” web, “Web 1.0,” as we now refer to it, was just an information resource. You asked to see something, and you got to see it. Part of the disinhibition that happens on the new “Web 2.0” social networking sites is due to the fact that they still allow the movie-screen illusion—that we are “just looking,” or if we are contributing, we are not leaving footprints or fingerprints if we use pseudonyms. (See Chapter 4 for more on Web 1.0 and Web 2.0.)
But of course, that is not really the way the Web ever worked. It is important to remember that even Web 1.0 was never anonymous, and even “just looking” leaves fingerprints.
In July 2006, a New York Times reporter called Thelma Arnold of Lilburn, Georgia. Thelma wasn’t expecting the call. She wasn’t famous, nor was she involved in anything particularly noteworthy. She enjoyed her hobbies, helped her friends, and from time to time looked up things on the Web—stuff about her dogs, and her friends’ ailments.
Then AOL, the search engine she used, decided to release some “anonymous” query data. Thelma, like most Internet users, may not have known that AOL had kept every single topic that she, and every other one of their users, had asked about. But it did. In a moment of unenlightened generosity, AOL released for research use a small sample: about 20 million queries from 658,000 different users. That is actually not a lot of data by today’s standards. For example, in July 2007, there were about 5.6 billion search engine queries, of which roughly 340 million were AOL queries. So, 20 million queries comprise only a couple of days’ worth of search queries. In an effort to protect their clients’ privacy, AOL “deidentified” the queries. AOL never mentioned anyone by name; they used random numbers instead. Thelma was 4417149. AOL mistakenly presumed that removing a single piece of personal identification would make it hard to figure out who the users were. It turned out that for some of the users, it wasn’t hard at all.
It didn’t take much effort to match Thelma with her queries. She had searched for “landscapers in Lilburn, GA” and several people with the last name “Arnold,” leading to the obvious question of whether there were any Arnolds in Lilburn. Many of Thelma’s queries were not particularly useful for identifying her, but were revealing nonetheless: “dry mouth,” “thyroid,” “dogs that urinate on everything,” and “swing sets.”
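The mechanics of re-identification are simple enough to sketch in a few lines of code. The data and names below are hypothetical (this is not AOL’s released file or the reporters’ actual method), but they illustrate the point: once the queries themselves mention a small town and a family name, the random user number protects nothing.

```python
# Illustrative sketch of re-identifying a "deidentified" query log.
# The log contents here are invented for the example.

import re

# Pseudonymized log: random user number -> that user's search queries
log = {
    4417149: ["landscapers in lilburn, ga", "arnold family",
              "dry mouth", "dogs that urinate on everything"],
    9999999: ["weather boston", "red sox schedule"],
}

def candidates(log, place, surname):
    """Return user numbers whose queries mention both a place and a surname."""
    hits = []
    for user, queries in log.items():
        text = " ".join(queries)
        if re.search(place, text) and re.search(surname, text):
            hits.append(user)
    return hits

print(candidates(log, r"lilburn", r"arnold"))  # -> [4417149]
```

Cross-referencing the shortlist against a phone book for the town finishes the job, which is essentially what the New York Times did to find Thelma Arnold.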
Thelma was not the only person to be identified. User 22690686 (Terri) likes astrology, and the Edison National Bank, Primerica, and Budweiser. 5779844 (Lawanna) was interested in credit reports, and schools. From what he searched for, user 356693 seems to have been an aide to Chris Shays, Congressman from Connecticut.
One of the privacy challenges that we confront as we rummage through the rubble of the digital explosion is that information exists without context. Was Thelma Arnold suffering from a wide range of ailments? One might readily conclude that from her searches. The fact is that she often tried to help her friends by understanding their medical problems.
Or consider AOL user 17556639, whose search history was released along with Thelma Arnold’s. He searched for the following:
how to kill your wife 23 Mar, 22:09
wife killer 23 Mar, 22:11
poop 23 Mar, 22:12
dead people 23 Mar, 22:13
pictures of dead people 23 Mar, 22:15
killed people 23 Mar, 22:16
dead pictures 23 Mar, 22:17
murder photo 23 Mar, 22:20
steak and cheese 23 Mar, 22:22
photo of death 23 Mar, 22:30
death 23 Mar, 22:33
dead people photos 23 Mar, 22:33
photo of dead people 23 Mar, 22:35
www.murderdpeople.com 23 Mar, 22:37
decapitated photos 23 Mar, 22:39
car crashes 23 Mar, 22:40
car crash photo 23 Mar, 22:41
Is this AOL user a potential criminal? Should AOL have called the police? Is 17556639 about to kill his wife? Is he (or she) a researcher with a spelling problem and an interest in Philly cheese steak? Is reporting him to the police doing a public service, or is it an invasion of privacy?
There is no way to tell just from these queries if this user was contemplating some heinous act or doing research for a novel that involves some grisly scenes. When information is incomplete and decontextualized, it is hard to judge meaning and intent.
In this particular case, we happen to know the answer. The user, Jason from New Jersey, was just fooling around, trying to see if Big Brother was watching. He wasn’t planning to kill his wife at all. Inference from incomplete data has the problem of false positives—thinking you have something that you don’t, because there are other patterns that fit the same data.
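The false-positive problem can be made concrete with Bayes’ rule. A minimal calculation, with assumed illustrative numbers (none of them drawn from the AOL data): even a screening rule that is right 99% of the time, applied to a population in which genuine wife-killers are vanishingly rare, flags almost exclusively innocent people.

```python
# Base-rate arithmetic with assumed numbers: suppose 1 in 100,000 users
# typing queries like these actually intends harm, and a screening rule
# is 99% sensitive with a 1% false-positive rate on innocent users.

prior = 1 / 100_000        # P(intends harm)
sensitivity = 0.99         # P(flagged | intends harm)
false_positive = 0.01      # P(flagged | innocent)

# Total probability of being flagged, then Bayes' rule
p_flagged = sensitivity * prior + false_positive * (1 - prior)
p_harm_given_flag = sensitivity * prior / p_flagged

print(f"{p_harm_given_flag:.4f}")  # roughly 0.001: ~99.9% of flags are innocent
```

Under these assumptions, fewer than one flagged user in a thousand is actually dangerous, which is why calling the police on every Jason would mostly punish the harmless.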
Information without context often leads to erroneous conclusions. Because our digital trails are so often retrieved outside the context within which they were created, they sometimes suggest incorrect interpretations. Data interpretation comes with balanced social responsibilities, to protect society when there is evidence of criminal behavior or intent, and also to protect the individual when such evidence is too limited to be reliable. Of course, for every example of misleading and ambiguous data, someone will want to solve the problems it creates by collecting more data, rather than less.
Beyond Privacy

There is nothing new under the sun, and the struggles to define and enforce privacy are no exception. Yet history shows that our concept of privacy has evolved, and the law has evolved with it. With the digital explosion, we have arrived at a moment where further evolution will have to take place rather quickly.
Leave Me Alone

More than a century ago, two lawyers raised the alarm about the impact technology and the media were having on personal privacy:
Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that “what is whispered in the closet shall be proclaimed from the house-tops.”
This statement is from the seminal law review article on privacy, published in 1890 by Boston attorney Samuel Warren and his law partner, Louis Brandeis, later to be a justice of the U.S. Supreme Court. Warren and Brandeis went on, “Gossip is no longer the resource of the idle and of the vicious, but has become a trade, which is pursued with industry as well as effrontery. To satisfy a prurient taste the details of sexual relations are spread broadcast in the columns of the daily papers. To occupy the indolent, column upon column is filled with idle gossip, which can only be procured by intrusion upon the domestic circle.” New technologies made this garbage easy to produce, and then “the supply creates the demand.”

And those candid photographs and gossip columns were not merely tasteless; they were bad. Sounding like modern critics of mindless reality TV, Warren and Brandeis raged that society was going to hell in a handbasket because of all that stuff that was being spread about.
Even gossip apparently harmless, when widely and persistently circulated, is potent for evil. It both belittles and perverts. It belittles by inverting the relative importance of things, thus dwarfing the thoughts and aspirations of a people. When personal gossip attains the dignity of print, and crowds the space available for matters of real interest to the community, what wonder that the ignorant and thoughtless mistake its relative importance. Easy of comprehension, appealing to that weak side of human nature which is never wholly cast down by the misfortunes and frailties of our neighbors, no one can be surprised that it usurps the place of interest in brains capable of other things. Triviality destroys at once robustness of thought and delicacy of feeling. No enthusiasm can flourish, no generous impulse can survive under its blighting influence.
The problem they perceived was that it was hard to say just why such invasions of privacy should be unlawful. In individual cases, you could say something sensible, but the individual legal decisions were not part of a general regime. The courts had certainly applied legal sanctions for defamation—publishing malicious gossip that was false—but then what about malicious gossip that was true? Other courts had imposed penalties for publishing an individual’s private letters—but on the basis of property law, just as though the individual’s horse had been stolen rather than the words in his letters. That did not seem to be the right analogy either. No, they concluded, such rationales didn’t get to the nub. When something private is published about you, something has been taken from you, you are a victim of theft—but the thing stolen from you is part of your identity as a person. In fact, privacy was a right, they said, a “general right of the individual to be let alone.” That right had long been in the background of court decisions, but the new technologies had brought this matter to a head. In articulating this new right, Warren and Brandeis were, they asserted, grounding it in the principle of “inviolate personhood,” the sanctity of individual identity.
Privacy and Freedom

The Warren-Brandeis articulation of privacy as a right to be left alone was influential, but it was never really satisfactory. Throughout the twentieth century, there were simply too many good reasons for not leaving people alone, and too many ways in which people preferred not to be left alone. And in the U.S., First Amendment rights stood in the way of privacy rights. As a general rule, the government simply cannot stop me from saying anything. In particular, it usually cannot stop me from saying what I want about your private affairs. Yet the Warren-Brandeis definition worked well enough for a long time, because, as Robert Fano put it, “The pace of technological progress was for a long time sufficiently slow as to enable society to learn pragmatically how to exploit new technology and prevent its abuse, with society maintaining its equilibrium most of the time.” By the late 1950s, the emerging electronic technologies, both computers and communication, had destroyed that balance. Society could no longer adjust pragmatically, because surveillance technologies were developing too quickly.
The result was a landmark study of privacy by the Association of the Bar of the City of New York, which culminated in the publication, in 1967, of a book by Alan Westin, entitled Privacy and Freedom. (Fano was reviewing Westin’s book when he painted the picture of social disequilibrium caused by rapid technological change.) Westin proposed a crucial shift of focus.
Brandeis and Warren had seen a loss of privacy as a form of personal injury, which might be so severe as to cause “mental pain and distress, far greater than could be inflicted by mere bodily injury.” Individuals had to take responsibility for protecting themselves. “Each man is responsible for his own acts and omissions only.” But the law had to provide the weapons with which to resist invasions of privacy.
Westin recognized that the Brandeis-Warren formulation was too absolute, in the face of the speech rights of other individuals and society’s legitimate data-gathering practices. Protection might come not from protective shields, but from control over the uses to which personal information could be put. “Privacy,” wrote Westin, “is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.”
… what is needed is a structured and rational weighing process, with definite criteria that public and private authorities can apply in comparing the claim for disclosure or surveillance through new devices with the claim to privacy. The following are suggested as the basic steps of such a process: measuring the seriousness of the need to conduct surveillance; deciding whether there are alternative methods to meet the need; deciding what degree of reliability will be required of the surveillance instrument; determining whether true consent to surveillance has been given; and measuring the capacity for limitation and control of the surveillance if it is allowed.
So even if there were a legitimate reason why the government, or some other party, might know something about you, your right to privacy might limit what the knowing party could do with that information.
This more nuanced understanding of privacy emerged from the important social roles that privacy plays. Privacy is not, as Warren and Brandeis had it, the right to be isolated from society—privacy is a right that makes society work. Fano mentioned three social roles of privacy. First, “the right to maintain the privacy of one’s personality can be regarded as part of the right of self-preservation”—the right to keep your adolescent misjudgments and personal conflicts to yourself, as long as they are of no lasting significance to your ultimate position in society. Second, privacy is the way society allows deviations from prevailing social norms, given that no one set of social norms is universally and permanently satisfactory and, indeed, given that social progress requires social experimentation. And third, privacy is essential to the development of independent thought: it enables some decoupling of the individual from society, so that thoughts can be shared in limited circles and rehearsed before public exposure.
Privacy and Freedom, and the rooms full of disk drives that sprouted in government and corporate buildings in the 1960s, set off a round of soul-searching about the operational significance of privacy rights. What, in practice, should those holding a big data bank think about when collecting the data, handling it, and giving it to others?
Fair Information Practice Principles

In 1973, the Department of Health, Education, and Welfare issued “Fair Information Practice Principles” (FIPP), as follows:
• Openness. There must be no personal data record-keeping systems whose very existence is secret.
• Disclosure. There must be a way for a person to find out what information about the person is in a record and how it is used.
• Secondary use. There must be a way for a person to prevent information about the person that was obtained for one purpose from being used or made available for other purposes without the person’s consent.
• Correction. There must be a way for a person to correct or amend a record of identifiable information about the person.
• Security. Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for its intended use and must take precautions to prevent misuses of the data.
These principles were proposed for U.S. medical data, but were never adopted. Nevertheless, they have been the foundation for many corporate privacy policies. Variations on these principles have been codified in international trade agreements by the Organisation for Economic Co-operation and Development (OECD) in 1980, and within the European Union (EU) in 1995. In the United States, echoes of these principles can be found in some state laws, but federal laws generally treat privacy on a case-by-case or “sectorial” basis. The 1974 Privacy Act applies to interagency data transfers within the federal government, but places no limitations on data handling in the private sector. The Fair Credit Reporting Act applies only to consumer credit data, but does not apply to medical data. The Video Privacy Protection Act applies only to videotape rentals, but not to “On Demand” movie downloads, which did not exist when the Act was passed! Finally, few federal or state laws apply to the huge data banks in the file cabinets and computer systems of cities and towns. American government is decentralized, and authority over government data is decentralized as well.
The U.S. is not lacking in privacy laws. But privacy has been legislated inconsistently and confusingly, and in terms dependent on technological contingencies. There is no national consensus on what should be protected, and how protections should be enforced. Without a more deeply informed collective judgment on the benefits and costs of privacy, the current legislative hodgepodge may well get worse in the United States.
The discrepancy between American and European data privacy standards threatened U.S. involvement in international trade, because an EU directive would prohibit data transfers to nations, such as the U.S., that do not meet the European “adequacy” standard for privacy protection. Although the U.S. sectorial approach continues to fall short of European requirements, in 2000 the European Commission created a “safe harbor” for American businesses with multinational operations. This allowed individual corporations to establish that their practices are adequate with respect to seven principles, covering notice, choice, onward transfer, access, security, data integrity, and enforcement.
It is, unfortunately, too easy to debate whether the European omnibus approach is more principled than the U.S. piecemeal approach, when the real question is whether either approach accomplishes what we want it to achieve. The Privacy Act of 1974 assured us that obscure statements would be buried deep in the Federal Register, providing the required official notice about massive governmental data collection plans—better than nothing, but providing “openness” only in a narrow and technical sense. Most large corporations doing business with the public have privacy notices, and virtually no one reads them. Only 0.3% of Yahoo! users read its privacy notice in 2002, for example. In the midst of massive negative publicity that year when Yahoo! changed its privacy policy to allow advertising messages, the number of users who accessed the privacy policy rose only to 1%. None of the many U.S. privacy laws prevented the warrantless wiretapping program instituted by the Bush administration, nor the cooperation with it by major U.S. telecommunications companies.
Indeed, cooperation between the federal government and private industry seems more essential than ever for gathering information about drug trafficking and international terrorism, because of yet another technological development. Twenty years ago, most long-distance telephone calls spent at least part of their time in the air, traveling by radio waves between microwave antenna towers or between the ground and a communication satellite. Government eavesdroppers could simply listen in (see the discussion of Echelon in Chapter 5). Now many phone calls travel through fiber optic cables instead, and the government is seeking the capacity to tap this privately owned infrastructure.
High privacy standards have a cost. They can limit the public usefulness of data. Public alarm about the release of personal medical information has led to major legislative remedies. The Health Insurance Portability and Accountability Act (HIPAA) was intended both to encourage the use of electronic data interchange for health information, and to impose severe penalties for the disclosure of “Protected Health Information,” a very broad category including not just medical histories but, for example, medical payments. The law mandates the removal of anything that could be used to re-connect medical records to their source. HIPAA is fraught with problems in an environment of ubiquitous data and powerful computing. Connecting the dots by assembling disparate data sources makes it extremely difficult to achieve the level of anonymity that HIPAA sought to guarantee. But help is available, for a price, from a whole new industry of HIPAA-compliance advisors. If you search for HIPAA online, you will likely see advertisements for services that will help you protect your data, and also keep you out of jail.
At the same time as HIPAA and other privacy laws have safeguarded our personal information, they are making medical research costly and sometimes impossible to conduct. It is likely that classic studies such as the Framingham Heart Study, on which much public policy about heart disease was founded, could not be repeated in today’s environment of strengthened privacy rules. Dr. Roberta Ness, president of the American College of Epidemiology, reported that “there is a perception that HIPAA may even be having a negative effect on public health surveillance practices.”
The European reliance on the Fair Information Practice Principles is often no more useful, in practice, than the American approach. Travel through London, and you will see many signs saying “Warning: CCTV in use” to meet the “Openness” requirement about the surveillance cameras. That kind of notice throughout the city hardly empowers the individual. After all, even Big Brother satisfied the FIPP Openness standard, with the ubiquitous notices that he was watching! And the “Secondary Use” requirement, that European citizens should be asked permission before data collected for one purpose is used for another, is regularly ignored in some countries, although compliance practices are a major administrative burden on European businesses and may cause European businesses at least to pause and think before “repurposing” data they have gathered. Sociologist Amitai Etzioni repeatedly asks European audiences if they have ever been asked for permission to re-use data collected about them, and has gotten only a single positive response—and that was from a gentleman who had been asked by a U.S. company.
The five FIPP principles, and the spirit of transparency and personal control that lay behind them, have doubtless led to better privacy practices. But they have been overwhelmed by the digital explosion, along with the insecurity of the world and all the social and cultural changes that have occurred in daily life. Fred H. Cate, a privacy scholar at Indiana University, characterizes the FIPP principles as almost a complete bust:
Modern privacy law is often expensive, bureaucratic, burdensome, and offers surprisingly little protection for privacy. It has substituted individual control of information, which it in fact rarely achieves, for privacy protection. In a world rapidly becoming more global through information technologies, multinational commerce, and rapid travel, data protection laws have grown more fractured and protectionist.
Those laws have become unmoored from their principled basis, and the principles on which they are based have become so varied and procedural, that our continued intonation of the FIPPS mantra no longer obscures the fact that this emperor indeed has few if any clothes left.
Privacy as a Right to Control Information

It is time to admit that we don’t even really know what we want. The bits are everywhere; there is simply no locking them down, and no one really wants to do that anymore. The meaning of privacy has changed, and we do not have a good way of describing it. It is not the right to be left alone, because not even the most extreme measures will disconnect our digital selves from the rest of the world. It is not the right to keep our private information to ourselves, because the billions of atomic factoids no longer lend themselves to a binary classification, private or public.
Reade Seligmann would probably value his privacy more than most Americans alive today. On Monday, April 17, 2006, Seligmann was indicted in connection with allegations that a 27-year-old performer had been raped at a party at a Duke fraternity house. He and several of his lacrosse teammates instantly became poster children for everything that is wrong with American society—an example of national over-exposure that would leave even Warren and Brandeis breathless if they were around to observe it. Seligmann denied the charges, and at first it looked like a typical he-said, she-said scenario, which could be judged only on credibility and presumptions about social stereotypes.
But during the evening of that fraternity party, Seligmann had left a trail of digital detritus. His data trail indicated that he could not have been at the party long enough, or at the right time, to have committed the alleged rape. Time-stamped photos from the party showed that the alleged victim of his rape was dancing at 12:02 AM. At 12:24 AM, he used his ATM card at a bank, and the bank’s computers kept records of the event. Seligmann used his cell phone at 12:25 AM, and the phone company tracked every call he made, just as your phone company keeps a record of every call you make and receive. Seligmann used his prox card to get into his dormitory room at 12:46 AM, and the university’s computer kept track of his comings and goings, just as other computers keep track of every card swipe or RFID wave you and I make in our daily lives. Even during the ordinary movements of a college student going to a fraternity party, every step along the way was captured in digital detail. If Seligmann had gone to the extraordinary lengths necessary to avoid leaving digital fingerprints—not using a modern camera, a cell phone, or a bank, and living off campus to avoid electronic locks—his defense would have lacked important exculpatory evidence.
Which would we prefer—the new world with digital fingerprints everywhere and the constant awareness that we are being tracked, or the old world with few digital footprints and a stronger sense of security from prying eyes? And what is the point of even asking the question, when the world cannot be restored to its old information lock-down?
In a world that has moved beyond the old notion of privacy as a wall around the individual, we could instead regulate those who would inappropriately use information about us. If I post a YouTube video of myself dancing in the nude, I should expect to suffer some personal consequences. Ultimately, as Warren and Brandeis said, individuals have to take responsibility for their actions. But society has drawn lines in the past around which facts are relevant to certain decisions, and which are not. Perhaps, the border of privacy having become so porous, the border of relevancy could be stronger. As Daniel Weitzner explains:
New privacy laws should emphasize usage restrictions to guard against unfair discrimination based on personal information, even if it’s publicly available. For instance, a prospective employer might be able to find a video of a job applicant entering an AIDS clinic or a mosque. Although the individual might have already made such facts public, new privacy protections would preclude the employer from making a hiring decision based on that information and attach real penalties for such abuse.
In the same vein, it is not intrinsically wrong that voting lists and political contributions are a matter of public record. Arguably, they are essential to the good functioning of the American democracy. Denying someone a promotion because of his or her political inclinations would be wrong, at least for most jobs. Perhaps a nuanced classification of the ways in which others are allowed to use information about us would relieve some of our legitimate fears about the effects of the digital explosion.
In The Transparent Society, David Brin wrote:
Transparency is not about eliminating privacy. It’s about giving us the power to hold accountable those who would violate it. Privacy implies serenity at home and the right to be let alone. It may be irksome how much other people know about me, but I have no right to police their minds. On the other hand I care very deeply about what others do to me and to those I love. We all have a right to some place where we can feel safe.
Despite the very best efforts, and the most sophisticated technologies, we cannot control the spread of our private information. And we often want information to be made public to serve our own, or society’s, purposes.
Yet there can still be principles of accountability for the misuse of information. Some ongoing research is outlining a possible new web technology, which would help ensure that information is used appropriately even if it is known. Perhaps automated classification and reasoning tools, developed to help connect the dots in networked information systems, can be retargeted to limit inappropriate use of networked information. A continuing border war is likely to be waged, however, along an existing free speech front: the line separating my right to tell the truth about you from your right not to have that information used against you. In the realm of privacy, the digital explosion has left matters deeply unsettled.
Always On
In 1984, the pervasive, intrusive technology could be turned off:
As O’Brien passed the telescreen a thought seemed to strike him. He stopped, turned aside and pressed a switch on the wall. There was a sharp snap. The voice had stopped.
Julia uttered a tiny sound, a sort of squeak of surprise. Even in the midst of his panic, Winston was too much taken aback to be able to hold his tongue.
“You can turn it off!” he said.
“Yes,” said O’Brien, “we can turn it off. We have that privilege. …Yes, everything is turned off. We are alone.”
Sometimes we can still turn it off today, and should. But mostly we don’t want to. We don’t want to be alone; we want to be connected. We find it convenient to leave it on, to leave our footprints and fingerprints everywhere, so we will be recognized when we come back. We don’t want to have to keep retyping our name and address when we return to a web site. We like it when the restaurant remembers our name, perhaps because our phone number showed up on caller ID and was linked to our record in their database. We appreciate buying grapes for $1.95/lb instead of $3.49, just by letting the store know that we bought them. We may want to leave it on for ourselves because we know it is on for criminals. Being watched reminds us that they are watched as well. Being watched also means we are being watched over.
And perhaps we don’t care that so much is known about us because that is the way human society used to be—kinship groups and small settlements, where knowing everything about everyone else was a matter of survival. Having it on all the time may resonate with inborn preferences we acquired millennia ago, before urban life made anonymity possible. Still today, privacy means something very different in a small rural town than it does on the Upper East Side of Manhattan.
We cannot know what the cost will be of having it on all the time. Just as troubling as the threat of authoritarian measures to restrict personal liberty is the threat of voluntary conformity. As Fano astutely observed, privacy allows limited social experimentation—the deviations from social norms that are much riskier to the individual in the glare of public exposure, but which can be, and often have been in the past, the leading edges of progressive social changes. With it always on, we may prefer not to try anything unconventional, and stagnate socially by collective inaction.
For the most part, it is too late, realistically, ever to turn it off. We may once have had the privilege of turning it off, but we have that privilege no more. We have to solve our privacy problems another way.
✷
The digital explosion is shattering old assumptions about who knows what. Bits move quickly, cheaply, and in multiple perfect copies. Information that used to be public in principle—for example, records in a courthouse, the price you paid for your house, or stories in a small-town newspaper—is now available to everyone in the world. Information that used to be private and available to almost no one—medical records and personal snapshots, for example—can become equally widespread through carelessness or malice. The norms and business practices and laws of society have not caught up to the change.
The oldest durable communication medium is the written document. Paper documents have largely given way to electronic analogs, from which paper copies are produced. But are electronic documents really like paper documents? Yes and no, and misunderstanding the document metaphor can be costly. That is the story to which we now turn.
CHAPTER 7
You Can’t Say That on the Internet
Guarding the Frontiers of Digital Expression
Do You Know Where Your Child Is on the Web Tonight?
It was every parent’s worst nightmare. Katherine Lester, a 16-year-old honors student from Fairgrove, Michigan, went missing in June 2006. Her parents had no idea what had happened to her; she had never given them a moment’s worry. They called the police. Then federal authorities got involved. After three days of terrifying absence, she was found, safe—in Amman, Jordan.
Fairgrove is too small to have a post office, and the Lesters lived in the last house on a dead-end street. In another time, Katherine’s school, six miles away, might have been the outer limit of her universe. But through the Internet, her universe was—the whole world. Katherine met a Palestinian man, Abdullah Jimzawi, from Jericho on the West Bank. She found his profile on the social networking web site, MySpace, and sent him a message: “u r cute.” They quickly learned everything about each other through online messages. Lester tricked her mother into getting her a passport, and then took off for the Middle East. When U.S. authorities met her plane in Amman, she agreed to return home, and apologized to her parents for the distress she had caused them.
A month later, Representative Judy Biggert of Illinois rose in the House to co-sponsor the Deleting Online Predators Act (DOPA). “MySpace.com and other networking web sites have become new hunting grounds for child predators,” she said, noting that “we were all horrified” by the story of Katherine Lester. “At least let’s give parents some comfort that their children won’t fall prey while using the Internet at schools and libraries that receive federal funding for Internet services.” The law would require those institutions to prevent children from using on-location computers to access chat rooms and social networking web sites without adult supervision.
Speaker after speaker rose in the House to stress the importance of protecting children from online predators, but not all supported the bill. The language was “overbroad and ambiguous,” said one. As originally drafted, it seemed to cover not just MySpace, but sites such as Amazon and Wikipedia. These sites possess some of the same characteristics as MySpace—users can create personal profiles and continually share information with each other using the Web. Although the law might block children in schools and libraries from “places” where they meet friends (and sometimes predators), it would also prevent access to online encyclopedias and bookstores, which rely on content posted by users.
Instead of taking the time to develop a sharper definition of what exactly was to be prohibited, DOPA’s sponsors hastily redrafted the law to omit the definition, leaving it to the Federal Communications Commission to decide later just what the law would cover. Some murmured that the upcoming midterm elections were motivating the sponsors to put forward an ill-considered and showy effort to protect children—an effort that would likely be ineffective and so vague as to be unconstitutional.
Children use computers in lots of places; restricting what happens in schools and libraries would hardly discourage determined teenagers from sneaking onto MySpace. Only the most overbearing parents could honestly answer the question USA Today asked in its article about “cyber-predators”: “It’s 11 p.m. Do you know where your child is on the Web tonight?”
The statistics about what can go wrong were surely terrifying. The Justice Department has made thousands of arrests for “cyber enticement”—almost always older men using social networking web sites to lure teenagers into meetings, some of which end very badly. Yet, as the American Library Association stated in opposition to DOPA, education, not prohibition, is the “key to safe use of the Internet.” Students have to learn to cooperate online, because network use, and all the human interactions it enables, are basic tools of the new, globally interconnected world of business, education, and citizenship.
And perhaps even the globally interconnected world of true love. The tale of Katherine Lester took an unexpected turn. From the moment she was found in Jordan, Lester steadily insisted that she intended to marry Jimzawi. Jimzawi, who was 20 when he and Lester first made contact, claimed to be in love with her—and his mother agreed. Jimzawi begged Lester to tell her parents the truth before she headed off to meet him, but she refused. Upon her return, authorities charged Lester as a runaway child and took her passport away from her. But on September 12, 2007, having attained legal independence by turning 18, she again boarded a plane to the Middle East, finally to meet her beloved face to face. The affair finally ended a few weeks later in an exchange of accusations and denials, and a hint that a third party had attracted Lester’s attentions. There was no high-tech drama to the breakup— except that it was televised on Dr. Phil.
The explosion in digital communications has confounded long-held assumptions about human relationships—how people meet, how they come to know each other, and how they decide if they can trust each other. At the same time, the explosion in digital information, in the form of web pages and downloadable photographs, has put at the fingertips of millions material that only a few years ago no one could have found without great effort and expense. Political dissidents in Chinese Internet cafés can (if they dare) read pro-democracy blogs. People all around the world who are ashamed about their illness, starved for information about their sexual identity, or eager to connect with others of their minority faith can find facts, opinion, advice, and companionship. And children too small to leave home by themselves can see lurid pornography on their families’ home computers. Can societies anymore control what their members see and to whom they talk?
Metaphors for Something Unlike Anything Else
DOPA, which has not been passed into law, is the latest battle in a long war between conflicting values. On the one hand, society has an interest in keeping unwanted information away from children. On the other hand, society as a whole has an interest in maximizing open communication. The U.S. Constitution largely protects the freedom to speak and the right to hear. Over and over, society has struggled to find a metaphor for electronic communication that captures the ways in which it is the same as the media of the past and the ways in which it is different. Laws and regulations are built on traditions; only by understanding the analogies can the speech principles of the past be extended to the changed circumstances of the present or be consciously transcended.
What laws should apply? The Internet is not exactly like anything else. If you put up a web site, that is something like publishing a book, so perhaps the laws about books should apply. But that was Web 1.0: a way for “publishers” to publish and viewers to view. In the dynamic and participatory Web 2.0, sites such as MySpace change constantly in response to user postings. If you send an email, or contribute to a blog, that is something like placing a telephone call, or maybe a conference call, so maybe laws about telephones should be the starting point. Neither metaphor is perfect. Maybe television is a better analogy, since browsing the Web is like channel surfing, except that the Internet is two-way, and there is no limit to the number of “channels.”
Underneath the web software and the email software is the Internet itself. The Internet just delivers packets of bits, not knowing or caring whether they are parts of books, movies, text messages, or voices, nor whether the bits will wind up in a web browser, a telephone, or a movie projector. John Perry Barlow, former lyricist for the Grateful Dead and co-founder of the Electronic Frontier Foundation, used a striking metaphor to describe the Internet as it burst into public consciousness in the mid-1990s. The world’s regulation of the flow of information, he said, had long controlled the transport of wine bottles. In “meatspace,” the physical world, different rules applied to books, postal mail, radio broadcasts, and telephone calls—different kinds of bottles. Now the wine itself flowed freely through the network, nothing but bits freed from their packaging. Anything could be put in, and the same kind of thing would come out. But in between, it was all the same stuff—just bits. What are the rules of Cyberspace—what are the rules for the bits themselves?
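Barlow’s image of wine freed from its bottles can be made concrete with a few lines of code. The sketch below is our illustration, not the authors’: it sends an arbitrary byte string over a local UDP socket, showing that the network delivers a payload unchanged, with no notion of whether the bits began life as text, an image, or a voice.

```python
import socket

# A minimal sketch: the network layer carries undifferentiated bits.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
receiver.settimeout(5)
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Four bytes of a PNG image header followed by plain text: to the
# network, both halves are simply numbers between 0 and 255.
payload = bytes([0x89, 0x50, 0x4E, 0x47]) + b"any bits at all"
sender.sendto(payload, ("127.0.0.1", port))

data, _ = receiver.recvfrom(1024)
assert data == payload                   # the wine arrives exactly as poured
print(len(data), "bytes delivered, content untouched")
sender.close()
receiver.close()
```

Nothing in the packet itself says “book,” “movie,” or “phone call”; any such distinction is imposed at the endpoints, which is exactly why the old bottle-by-bottle rules fit so poorly.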
When information is transmitted between two parties, whether the information is spoken words, written words, pictures, or movies, there is a source and a destination. There may also be some intermediaries. In a lecture hall, the listeners hear the speaker directly, although whoever provided the hall also played an important role in making the communication possible. Books have authors and readers, but also publishers and booksellers in between. It is natural to ascribe similar roles to the various parties in an Internet communication, and, when things go wrong, to hold any and all of the parties responsible. For example, when Pete Solis contacted a 14-year-old girl (“Jane Doe”) through her MySpace profile and allegedly sexually assaulted her when they met in person, the girl’s parents sued MySpace for $30 million for enabling the assault.
The Internet has a complex structure. The source and destination may be friends emailing each other, they may be a commercial web site and a residential customer, or they may be one office of a company sending a mockup of an advertising brochure to another office halfway around the world. The source and destination each has an ISP. Connecting the ISPs are routing switches, fiber optic cables, satellite links, and so on. A packet that flows through the Internet may pass through devices and communication links owned by dozens of different parties. For convenience (and in the style of Jonathan Zittrain), we’ll call the collection of devices that connect the ISPs to each other the cloud. As shown in Figure 7.1, speech on the Internet goes from the source to an ISP, into the cloud, out of the cloud to another ISP, and to its destination (see the sidebar, “Cloud Computing,” in Chapter 3 for additional information about this).
If a government seeks to control speech, it can attack at several different points. It can try to control the speaker or the speaker’s ISP, by criminalizing certain kinds of speech. But that won’t work if the speaker isn’t in the same country as the listener. It can try to control the listener, by prohibiting possession of certain kinds of materials. In the U.S., possession of copyrighted software without an appropriate license is illegal, as is possession of other copyrighted material with the intent to profit from redistributing it. If citizens have reasonable privacy rights, however, it is hard for the government to know what its citizens possess. In a society such as the U.S., where citizens have reasonable rights of due process, one-at-a-time prosecutions for possession are unwieldy. As a final alternative, the government can try to control the intermediaries.
There are parallels in civil law. The parents of the Jane Doe sued MySpace because it was in the communication path between Mr. Solis and their daughter, even though MySpace was not the alleged assailant.
DEFAMING PUBLIC FIGURES
Damaging statements about public figures, even if false, are not defamatory unless they were made with “actual malice,” that is, with knowledge of their falsity or with reckless disregard for the truth. This extra requirement protects news media against libel claims by celebrities who are offended by the way the press depicts them. It was not always so, however. The pivotal case was New York Times Co. v. Sullivan, 376 U.S. 254 (1964), in which the newspaper was sued by officials in Alabama on the basis of a pro-civil-rights advertisement it published. The story is detailed, along with a readable history of the First Amendment, in Make No Law by Anthony Lewis (Vintage Paperback, 1992). For a later account of First Amendment struggles, see Lewis’s Freedom for the Thought That We Hate (Basic Books, 2008).
Very early, defamation laws had to adapt to the Internet. In the U.S., speech is defamatory if it is false, communicated to third parties, and damages one’s reputation.
In the physical world, when the speaker defames someone, the intermediaries between the speaker and the listener sometimes share responsibility with the speaker—and sometimes not. If we defame someone in this book, we may be sued, but so may the book’s publisher, who might have known that what we were writing was false. On the other hand, the trucker who transported the book to the bookstore probably isn’t liable, even though he too helped get our words from us to our readers. Are the various electronic intermediaries more like publishers, or truckers? Do the parents of Jane Doe have a case against MySpace?
Society has struggled to identify the right metaphors to describe the parties to an electronic communication. To understand this part of the story of electronic information, we have to go back to pre-Internet electronic communication.
Publisher or Distributor?
CompuServe was an early provider of computer services, including bulletin boards and other electronic communities users could join for a fee. One of these fora, Rumorville USA, provided a daily newsletter of reports about broadcast journalism and journalists. CompuServe didn’t screen or even collect the rumors posted on Rumorville. It contracted with a third party, Don Fitzpatrick Associates (DFA), to provide the content. CompuServe simply posted whatever DFA provided without reviewing it. And for a long time, no one complained.
In 1990, a company called Cubby, Inc. started a competing service, Skuttlebut, which also reported gossip about TV and radio broadcasting. Items appeared on Rumorville describing Skuttlebut as a “new start-up scam” and alleging that its material was being stolen from Rumorville. Cubby cried foul and went after CompuServe, claiming defamation. CompuServe acknowledged that the postings were defamatory, but claimed it was not acting as a publisher of the information—just a distributor. It simply was sending on to subscribers what other people gave it. It wasn’t responsible for the contents, any more than a trucker is responsible for libel that might appear in the magazines he handles.
What was the right analogy? Was CompuServe more like a newspaper, or more like the trucker who transports the newspaper to its readers?
More like the trucker, ruled the court. A long legal tradition held distributors blameless for the content of the publications they delivered. Distributors can’t be expected to have read all the books on their trucks. Grasping for a better analogy, the court described CompuServe as “an electronic for-profit library.” Distributor or library, CompuServe was independent of DFA and couldn’t be held responsible for libelous statements in what DFA provided. The case of Cubby v. CompuServe was settled decisively in CompuServe’s favor. Cubby might go after the source, but that wasn’t CompuServe. CompuServe was a blameless intermediary. So was MySpace, years later, when Jane Doe’s parents sought redress for Mr. Solis’s alleged assault of their daughter. In a ruling building on the Cubby decision, MySpace was absolved of responsibility for what Solis had posted.
When Cubby v. CompuServe was decided, providers of computer services everywhere exhaled. If the decision had gone the other way, electronic distribution of information might have become a risky business that few dared to enter. Computer networks created an information infrastructure unprecedented in its low overhead. A few people could connect tens of thousands, even millions, to each other at very low cost. If everything disseminated had to be reviewed by human readers before it was posted, to ensure that any damaging statements were truthful, its potential use for participatory democracy would be severely limited. For a time, a spirit of freedom ruled.
Neither Liberty nor Security
“The law often demands that we sacrifice some liberty for greater security. Sometimes, though, it takes away our liberty to provide us less security.” So wrote law professor Eugene Volokh in the fall of 1995, commenting on a court case that looked similar to Cubby v. CompuServe, but wasn’t.
Eugene Volokh has a blog, volokh.com, in which he comments regularly on information freedom issues and many other things.
Prodigy was a provider of computer services, much like CompuServe. But in the early 1990s, as worries began to rise about the sexual content of materials available online, Prodigy sought to distinguish itself as a family-oriented service. It pledged to exercise editorial control over the postings on its bulletin boards. “We make no apology,” Prodigy stated, “for pursuing a value system that reflects the culture of the millions of American families we aspire to serve. Certainly no responsible newspaper does less….” Prodigy’s success in the market was due in no small measure to the security families felt in accessing its fora, rather than the anything-goes sites offered by other services.
One of Prodigy’s bulletin boards, called “Money Talk,” was devoted to financial services. In October 1994, someone anonymously posted comments on Money Talk about the securities investment firm Stratton Oakmont. The firm, said the unidentified poster, was involved in “major criminal fraud.” Its president was “soon to be proven criminal.” The whole company was a “cult of brokers who either lie for a living or get fired.”
Stratton Oakmont sued Prodigy for libel, claiming that Prodigy should be regarded as the publisher of these defamatory comments. It asked for $200 million in damages. Prodigy countered that it had zero responsibility for what its posters said. The matter had been settled several years earlier by the Cubby v. CompuServe decision. Prodigy wasn’t the publisher of the comments, just the distributor.
In a decision that stunned the Internet community, a New York court ruled otherwise. By exercising editorial control in support of its family-friendly image, said the court, Prodigy became a publisher, with the attendant responsibilities and risks. Indeed, Prodigy had likened itself to a newspaper publisher, and could not at trial claim to be something less.
It was all quite logical, as long as the choice was between two metaphors: distributor or newspaper. In reality, though, a service provider wasn’t exactly like either. Monitoring for bad language was a pretty minor form of editorial work. That was a far cry from checking everything for truthfulness.
Be that as it may, the court’s finding undercut efforts to create safe districts in Cyberspace. After the decision, the obvious advice went out to bulletin board operators: Don’t even consider editing or censoring. If you do, Stratton Oakmont v. Prodigy means you may be legally liable for any malicious falsehood that slips by your review. If you don’t even try, Cubby v. CompuServe means you are completely immune from liability.
This was fine for the safety of the site operators, but what about the public interest? Freedom of expression was threatened, since fewer families would be willing to roam freely through the smut that would be posted. At the same time, security would not be improved, since defamers could always post their lies on the remaining services with their all-welcome policies.
The Nastiest Place on Earth
Every communication technology has been used to control, as well as to facilitate, the flow of ideas. Barely a century after the publication of the Gutenberg Bible, Pope Paul IV issued a list of 500 banned authors. In the United States, the First Amendment protects authors and speakers from government interference: Congress shall make no law … abridging the freedom of speech, or of the press …. But First Amendment protections are not absolute. No one has the right to publish obscene materials. The government can destroy materials it judges to be obscene, as postal authorities did in 1918 when they burned magazines containing excerpts of James Joyce’s Ulysses.
What exactly counts as obscene has been a matter of much legal wrangling over the course of U.S. history. The prevailing standard today is the one the Supreme Court used in 1973 in deciding the case of Miller v. California, and is therefore called the Miller Test. To determine whether material is obscene, a court must consider the following:
- Whether the average person, applying contemporary community standards, would find that the work, taken as a whole, appeals to the prurient interest.
- Whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law.
- Whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value.
Only if the answer to each part is “yes” does the work qualify as obscene. The Miller decision was a landmark, because it established that there were no national standards for obscenity. There were only “community” standards, which could be different in Mississippi than in New York City. But there were no computer networks in 1973. What is a “community” in Cyberspace?
In 1992, the infant World Wide Web was hardly world-wide, but many Americans were using dial-up connections to access information on centralized, electronic bulletin boards. Some bulletin boards were free and united communities of interest—lovers of baseball or birds, for example. Others distributed free software. Bob and Carleen Thomas of Milpitas, California, ran a different kind of bulletin board, called Amateur Action. In their advertising, they described it as “The Nastiest Place on Earth.”
For a fee, anyone could download images from Amateur Action. The pictures were of a kind not usually shown in polite company, but readily available in magazines sold in the nearby cities of San Francisco and San Jose. The Thomases were raided by the San Jose police, who thought they might have been distributing obscene materials. After looking at their pictures, the police decided that the images were not obscene by local standards.
Bob and Carleen were not indicted, and they added this notice to their bulletin board: “The San Jose Police Department as well as the Santa Clara County District Attorney’s Office and the State of California agree that Amateur Action BBS is operating in a legal manner.”
Two years later, in February 1994, the Thomases were raided again, and their computer was seized. This time, the complaint came from Agent David Dirmeyer, a postal inspector—in western Tennessee. Using an assumed name, Dirmeyer had paid $55 and had downloaded images to his computer in Memphis. Nasty stuff indeed, particularly for Memphis: bestiality, incest, and sado-masochism. The Thomases were arrested. They stood trial in Memphis on federal charges of transporting obscene material via common carrier, and via interstate commerce. They were convicted by a Tennessee jury, which concluded that their Milpitas bulletin board violated the community standards of Memphis. Bob was sentenced to 37 months incarceration and Carleen to 30.
The Thomases appealed their conviction, on the grounds that they could not have known where the bits were going, and that the relevant community, if not San Jose, was a community of Cyberspace. The appeals court did not agree. Dirmeyer had supplied a Tennessee postal address when he applied for membership in Amateur Action. The Thomases had called him at his Memphis telephone number to give him the password—they had known where he was. The Thomases, concluded the court, should have been more careful where they sent their bits, once they started selling them out of state. Shipping the bits was just like shipping a videotape by UPS (a charge of which the Thomases were also convicted). The laws of meatspace applied to Cyberspace—and one city’s legal standards sometimes applied thousands of miles away.
The Most Participatory Form of Mass Speech
Pornography was part of the electronic world from the moment it was possible to store and transmit words and images. The Thomases learned that bits were like books, and the same obscenity standards applied.
In the mid-1990s, something else happened. The spread of computers and networks vastly increased the number of digital images available and the number of people viewing them. Digital pornography became not just the same old thing in a new form—it seemed to be a brand-new thing, because there was so much of it and it was so easy to get in the privacy of the home. Nebraska Senator James Exon attached an anti-Internet-pornography amendment to a telecommunications bill, but it seemed destined for defeat on civil liberties grounds. And then all hell broke loose.
On July 3, 1995, Time Magazine blasted “CYBERPORN” across its cover. The accompanying story, based largely on a single university report, stated:
What the Carnegie Mellon researchers discovered was: THERE’S AN AWFUL LOT OF PORN ONLINE. In an 18-month study, the team surveyed 917,410 sexually explicit pictures, descriptions, short stories, and film clips. On those Usenet newsgroups where digitized images are stored, 83.5% of the pictures were pornographic.
The article later noted that this statistic referred to only a small fraction of all data traffic but failed to explain that the offending images were mostly on limited-access bulletin boards, not openly available to children or anyone else. It mentioned the issue of government censorship, and it quoted John Perry Barlow on the critical role of parents. Nonetheless, when Senator Grassley of Iowa read the Time Magazine story into the Congressional Record, attributing its conclusions to a study by the well-respected Georgetown University Law School, he called on Congress to “help parents who are under assault in this day and age” and to “help stem this growing tide.”
Grassley’s speech, and the circulation in the Capitol building of dirty pictures downloaded by a friend of Senator Exon, galvanized the Congress to save the children of America. In February 1996, the Communications Decency Act, or CDA, passed almost unanimously and was signed into law by President Clinton.
The CDA made it a crime to use “any interactive computer service to display in a manner available to a person under 18 years of age, any comment, request, suggestion, proposal, image, or other communication that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards, sexual or excretory activities or organs.” Criminal penalties would also fall on anyone who “knowingly permits any telecommunications facility under such person’s control to be used” for such prohibited activities. And finally, it criminalized the transmission of materials that were “obscene or indecent” to persons known to be under 18.
These “display provisions” of the CDA vastly extended existing anti-obscenity laws, which already applied to the Internet. The dual prohibitions against making offensive images available to a person under 18, and against transmitting indecent materials to persons known to be under 18, were unlike anything that applied to print publications. “Indecency,” whatever it meant, was something short of obscenity, and only obscene materials had been illegal prior to the CDA. A newsstand could tell the difference between a 12-year-old customer and a 20-year-old, but how could anyone check ages in Cyberspace?
When the CDA was enacted, John Perry Barlow saw the potential of the Internet for the free flow of information challenged. He issued a now-classic manifesto against the government’s effort to regulate speech:
Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You have no sovereignty where we gather…. We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth. We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity…. In our world, all the sentiments and expressions of humanity, from the debasing to the angelic, are parts of a seamless whole, the global conversation of bits…. [Y]ou are trying to ward off the virus of liberty by erecting guard posts at the frontiers of Cyberspace.
Brave and stirring words, even if the notion of Cyberspace as a “seamless whole” had already been rendered doubtful. At a minimum, bits had to meet different obscenity standards in Memphis than in Milpitas, as the Thomases had learned. In fact, the entire metaphor of the Internet as a “space” with “frontiers” was fatally flawed, and misuse of that metaphor continues to plague laws and policies to this day.
Civil libertarians joined the chorus challenging the Communications Decency Act. In short order, a federal court and the U.S. Supreme Court ruled in the momentous case of ACLU v. Reno. The display provisions of the CDA were unconstitutional. “The Government may only regulate free speech for a compelling reason,” wrote Judge Dalzell in the district court decision, “and in the least restrictive manner.” It would chill discourse unacceptably to demand age verification over the Internet from every person who might see material that any adult has a legal right to see.
The government had argued that the authority of the Federal Communications Commission (FCC) to regulate the content of TV and radio broadcasts, which are required not to be “indecent,” provided an analogy for government oversight of Internet communications.
The courts disagreed. The FCC analogy was wrong, they ruled, because the Internet was far more open than broadcast media. Different media required different kinds of laws, and the TV and radio laws were more restrictive than those for print media, or than laws for the Internet should be. “I have no doubt,” wrote Judge Dalzell, “that a Newspaper Decency Act, passed because Congress discovered that young girls had read a front page article in the New York Times on female genital mutilation in Africa, would be unconstitutional…. The Internet may fairly be regarded as a never-ending worldwide conversation. The Government may not, through the CDA, interrupt that conversation. As the most participatory form of mass speech yet developed, the Internet deserves the highest protection from governmental intrusion.” The CDA’s display provisions were dead.
In essence, the court was unwilling to risk the entire Internet’s promise as a vigorous marketplace of ideas to serve the narrow purpose of protecting children from indecency. Instead, it transferred the burden of blocking unwanted communications from source ISPs to the destination. The DOPA’s proposed burden on libraries and schools is heir to the court’s ruling overturning the CDA. Legally, there seemed to be nowhere else to control speech except at the point where it came out of the cloud and was delivered to the listener.
DEFENDING ELECTRONIC FREEDOMS The Electronic Frontier Foundation, www.eff.org, is the leading public advocacy group defending First Amendment and other personal rights in Cyberspace. Ironically, it often finds itself in opposition to media and telecommunications companies. In principle, communications companies should have the greatest interest in unfettered exchange of information. In actual practice, they often benefit financially from policies that limit consumer choice or expand surveillance and data-gathering about private citizens. The EFF was among the plaintiffs bringing suit in the case that overturned the CDA.
Lost in the 1995–96 Internet indecency hysteria was the fact that the “Carnegie Mellon report” that started the legislative ball rolling had been discredited almost as soon as the Time magazine story appeared. The report’s author, Martin Rimm, was an electrical engineering undergraduate. His study’s methodology was flawed, and perhaps fraudulent. For example, he told adult bulletin board operators that he was studying how best to market pornography online, and that he would repay them for their cooperation by sharing his tips. His conclusions were unreliable. Why hadn’t that been caught when his article was published? Because the article was not a product of Georgetown University, as Senator Grassley had said. Rather, it appeared in the Georgetown Law Journal, a student publication that used neither peer nor professional reviewers. Three weeks after publishing the “Cyberporn” article, Time acknowledged that Rimm’s study was untrustworthy. In spite of this repudiation, Rimm salvaged something from his efforts: He published a book called The Pornographer’s Handbook: How to Exploit Women, Dupe Men, & Make Lots of Money.
Protecting Good Samaritans—and a Few Bad Ones
The Stratton Oakmont v. Prodigy decision, which discouraged ISPs from exercising any editorial judgment, had been handed down in 1995, just as Congress was preparing to enact the Communications Decency Act to protect children from Internet porn. Congress recognized that the consequences of Stratton Oakmont would be fewer voluntary efforts by ISPs to screen their sites for offensive content. So, the bill’s sponsors added a “Good Samaritan” provision to the CDA.
The intent was to allow ISPs to act as editors without running the risk that they would be held responsible for the edited content, thus putting themselves in the jam in which Prodigy had found itself. So the CDA included a provision absolving ISPs of liability on account of anything they did, in good faith, to filter out “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” material. For good measure, the CDA pushed the Cubby court’s “distributor” metaphor to the limit, and beyond. ISPs should not be thought of as publishers, or as sources either. “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This was the bottom line of §230 of the CDA, and it meant that there would be no more Stratton Oakmont v. Prodigy Catch-22s.
When the U.S. Supreme Court struck down the CDA in 1997, it negated only the display provisions, the clauses that threatened the providers of “indecent” content. The Good Samaritan clause was allowed to stand and remains the law today. ISPs can do as much as they want to filter or censor their content, without any risk that they will assume publishers’ liabilities in the process.
THE CDA AND DISCRIMINATION The “Good Samaritan” clause envisioned a sharp line between “service providers” (which got immunity) and “content providers” (which did not). But as the technology world evolved, the distinction became fuzzy. A roommate-matching service was sued in California, on the basis that it invited users to discriminate by categorizing their roommate preferences (women only, for example). A court ruled that the operators of the web site were immune as service providers. An appeals court reversed the decision, on the basis that the web site became a content provider by filtering the information applicants provided—people seeking female roommates would not learn about men looking for roommates. There was nothing wrong with that, but the principle that the roommate service had blanket protection, under the CDA, to filter as it wished would mean that with equal impunity, it could ask about racial preferences and honor them. That form of discrimination would be illegal in newspaper ads. “We doubt,” wrote the appeals court judge, “this is what Congress had in mind when it passed the CDA.”
Or as little as they choose, as Ken Zeran learned to his sorrow a few years later. Prior to the 2001 destruction of New York’s World Trade Center, the worst terrorist attack on U.S. soil was the bombing of the Alfred P. Murrah Federal Building in Oklahoma City on April 19, 1995. It killed 168 people, some of them children in a day care center. Hundreds more were injured when the building collapsed around them and glass and rubble rained down on the neighborhood. One man who made it out alive likened the event to the detonation of an atomic bomb.
Less than a week later, someone with the screen name “Ken ZZ03” posted an advertisement on an America Online (AOL) bulletin board. Ken had “Naughty Oklahoma T-Shirts” for sale. Among the available slogans were “Visit Oklahoma—it’s a Blast” and “Rack’em, Stack’em, and Pack’em—Oklahoma 1995.” Others were even cruder and more tasteless. To get your T-shirt, said the ad, you should call Ken. The posting gave Ken’s phone number.
The number belonged to Ken Zeran, an artist and filmmaker in Seattle, Washington. Zeran had nothing to do with the posting on AOL. It was a hoax.
Ken Zeran started to receive calls. Angry, insulting calls. Then death threats.
Zeran called AOL and asked them to take down the posting and issue a retraction. An AOL employee promised to take down the original posting, but said retractions were against company policy.
The next day, an anonymous poster with a slightly different screen name offered more T-shirts for sale, with even more offensive slogans.
Call Ken. And by the way—there’s high demand. So if the phone is busy, call back.
Zeran kept calling AOL to ask that the postings be removed and that further postings be prevented. AOL kept promising to close down the accounts and remove the postings, but didn’t. By April 30, Ken was receiving a phone call every two minutes. Ken’s art business depended on that phone number—he couldn’t change it or fail to answer it without losing his livelihood.
About this time, Shannon Fullerton, the host of a morning drive-time radio talk show on KRXO in Oklahoma City, received by email a copy of one of the postings. Usually his show was full of light-hearted foolishness, but after the bombing, Fullerton and his radio partner had devoted several shows to sharing community grief about the Oklahoma City tragedy. Fullerton read Ken’s T-shirt slogans over the air. And he read Ken’s telephone number and told his listeners to call Ken and tell him what they thought of him.
Zeran got even more calls, and more death threats. Fearing for his safety, he obtained police surveillance of his home. Most callers were not interested in hearing what Ken had to say when he answered the phone, but he managed to keep one on the line long enough to learn about the KRXO broadcast. Zeran contacted the radio station. KRXO issued a retraction, after which the number of calls Ken received dropped to fifteen per day. Eventually, a newspaper exposed the hoax. AOL finally removed the postings, after leaving them visible for a week. Ken’s life began to return to normal.
WAS THE RADIO STATION LIABLE? Zeran sued the radio station separately, but failed in that effort as well. Much as he may have suffered, reasoned the court, it wasn’t defamation, because none of the people who called him even knew who Ken Zeran was, so his reputation couldn’t possibly have been damaged when the radio station spoke ill of “Ken”!
Zeran sued AOL, claiming defamation, among other things. By putting up the postings, and leaving them up long after it had been informed that they were false, AOL had damaged him severely.
The decision went against Zeran, and the lower court’s decision held up on appeal. AOL certainly had behaved like a publisher, by communicating the postings in the first place and by choosing not to remove them when informed that they were fraudulent. Unlike the defendant in the Cubby v. CompuServe case, AOL knew exactly what it was publishing. But the Good Samaritan provision of the CDA specifically stated that AOL should not legally be treated as a publisher. AOL had no liability for Zeran’s woes.
Zeran’s only recourse was to identify the actual speaker, the pseudonymous Ken ZZ03 who made the postings. And AOL would not help him do that. Everyone felt sorry for Ken, but the system gave him no help.
The posters could evade responsibility as long as they remained anonymous, as they easily could on the Internet. And Congress had given the ISPs a complete waiver of responsibility for the consequences of false and damaging statements, even when the ISP knew they were false. Had anyone in Congress thought through the implications of the Good Samaritan clause?
Laws of Unintended Consequences
The Good Samaritan provision of the CDA has been the friend of free speech, and a great relief to Internet Service Providers. Yet its application has defied logical connection to the spirit that created it.
Sidney Blumenthal was a Clinton aide whose job it was to dish dirt on the president’s enemies. On August 11, 1997, conservative online columnist Matt Drudge reported, “Sidney Blumenthal has a spousal abuse past that has been effectively covered up.” The White House denied it, and the next day Drudge withdrew the claim. The Blumenthals sued AOL, which had a deal with Drudge. And had deeper pockets—the Blumenthals asked for $630,000,021. AOL was as responsible for the libel as Drudge, claimed the Blumenthals, because AOL could edit what Drudge supplied. AOL could even insist that Drudge delete items AOL did not want posted. The court sided with AOL, and cited the Good Samaritan clause of the CDA. AOL couldn’t be treated like a publisher, so it couldn’t be held liable for Drudge’s falsehoods. Case closed.
Even more strangely, the Good Samaritan clause of the Communications Decency Act has been used to protect an ISP whose chat room was being used to peddle child pornography.
In 1998, Jane and John Doe, a mother and her minor son, sued AOL for harm inflicted on the son. The Does alleged that AOL chat rooms were used to sell pornographic images of the boy made when he was 11 years old. They claimed that in 1997, Richard Lee Russell had lured John and two other boys to engage in sexual activities with each other and with Russell. Russell then used AOL chat rooms to market photographs and videotapes of these sexual encounters.
Jane Doe complained to AOL. Under the terms of its agreement with its users, AOL specifically reserved the right to terminate the service of anyone engaged in such improper activities. And yet AOL did not suspend Russell’s service, or even warn him to stop what he was doing. The Does wanted compensation from AOL for its role in John Doe’s sexual abuse.
The Does lost. Citing the Good Samaritan clause, and the precedent of the Zeran decision, the Florida courts held AOL blameless. Online service providers who knowingly allow child pornography to be marketed on their bulletin boards could not be treated as though they had published ads for kiddie porn.
The Does appealed and lost again. The decision in AOL’s favor was 4-3 at the Florida Supreme Court. Judge Lewis fairly exploded in his dissenting opinion.
The Good Samaritan clause was an attempt to remove disincentives from the development of filtering and blocking technologies, which would assist parents in their efforts to protect children. “[I]t is inconceivable that Congress intended the CDA to shield from potential liability an ISP alleged to have taken absolutely no actions to curtail illicit activities … while profiting from its customer’s continued use of the service.” The law had been transformed into one “which both condones and exonerates a flagrant and reprehensible failure to act by an ISP in the face of … material unquestionably harmful to children.” This made no sense. The sequence of decisions “thrusts Congress into the unlikely position of having enacted legislation that encourages and protects the involvement of ISPs as silent partners in criminal enterprises for profit.”
The problem, as Judge Lewis saw it, was that it wasn’t enough to say that ISPs were not like publishers. They really were more like distributors—as Ken Zeran had tried to argue—and distributors are not entirely without responsibility for what they distribute. A trucker who knows he is carrying child pornography, and is getting a cut of the profits, has some legal liability for his complicity in illegal commerce. His role is not that of a publisher, but it is not nothing either. The Zeran court had created a muddle by using the wrong analogy. Congress had made the muddle possible by saying nothing about the right analogy after saying that publishing was the wrong one.
Can the Internet Be Like a Magazine Store?
After the display provisions of the CDA were ruled unconstitutional in 1997, Congress went back to work to protect America’s children. The Child Online Protection Act (COPA), passed into law in 1998, contained many of the key elements of the CDA, but sought to avoid the CDA’s constitutional problems by narrowing it. It applied only to “commercial” speech, and criminalized knowingly making available to minors “material harmful to minors.” For the purposes of this law, a “minor” was anyone under 17. The statute extended the Miller Test for obscenity to create a definition of material that was not obscene but was “harmful to minors”:
The term “material that is harmful to minors” means any communication … that — (A) the average person, applying contemporary community standards, would find, taking the material as a whole and with respect to minors, is designed to appeal to … the prurient interest; (B) depicts, describes, or represents, in a manner patently offensive with respect to minors, … [a] sexual act, or a lewd exhibition of the genitals or post-pubescent female breast; and (C) taken as a whole, lacks serious literary, artistic, political, or scientific value for minors.
COPA was challenged immediately and never took effect. A federal judge enjoined the government from enforcing it, ruling that it was likely to be unconstitutional. The matter bounced between courts through two presidencies. The case started out as ACLU v. Reno, for a time was known as ACLU v. Ashcroft, and was decided as ACLU v. Gonzales. The judges were uniformly sympathetic to the intent of Congress to protect children from material they should not see. But in March 2007, the ax finally fell on COPA. Judge Lowell A. Reed, Jr., of the U.S. District Court for the Eastern District of Pennsylvania, confirmed that the law went too far in restricting speech.
Part of the problem was with the vague definition of material “harmful to minors.” The prurient interests of a 16-year-old were not the same as those of an 8-year-old; and what had literary value for a teenager might be valueless for a younger child. How would a web site designer know which standard he should use to avoid the risk of imprisonment?
But there was an even more basic problem. COPA was all about keeping away from minors material that would be perfectly legal for adults to have. It put a burden on information distributors to ensure that recipients of such information were of age. COPA provided a “safe harbor” against prosecution for those who in good faith checked the ages of their customers. Congress imagined a magazine store where the clerks wouldn’t sell dirty magazines to children who could not reach the countertop, and might ask for identification of any who appeared to be of borderline age. The law envisioned that something similar would happen in Cyberspace:
It is an affirmative defense to prosecution under this section that the defendant, in good faith, has restricted access by minors to material that is harmful to minors (A) by requiring use of a credit card, debit account, adult access code, or adult personal identification number; (B) by accepting a digital certificate that verifies age; or (C) by any other reasonable measures that are feasible under available technology.
The big problem was that these methods either didn’t work or didn’t even exist. Not every adult has a credit card, and credit card companies don’t want their databases used to check customers’ ages. And if you don’t know what is meant by an “adult personal identification number” or a “digital certificate that verifies age,” don’t feel badly—neither do we. Clauses (B) and (C) were basically a plea from Congress for the industry to come up with some technical magic for determining age at a distance.
In the state of the art, however, computers can’t reliably tell if the party on the other end of a communications link is human or is another computer. For a computer to tell whether a human is over or under the age of 17, even imperfectly, would be very hard indeed. Mischievous 15-year-olds could get around any simple screening system that could be used in the home. The Internet just isn’t like a magazine store.
Even if credit card numbers or personal identification systems could distinguish children from adults, Judge Reed reasoned, such methods would intimidate computer users. Fearful of identity theft or government surveillance, many computer users would refuse interrogation and would not reveal personal identifying information as the price for visiting web sites deemed “harmful to minors.” The vast electronic library would, in practice, fall into disuse and start to close down, just as an ordinary library would become useless if everyone venturing beyond the children’s section had to endure a background check.
Congress’s safe harbor recommendations, concluded Judge Reed, if they worked at all, would limit Internet speech drastically. Information adults had a right to see would, realistically, become unavailable to them. The filtering technologies noted when the CDA was struck down had improved, so the government could not credibly claim that limiting speech was the only possible approach to protecting children. And even if the free expression concerns were calmed or ignored, and even if everything COPA suggested worked perfectly, plenty of smut would still be available to children. The Internet was borderless, and COPA’s reach ended at the U.S. frontier. COPA couldn’t stop the flood of harmful bits from abroad.
Summing up, Reed quoted the thoughts of Supreme Court Justice Kennedy about a flag-burning case. “The hard fact is that sometimes we must make decisions we do not like. We make them because they are right, right in the sense that the law and the Constitution, as we see them, compel the result.” Much as he was sympathetic to the end of protecting children from harmful communications, Judge Reed concluded, “perhaps we do the minors of this country harm if First Amendment protections, which they will with age inherit fully, are chipped away in the name of their protection.”
Let Your Fingers Do the Stalking
Newsgroups for sharing sexual information and experiences started in the early 1980s. By the mid-90s, there were specialty sites for every orientation and inclination. So when a 28-year-old woman entered an Internet chat room in 1998 to share her sexual fantasies, she was doing nothing out of the ordinary. She longed to be assaulted, she said, and invited men reading her email to make her fantasy a reality. “I want you to break down my door and rape me,” she wrote.
What was unusual was that she gave her name and address—and instructions about how to get past her building’s security system. Over a period of several weeks, nine men took up her invitation and showed up at her door, often in the middle of the night. When she sent them away, she followed up with a further email to the chat room, explaining that her rejections were just part of the fantasy.
In fact, the “woman” sending the emails was Gary Dellapenta, a 50-year-old security guard whose attentions the actual woman had rebuffed. The victim of this terrifying hoax did not even own a computer. Dellapenta was caught because he responded directly to emails sent to entrap him. He was convicted and imprisoned under a recently enacted California anti-“cyberstalking” statute. The case was notable not because the events were unusual, but because it resulted in a prosecution and conviction. Most victims are not so successful in seeking redress. Most states lacked appropriate laws, and most victims could not identify their stalkers. Sometimes the stalker did not even know the victim—but simply found her contact information somewhere in Cyberspace.
Speeches and publications with frightening messages have long received First Amendment protections in the U.S., especially when their subject is political. Only when a message is likely to incite “imminent lawless action” (in the words of a 1969 Supreme Court decision) does speech become illegal— a test rarely met by printed words. This high threshold for government intervention builds on a “clear and present danger” standard explained most eloquently by Justice Louis Brandeis in a 1927 opinion. “Fear of serious injury cannot alone justify suppression of free speech …. No danger flowing from speech can be deemed clear and present, unless the incidence of the evil apprehended is so imminent that it may befall before there is opportunity for full discussion.”
Courts apply the same standard to web sites. An anti-abortion group listed the names, addresses, and license plate numbers of doctors performing abortions on a web site called the “Nuremberg Files.” It suggested stalking the doctors, and updated the site by graying out the names of those who had been wounded and crossing off those who had been murdered. The web site’s creators acknowledged that abortion was legal, and claimed not to be threatening anyone, only collecting dossiers in the hope that the doctors could at some point in the future be held accountable for “crimes against humanity.”
The anti-abortion group was taken to court in a civil action. After a long legal process, the group was found liable for damages because “true threats of violence were made with the intent to intimidate.”
The courts had a very difficult time with the question of whether the Nuremberg Files web site was threatening or not, but there was nothing intrinsic to the mode of publication that complicated that decision. In fact, the same group had issued paper “WANTED” posters, which were equally part of the materials at issue. Reasonable jurists could, and did, come to different conclusions about whether the text on the Nuremberg Files web site met the judicial threshold.
But the situation of Dellapenta’s victim, and other women in similar situations, seemed to be different. The scores being settled at their expense had no political dimensions. There were already laws against stalking and telephone harassment; the Internet was being used to recruit proxy stalkers and harassers. Following the lead of California and other states, Congress passed a federal anti-cyberstalking law.
Like an Annoying Telephone Call?
The “2005 Violence Against Women and Department of Justice Reauthorization Act” (signed into law in early 2006) assigned criminal penalties to anyone who “utilizes any device or software that can be used to originate telecommunications or other types of communications that are transmitted, in whole or in part, by the Internet … without disclosing his identity and with intent to annoy, abuse, threaten, or harass any person….” The clause was little noticed when the Act was passed in the House on a voice vote and in the Senate unanimously.
Civil libertarians again howled, this time about a single word in the legislation. It was fine to outlaw abuse, threats, and harassment by Internet. Those terms had some legal history. Although it was not always easy to tell whether the facts fit the definitions, at least the courts had standards for judging what these words meant.
But “annoy”? People put lots of annoying things on web sites and say lots of annoying things in chat rooms. There is even a web site, annoy.com, devoted to posting annoying political messages anonymously. Could Congress really have intended to ban the use of the Internet to annoy people?
Congress had extended telephone law to the Internet, on the principle that harassing VoIP calls should not receive more protection than harassing landline telephone calls. In using broad language for electronic communications, however, it created another in the series of legal muddles about the aptness of a metaphor.
The Communications Act of 1934 made it a criminal offense for anyone to make “a telephone call, whether or not conversation ensues, without disclosing his identity and with intent to annoy, abuse, threaten, or harass any person at the called number.” In the world of telephones, the ban posed no threat to free speech, because a telephone call is one-to-one communication. If the person you are talking to doesn’t want to listen, your free speech rights are not infringed. The First Amendment gives you no right to be sure anyone in particular hears you. If your phone call is unwelcome, you can easily find another forum in which to be annoying. The CDA, in a clause that was not struck down along with the display provisions, extended the prohibition to faxes and emails—still, basically, person-to-person communications. But harassing VoIP calls were not criminal under the Communications Act. In an effort to capture all telephone-like technologies under the same regulation, the same clause was extended to all forms of electronic communication, including the vast “electronic library” and “most participatory form of mass speech” that is the Internet.
Defenders of the law assured alarmed bloggers that “annoying” sites would not be prosecuted unless they also were personally threatening, abusive, or harassing. This was an anti-cyberstalking provision, they argued, not a censorship law. Speech protected by the First Amendment would certainly be safe. Online publishers, on the other hand, were reluctant to trust prosecutors’ judgment about where the broadly written statute would be applied. And based on the bizarre and unexpected uses to which the CDA’s Good Samaritan provisions had been put, there was little reason for confidence that the legislative context for the law would restrict its application to one corner of Cyberspace.
The law was challenged by The Suggestion Box, which describes itself as helping people send anonymous emails for reasons such as to “report sensitive information to the media” and to “send crime tips to law enforcement agencies anonymously.” The law, as the complaint argued, might criminalize the sort of employee whistle-blowing that Congress encouraged in the aftermath of scandals about corporate accounting practices. The Suggestion Box dropped its challenge when the Government stated that mere annoyances would not be prosecuted, only communications meant “to instill fear in the victim.” So the law is in force, with many left wishing that Congress would be more precise with its language!
Which brings us to the present. The “annoyance” clause of the Violence Against Women Act stands, but only because the Government says that it doesn’t mean what it says. DOPA, with which this chapter began, remains stuck in Congress. Like the CDA and COPA, DOPA has worthy goals. The measures it proposes would, however, probably do more harm than good. In requiring libraries to monitor the computer use of children using sites such as MySpace, it would likely make those sites inaccessible through public libraries, while having little impact on child predators. The congressional sponsors have succumbed to a well-intentioned but misguided urge to control a social problem by restricting the technology that assists it.
Digital Protection, Digital Censorship—and Self-Censorship
The First Amendment’s ban on government censorship complicates government efforts to protect the safety and security of U.S. citizens. Given a choice between protection from personal harm and some fool’s need to spout profanities, most of us would opt for safety. Security is immediate and freedom is long-term, and most people are short-range thinkers. And most people think of security as a personal thing, and gladly leave it to the government to worry about the survival of the nation.
But in the words of one scholar, the bottom line on the First Amendment is that “in a society pledged to self-government, it is never true that, in the long run, the security of the nation is endangered by the freedom of the people.” The Internet censorship bills have passed Congress by wide margins because members of Congress dare not be on record as voting against the safety of their constituents—and especially against the safety of children. Relatively isolated from political pressure, the courts have repeatedly undone speech-restricting legislation passed by elected officials.
Free speech precedes the other freedoms enumerated in the Bill of Rights, but not just numerically. In a sense, it precedes them logically as well. In the words of Supreme Court Justice Benjamin Cardozo, it is “the matrix, the indispensable condition, of nearly every other form of freedom.”
For most governments, the misgivings about censoring electronic information are less profound.
In Saudi Arabia, you can’t get to www.sex.com. In fact, every web access in Saudi Arabia goes through government computers to make sure the URL isn’t on the government’s blacklist. In Thailand, www.stayinvisible.com is blocked; that’s a source of information about Internet privacy and tools to assist in anonymous web surfing. The disparity of information freedom standards between the U.S. and other countries creates conflicts when electronic transactions involve two nations. As discussed in Chapter 4, China insists that Google not help its citizens get information the government does not want them to have. If you try to get to certain web sites from your hotel room in Shanghai, you suddenly lose your Internet connection, with no explanation. You might think there was a glitch in the network somewhere, except that you can reconnect and visit other sites with no problems.
Self-censorship by Internet companies is also increasing—the price they pay for doing business in certain countries. Thailand and Turkey blocked the video-sharing site YouTube after it carried clips lampooning (and, as those governments saw it, insulting) their current or former rulers. A Google official described censorship as the company’s “No. 1 barrier to trade.” Stirred by the potential costs in lost business and legal battles, Internet companies have become outspoken information libertarians, even as they do what must be done to meet the requirements of foreign governments. Google has even hired a Washington lobbyist to seek help from the U.S. government in its efforts to resist censorship abroad.
INTERNET FREEDOM A great many organizations devote significant effort to maintaining the Internet’s potential as a free marketplace of ideas. In addition to the EFF, mentioned earlier in this chapter, some others include: the Electronic Privacy Information Center, www.epic.org; The Free Expression Network, freeexpression.org, which is actually a coalition; the American Civil Liberties Union, www.aclu.org; and the Chilling Effects Clearinghouse, www.chillingeffects.org. The OpenNet Initiative, opennet.net, monitors Internet censorship around the world. OpenNet’s findings are presented in Access Denied: The Practice and Policy of Global Internet Filtering, by Ronald J. Deibert, John G. Palfrey, Rafal Rohozinski, and Jonathan Zittrain (eds.), MIT Press, 2008.
It is easy for Americans to shrug their isolationist shoulders over such problems. As long as all the information is available in the U.S., one might reason, who cares what version of Google or YouTube runs in totalitarian regimes abroad? That is for those countries to sort out. But the free flow of information into the U.S. is threatened by the laws of other nations about the operation of the press. Consider the case of Joseph Gutnick and Barron’s magazine.
On October 30, 2000, the financial weekly Barron’s published an article suggesting that Australian businessman Joseph Gutnick was involved in money-laundering and tax evasion. Gutnick sued Dow Jones & Co., the publisher of Barron’s, for defamation. The suit was filed in an Australian court. Gutnick maintained that the online edition of the magazine, available in Australia for a fee, was in effect published in Australia. Dow Jones countered that the place of “publication” of the online magazine was New Jersey, where its web servers were located. The suit, it argued, should have been brought in a U.S. court and judged by the standards of U.S. libel law, which are far more favorable to the free speech rights of the press. The Australian court agreed with Gutnick, and the suit went forward. Gutnick ultimately won an apology from Dow Jones and $580,000 in fines and legal costs.
The implications seem staggering. Americans on American soil expect to be able to speak very freely, but the Australian court claimed that the global Internet made Australia’s laws applicable wherever the bits reaching Australian soil may have originated. The Amateur Action conundrum about what community standards apply to the borderless Internet had been translated to the world of global journalism. Will the freedom of the Internet press henceforth be the minimum applying to any of the nations of the earth? Is it possible that a rogue nation could cripple the global Internet press by extorting large sums of money from alleged defamers, or by imposing death sentences on reporters it claimed had insulted their leaders?
The American press tends to fight hard for its right to publish the truth, but the censorship problems reach into Western democracies more insidiously for global corporations not in the news business. It is sometimes easier for American companies to meet the minimum “world” standards of information freedom than to keep different information available in the U.S. There may even be reasons in international law and trade agreements that make such accommodations to censorship more likely. Consider the trials of Yahoo! France.
In May 2000, the League Against Racism and Anti-Semitism (LICRA, in its French acronym) and the Union of French Jewish Students (UEJF) petitioned a French court to order Yahoo! to stop making Nazi paraphernalia available for online auction, stop showing pictures of Nazi memorabilia, and prohibit the dissemination of anti-Semitic hate speech on discussion groups available in France. Pursuant to the laws of France, where the sale and display of Nazi items is illegal, the court concluded that what Yahoo! was doing was an offense to the “collective memory” of the country and a violation of Article R654 of the Penal Code. It told Yahoo! that the company was a threat to “internal public order” and that it had to make sure no one in France could view such items.
Yahoo! removed the items from the yahoo.fr site ordinarily available in France. LICRA and UEJF then discovered that from within France, they could also get to the American site, yahoo.com, by slightly indirect means. Reaching across the ocean in a manner reminiscent of the Australian court’s defamation action, the French court demanded that the offending items, images, and words be removed from the American web site as well.
Yahoo! resisted for a time, claiming it couldn’t tell where the bits were going—an assertion somewhat lacking in credibility since the company tended to attach French-language advertising to web pages if they were dispatched to locations in France. Eventually, Yahoo! made a drastic revision of its standards for the U.S. site. Hate speech was prohibited under Yahoo’s revised service terms with its users, and most of the Nazi memorabilia disappeared. But Nazi stamps and coins were still available for auction on the U.S. site, as were copies of Mein Kampf. In November 2000, the French court affirmed and extended its order: Mein Kampf could not be offered for sale in France. The fines were adding up.
Yahoo! sought help in U.S. courts. It had committed no crime in the U.S., it stated. French law could not leap the Atlantic and trump U.S. First Amendment protections. Enforcement of the French order would have a chilling effect on speech in the United States. A U.S. district court agreed, and the decision was upheld on appeal by a three-judge panel of the Court of Appeals for the Ninth Circuit (Northern California).
But in 2006, the full 11-member court of appeals reversed the decision and found against Yahoo!. The company had not suffered enough, according to the majority opinion, nor tried long enough to have the French change their minds, for appeal to First Amendment protections to be appropriate. A dissenting opinion spoke plainly about what the court seemed to be doing. “We should not allow a foreign court order,” wrote Judge William Fletcher, “to be used as leverage to quash constitutionally protected speech….”
Such conflicts will be more common in the future, as more bits flow across national borders. The laws, trade agreements, and court decisions of the next few years, many of them regulating the flow of “intellectual property,” will shape the world of the future. It would be a sad irony if information liberty, so stoutly defended for centuries in the U.S., would fall in the twenty-first century to a combination of domestic child protection laws and international money-making opportunities. But as one British commentator said when the photo-hosting site Flickr removed photos to conform with orders from Singapore, Germany, Hong Kong, and Korea, “Libertarianism is all very well when you’re a hacker. But business is business.”
Information freedom on the Internet is a tricky business. Technological changes happen faster than legal changes. When a technology shift alarms the populace, legislators respond with overly broad laws. By the time challenges have worked their way through the courts, another cycle of technology changes has happened, and the slow heartbeat of lawmaking pumps out another poorly drafted statute.
The technology of radio and television has also challenged the legislative process, but in a different way. In the broadcast world, strong commercial forces are arrayed in support of speech-restricting laws that have long since outgrown the technology that gave birth to them. We now turn to those changes in the radio world.
CHAPTER 8
Bits in the Air
Old Metaphors, New Technologies, and Free Speech
Censoring the President
On July 17, 2006, U.S. President George Bush and British Prime Minister Tony Blair were chatting at the G-8 summit in St. Petersburg, Russia. The event was a photo opportunity, but the two leaders did not realize that a microphone was on. They were discussing what the UN might do to quell the conflict between Israel and militant forces in Lebanon. “See the irony is,” said Bush, “what they need to do is get Syria to get Hezbollah to stop doing this shit and it’s over.”
The cable network CNN carried the clip in full and posted it on the Web, but most broadcast stations bleeped out the expletive. They were aware of the fines, as much as $325,000, that the Federal Communications Commission might impose for airing the word “shit.”
The FCC had long regulated speech over the public airways, but had raised its decency standards after the 2003 “Golden Globes” awards presentation. Singer Bono had won the “Best Original Song” award. In his acceptance speech, broadcast live on NBC, he said, “This is really, really, fucking brilliant.” The FCC ruled that this remark was “patently offensive under contemporary community standards for the broadcast medium.” It promised to fine and even pull the licenses of stations broadcasting such remarks.
In 2006, the Commission extended the principle from the F-word to the S-word. Nicole Richie, referring to a reality TV show on which she had done some farm work, said to Paris Hilton, “Why do they even call it The Simple Life? Have you ever tried to get cow shit out of a Prada purse? It’s not so fucking simple.” The FCC’s ruling on Richie’s use of the excrement metaphor implied that Bush’s use would be “presumptively profane” in the eyes of the FCC.
A federal court reversed the FCC’s policy against such “fleeting” expletives—an expansion of indecency policies that had been in place for decades. Congress quickly introduced legislation to restore the FCC’s new and strict standard, and the whole matter was to be argued before the U.S. Supreme Court in the spring of 2008. The FCC had adopted its new standards after complaints about broadcast indecency rose from fewer than 50 to about 1.4 million in the period from 2000 to 2004. Congress may have thought that the new speech code reflected a public mandate.
Under the First Amendment, the government is generally not in the speech-restricting business. It can’t force its editorial judgments on newspapers, even to increase the range of information available to readers. The Supreme Court struck down as unconstitutional a Florida law assuring political candidates a simple “right to reply” to newspaper attacks on them.
Nonetheless, in 2006, an agency of the federal government was trying to keep words off television, using rules that “presumptively” covered even a candid conversation about war and peace between leaders of the free world. Dozens of newspapers printed Bush’s remark in full, and anyone with an Internet connection could hear the audio. In spite of the spike in indecency complaints to the FCC, Americans are generally opposed to having the government nanny their television shows.
How Broadcasting Became Regulated
The FCC gained its authority over what is said on radio and TV broadcasts when there were fewer ways to distribute information. The public airways were scarce, went the theory, and the government had to make sure they were used in the public interest. As radio and television became universally accessible, a second rationale emerged for government regulation of broadcast speech. Because the broadcast media have “a uniquely pervasive presence in the lives of all Americans,” as the Supreme Court put it in 1978, the government had a special interest in protecting a defenseless public from objectionable radio and television content.
The explosion in communications technologies has confused both rationales. In the digital era, there are far more ways for bits to reach the consumer, so broadcast radio and television are hardly unique in their pervasiveness. With minimal technology, anyone can sit at home or in Starbucks and choose from among billions of web pages and tens of millions of blogs. Shock jock Howard Stern left broadcast radio for satellite radio, where the FCC has no authority to regulate what he says.
More than 90% of American television viewers get their TV signal through similarly unregulated cable or satellite, not through broadcasts from rooftop antennas. RSS feeds supply up-to-date information to millions of on-the-go cell phone users. Radio stations and television channels are today neither scarce nor uniquely pervasive.
For the government to protect children from all offensive information arriving through any communication medium, its authority would have to be expanded greatly and updated continuously. Indeed, federal legislation has been introduced to do exactly that—to extend FCC indecency regulations for broadcast media to satellite and cable television as well.
The explosion in communications raises another possibility, however. If almost anyone can now send information that many people can receive, per- haps the government’s interest in restricting transmissions should be less than what it once was, not greater. In the absence of scarcity, perhaps the government should have no more authority over what gets said on radio and TV than it does over what gets printed in newspapers. In that case, rather than expanding the FCC’s censorship authority, Congress should eliminate it entirely, just as the Supreme Court ended Florida’s regulation of newspaper content.
Parties who already have spots on the radio dial and the TV channel lineup respond that the spectrum—the public airwaves—should remain a limited resource, requiring government protection. No one is making any more radio spectrum, goes the theory, and it needs to be used in the public interest.
But look around you. There are still only a few stations on the AM and FM radio dials. But thousands, maybe tens of thousands, of radio communications are passing through the air around you. Most Americans walk around with two-way radios in their pockets—devices we call cell phones—and most of the nation’s teenagers seem to be talking on them simultaneously. Radios and television sets could be much, much smarter than they now are and could make better use of the airwaves, just as cell phones do.
Engineering developments have vitiated the government’s override of the First Amendment on radio and television. The Constitution demands, under these changed circumstances, that the government stop its verbal policing.
As a scientific argument, the claim that the spectrum is necessarily scarce is now very weak. Yet that view is still forcefully advanced by the very industry that is being regulated. The incumbent license holders—existing broadcast stations and networks—have an incentive to protect their “turf” in the spectrum against any risk, real or imagined, that their signals might be corrupted. By deterring technological innovation, incumbents can limit competition and avoid capital investments. These oddly intertwined strands—the government’s interest in artificial scarcity to justify speech regulation and the incumbents’ interest in artificial scarcity to limit competition and costs—today impair both cultural and technological creativity, to the detriment of society.
To understand the confluent forces that have created the world of today’s radio and television censorship, we have to go back to the inventors of the technology.
SURVIVING ON WIRELESS A dramatic example of the pervasiveness of wireless networks, in spite of the limits on spectrum where they are allowed to operate, was provided in the aftermath of the destruction of the World Trade Center on September 11, 2001. Lower Manhattan communicated for several days largely on the strength of wireless. Something similar happened after the December 2006 earthquake that severed undersea communications cables in Southeast Asia.
From Wireless Telegraph to Wireless Chaos
Red, orange, yellow, green, blue—the colors of the rainbow—are all different and yet are all the same. Any child with a crayon box knows that they are all different. They are the same because they are all the result of electromagnetic radiation striking our eyes. The radiation travels in waves that oscillate very quickly. The only physical difference between red and blue is that red waves oscillate around 450,000,000,000,000 times per second, and blue waves about 50% faster.
Because the spectrum of visible light is continuous, an infinity of colors exists between red and blue. Mixing light of different frequencies creates other colors—for example, half blue waves and half red creates a shade of pink known as magenta, which does not appear in the rainbow.
In the 1860s, British physicist James Clerk Maxwell realized that light consists of electromagnetic waves. His equations predicted that there might be waves of other frequencies—waves that people couldn’t sense. Indeed, such waves have been passing right through us from the beginning of time. They shower down invisibly from the sun and the stars, and they radiate when lightning strikes. No one suspected they existed until Maxwell’s equations said they should. Indeed, there should be a whole spectrum of invisible waves of different frequencies, all traveling at the same great speed as visible light.

In 1887, the radio era began with a demonstration by Heinrich Hertz. He bent a wire into a circle, leaving a small gap between the two ends. When he set off a big electric spark a few feet away, a tiny spark jumped the gap of the almost-completely-circular wire. The big spark had set off a shower of unseen electromagnetic waves, which had traveled through space and caused electric current to flow in the other wire. The tiny spark was the current completing the circuit. Hertz had created the first antenna, and had revealed the radio waves that struck it.

The unit of frequency is named in his honor: One cycle per second is 1 hertz, or Hz for short. A kHz (kilohertz) is a thousand cycles per second, and a MHz (megahertz) is a million cycles per second. These are the units on the AM and FM radio dials.
Guglielmo Marconi was neither a mathematician nor a scientist. He was an inventive tinkerer. Only 13 years old at the time of Hertz’s experiment, Marconi spent the next decade developing, by trial and error, better ways of creating bursts of radio waves, and antennas for detecting them over greater distances.
In 1901, Marconi stood in Newfoundland and received a single Morse code letter transmitted from England. On the strength of this success, the Marconi Wireless Telegraph Company was soon enabling ships to communicate with each other and with the shore. When the Titanic left on its fateful voyage in 1912, it was equipped with Marconi equipment. The main job of the ship’s radio operators was to relay personal messages to and from passengers, but they also received at least 20 warnings from other ships about the icebergs that lay ahead.
The words “Wireless Telegraph” in the name of Marconi’s company suggest the greatest limitation of early radio. The technology was conceived as a device for point-to-point communication. Radio solved the worst problem of telegraphy. No calamity, sabotage, or war could stop wireless transmissions by severing cables. But there was a compensating disadvantage: Anyone could listen in. The enormous power of broadcasting to reach thousands of people at once was at first seen as a liability. Who would pay to send a message to another person when anyone could hear it?
As wireless telegraphy became popular, another problem emerged—one that has shaped the development of radio and television ever since. If several people were transmitting simultaneously in the same geographic area, their signals couldn’t be kept apart. The Titanic disaster demonstrated the confusion that could result. The morning after the ship hit the iceberg, American newspapers reported excitedly that all passengers had been saved and the ship was being towed to shore. The mistake resulted from a radio operator’s garbled merger of two unrelated segments of Morse code. One ship inquired if “all Titanic passengers safe?” A completely different ship reported that it was “300 miles west of the Titanic and towing an oil tank to Halifax.” All the ships had radios and radio operators. But there were no rules or conventions about whether, how, or when to use them.
Listeners to Marconi’s early transmitters were easily confused because they had no way to “tune in” a particular communication. For all of Marconi’s genius in extending the range of transmission, he was using essentially Hertz’s method for generating radio waves: big sparks. The sparks splattered electromagnetic energy across the radio spectrum. The energy could be stopped and started to turn it into dots and dashes, but there was nothing else to control. One radio operator’s noise was like any other’s. When several transmitted simultaneously, chaos resulted.
The many colors of visible light look white if all blended together. A color filter lets through some frequencies of visible light but not others. If you look at the world through a red filter, everything is a lighter or darker shade of red, because only the red light comes through. What radio needed was something similar for the radio spectrum: a way to produce radio waves of a single frequency, or at least a narrow range of frequencies, and a receiver that could let through those frequencies and screen out the rest. Indeed, that technology already existed.
In 1907, Lee De Forest patented a key technology for the De Forest Radio Telephone Company—dedicated to sending voice and even music over the radio waves. When he broadcast Enrico Caruso from the Metropolitan Opera House on January 13, 1910, the singing reached ships at sea. Amateurs huddled over receivers in New York and New Jersey. The effect was sensational. Hundreds of amateur broadcasters sprang into action over the next few years, eagerly saying whatever they wanted, and playing whatever music they could, to anyone who happened to be listening.
But with no clear understanding about what frequencies to use, radio communication was a hit-or-miss affair. Even what the New York Times described as the “homeless song waves” of the Caruso broadcast clashed with another station that, “despite all entreaties,” insisted on broadcasting at the identical 350kHz frequency. Some people could “catch the ecstasy” of Caruso’s voice, but others got only some annoying Morse code: “I took a beer just now.”
Radio Waves in Their Channels
The emerging radio industry could not grow under such conditions. Commercial interests complemented the concerns of the U.S. Navy about amateur interference with its ship communications. The Titanic disaster, although it owed little to the failures of radio, catalyzed government action. On May 12, 1912, William Alden Smith called for radio regulation on the floor of the U.S. Senate. “When the world weeps together over a common loss…,” proclaimed the Senator, “why should not the nations clear the sea of its conflicting idioms and wisely regulate this new servant of humanity?”
HIGH FREQUENCIES Over the years, technological improvements have made it possible to use higher and higher frequencies. Early TV was broadcast at what were then considered “Very High Frequencies” (VHF) because they were higher than AM radio. Technology improved again, and more stations appeared at “Ultra High Frequencies” (UHF). The highest frequency in commercial use today is 77GHz—77 gigahertz, that is, 77,000MHz. In general, high-frequency signals fade with distance more than low-frequency signals, and are therefore mainly useful for localized or urban environments. Short waves correspond to high frequencies because all radio waves travel at the same speed, which is the speed of light.
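The sidebar’s point that short waves correspond to high frequencies follows from the relation wavelength = speed of light ÷ frequency. A small sketch makes the arithmetic concrete (this calculation is our own illustration, not part of the text; the function name is invented):

```python
# Wavelength-frequency relation: lambda = c / f.
# Illustrative only -- the numbers below use frequencies mentioned in
# the chapter (1000 kHz AM broadcasting, 77 GHz commercial use).

C = 299_792_458  # speed of light, in meters per second

def wavelength_m(frequency_hz):
    """Return the wavelength in meters for a wave of the given frequency in Hz."""
    return C / frequency_hz

# AM broadcasting near 1000 kHz uses waves about 300 meters long...
print(round(wavelength_m(1_000_000)))        # 300 (meters)

# ...while a 77 GHz signal has a wavelength of only a few millimeters,
# which is why such high frequencies suit short-range, localized use.
print(round(wavelength_m(77e9) * 1000, 1))   # 3.9 (millimeters)
```

The same relation covers visible light: red light at roughly 450 trillion Hz works out to a wavelength under a millionth of a meter.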
The Radio Act of 1912 limited broadcasting to license holders. Radio licenses were to be “granted by the Secretary of Commerce and Labor upon application therefor.” In granting the license, the Secretary would stipulate the frequencies “authorized for use by the station for the prevention of interference and the hours for which the station is licensed for work.” The Act reserved for government use the choice frequencies between about 200 and 500kHz, which permitted the clearest communications over long distances. Amateurs were pushed off to “short wave” frequencies above 1500kHz, considered useless for technological reasons. The frequency 1000kHz was reserved for distress calls, and licensed stations were required to listen to it every 15 minutes (the one provision that might have helped the Titanic, since the radio operators of a nearby ship had gone off-duty and missed the Titanic’s rescue pleas). The rest of the spectrum the Secretary could assign to commercial radio stations and private businesses. Emphasizing the nature of radio as “wireless telegraphy,” the Act made it a crime for anyone hearing a radio message to divulge it to anyone except its intended recipient.
Much has changed since 1912. The uses of radio waves have become more varied, the allocation of spectrum blocks has changed, and the range of usable frequencies has grown. The current spectrum allocation picture has grown into a dense, disorganized quilt, the product of decades of Solomonic FCC judgments (see Figure 8.1). But still, the U.S. government stipulates what parts of the spectrum can be used for what purposes. It prevents users from interfering with each other and with government communications by demanding that they broadcast at limited power and only at their assigned frequencies. As long as there weren’t many radio stations, the implied promise in the Act of 1912 that licenses would be granted “upon application therefor” caused no problems. With the gossip of the pesky amateurs pushed into remote radio territory, there was plenty of spectrum for commercial, military, and safety use.
Within a decade, that picture had changed dramatically. On November 2, 1920, a Detroit station broadcast the election of Warren Harding as President of the United States, relaying to its tiny radio audience the returns it was receiving by telegraph. Radio was no longer just point-to-point communication. A year later, a New York station broadcast the World Series between the Giants and the Yankees, pitch by pitch. Sports broadcasting was born with a broadcaster drearily repeating the ball and strike information telephoned by a newspaper reporter at the ballpark.
Public understanding of the possibilities grew rapidly. The first five radio stations were licensed for broadcasting in 1921. Within a year, there were 670. The number of radio receivers jumped in a year from less than 50,000 to more than 600,000, perhaps a million. Stations using the same frequency in the same city divided up the hours of the day. As radio broadcasting became a profitable business, the growth could not go on forever.
On November 12, 1921, the New York City broadcast license of Intercity Radio Co. expired. Herbert Hoover, then the Secretary of Commerce, refused to renew it, on the grounds that there was no frequency on which Intercity could broadcast in the city’s airspace without interfering with government or other private stations. Intercity sued Hoover to have its license restored, and won. Hoover, said the court, could choose the frequency, but he had no discretion to deny the license. As the congressional committee proposing the 1912 Radio Act had put it, the licensing system was “substantially the same as that in use for the documenting upward of 25,000 merchant vessels.” The implied metaphor was that Hoover should keep track of the stations like ships in the ocean. He could tell them what shipping lanes to use, but he couldn’t keep them out of the water.
The radio industry begged for order. Hoover convened a National Radio Conference in 1922 in an attempt to achieve consensus on new regulations before chaos set in. The spectrum was “a great national asset,” he said, and “it becomes of primary public interest to say who is to do the broadcasting, under what circumstances, and with what type of material.” “[T]he large mass of subscribers need protection as to the noises which fill their instruments,” and the airwaves need “a policeman” to detect “hogs that are endangering the traffic.”
Hoover divided the spectrum from 550kHz to 1350kHz in 10kHz bands—called “channels,” consistent with the nautical metaphor—to squeeze in more stations. Empty “guard bands” were left on each side of allocated bands because broadcast signals inevitably spread out, reducing the amount of usable spectrum. Persuasion and voluntary compliance helped Hoover limit interference. As stations became established, they found it advantageous to comply with Hoover’s prescriptions. Start-ups had a harder time breaking in. Hoover convinced representatives of a religious group that to warn of the coming apocalypse, they should buy time on existing stations rather than build one of their own. After all, their money would go farther that way—in six months, after the world had ended, they would have no further use for a transmitter. Hoover’s effectiveness made Congress complacent—the system was working well enough without laws.
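The arithmetic of Hoover’s slicing is easy to check with a sketch (our own illustration, not a historical allocation table; the function name is invented): 800 kHz of spectrum cut into 10 kHz slices yields only 80 channels, before any guard bands eat into the total.

```python
# Slice the 550-1350 kHz band into 10 kHz "channels," as described above.
# Purely illustrative; real allocations also lost spectrum to guard bands.

def channel_starts(start_khz=550, end_khz=1350, width_khz=10):
    """Return the lower edge of each channel that fits in the band."""
    return list(range(start_khz, end_khz, width_khz))

channels = channel_starts()
print(len(channels))               # 80 channels fit in the 800 kHz band
print(channels[0], channels[-1])   # 550 1340
```

Eighty slots nationwide, shared among hundreds of stations, explains why cities had to divide frequencies by time of day.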
But as the slicing got finer, the troubles got worse. WLW and WMH in Cincinnati broadcast on the same frequency in 1924 until Hoover brokered a deal for three stations to share two frequencies in rotating time slots. Finally, the system broke down. In 1925, Zenith Radio Corporation was granted a license to use 930kHz in Chicago, but only on Thursday nights, only from 10 p.m. to midnight, and only if a Denver station didn’t wish to broadcast then. Without permission, Zenith started broadcasting at 910kHz, a frequency that was more open because it had been ceded by treaty to Canada. Hoover fined Zenith; Zenith challenged Hoover’s authority to regulate frequencies, and won in court. The Secretary then got even worse news from the U.S. Attorney General: The 1912 Act, drafted before broadcasting was even a concept, was so ambiguous that it probably gave Hoover no authority to regulate anything about broadcast radio—frequency, power, or time of day.
Hoover threw up his hands. Anyone could start a station and choose a frequency—there were 600 applications pending—but in doing so, they were “proceeding entirely at their own risk.” The result was the “chaos in the air” that Hoover had predicted. It was worse than before the 1912 Act because so many more transmitters existed and they were so much more powerful. Stations popped up, jumped all over the frequency spectrum in search of open air, and turned up their transmission power to the maximum to drown out competing signals. Radio became virtually useless, especially in cities. Congress finally was forced to act.
The Spectrum Nationalized
The premises of the Radio Act of 1927 are still in force. The spectrum has been treated as a scarce national resource ever since, managed by the government.
The purpose of the Act was to maintain the control of the United States over all the channels of … radio transmission; and to provide for the use of such channels, but not the ownership thereof, by individuals, firms, or corporations, for limited periods of time, under licenses granted by Federal authority…. The public could use the spectrum, under conditions stipulated by the government, but could not own it.
A new authority, the Federal Radio Commission (FRC), made licensing decisions. The public had a qualified expectation that license requests would be granted: The licensing authority, if public convenience, interest, or necessity will be served thereby, … shall grant to any applicant therefor a station license…. The Act recognized that demand for licenses could exceed the supply of spectrum. In case of competition among applicants, the licensing authority shall make such a distribution of licenses, bands of frequency…, periods of time for operation, and of power among the different States and communities as to give fair, efficient, and equitable radio service to each….
THE “RADIO COMMISSION” GROWS In 1934, the FRC’s name was changed to the Federal Communications Commission—the FCC—when telephone and telegraph regulation came under the Commission’s oversight. When a separate chunk of radio spectrum was allocated for television, the FCC assumed authority over video broadcasts as well.
The language about “public convenience, interest, or necessity” echoes Hoover’s 1922 speech about a “national asset” and the “public interest.” It is also no accident that this law was drafted as the Teapot Dome Scandal was cresting. Oil reserves on federal land in Wyoming had been leased to Sinclair Oil in 1923 with the assistance of bribes paid to the Secretary of the Interior. It took several years for Congressional investigations and federal court cases to expose the wrongdoing; the Secretary was eventually imprisoned. By early 1927, the fair use of national resources in the public interest was a major concern in the United States.
With the passage of the Act of 1927, the radio spectrum became federal land. International treaties followed, to limit interference near national borders. But within the U.S., just as Hoover had asked five years earlier, the federal government took control over who would be allowed to broadcast, which radio waves they could use—and even what they could say.
Goat Glands and the First Amendment
The Radio Act of 1927 stipulated that the FRC could not abridge free speech over the radio. Nothing in this Act shall be understood or construed to give the licensing authority the power of censorship…, and no regulation or condition … shall interfere with the right of free speech by means of radio communications. Inevitably, a case would arise exposing the implicit conflict: On the one hand, the Commission had to use a public interest standard when granting and renewing licenses. On the other, it had to avoid censorship. The pivotal case was over the license for KFKB radio, the station of the Kansas goat-gland doctor, John Romulus Brinkley (see Figure 8.2). The wrath brought down on CBS in 2004 for showing a flash of Janet Jackson’s breast— and which the networks feared if they broadcast Saving Private Ryan on Veterans’ Day or President Bush muttering to Tony Blair—descends from the FCC’s action against this classic American charlatan.
Brinkley, born in 1885, became a “doctor” licensed to practice in Kansas by buying a degree from the Eclectic Medical University in Kansas City. He worked briefly as a medic for Swift & Co., the meatpackers. In 1917, he set up his medical practice in Milford, a tiny town about 70 miles west of Topeka. One day, a man came for advice about his failing virility, describing himself as a “flat tire.” Drawing on his memory of goat behavior from his days at the slaughterhouse, Brinkley said, “You wouldn’t have any trouble if you had a pair of those buck glands in you.” “Well, why don’t you put ’em in?” the patient asked. Brinkley did the transplant in a back room, and a business was born. Soon he was performing 50 transplants a month, at $750 per surgery. In time, he discovered that promising sexual performance was even more lucrative than promising fertility.
As a young man, Brinkley had worked at a telegraph office, so he knew the promise of communication technology. In 1923, he opened Kansas’s first radio station, KFKB—“Kansas First, Kansas Best” radio, or sometimes “Kansas Folks Know Best.” The station broadcast a mixture of country music, fundamentalist preaching, and medical advice from Dr. Brinkley himself. Listeners sent in their complaints, and the advice was almost always to buy some of Dr. Brinkley’s mail-order patent medicines. “Here’s one from Tillie,” went a typical segment. “She says she had an operation, had some trouble 10 years ago. I think the operation was unnecessary, and it isn’t very good sense to have an ovary removed with the expectation of motherhood resulting therefrom. My advice to you is to use Women’s Tonic No. 50, 67, and 61. This combination will do for you what you desire if any combination will, after three months persistent use.”
KFKB had a massively powerful transmitter, heard halfway across the Atlantic. In a national poll, it was the most popular station in America—with four times as many votes as the runner-up. Brinkley was receiving 3,000 letters a day and was a sensation throughout the plains states. On a good day, 500 people might show up in Milford. But the American Medical Association—prompted by a competing local radio station—objected to his quackery. The FRC concluded that “public interest, convenience, or necessity” would not be served by renewing the license. Brinkley objected that the cancellation was nothing less than censorship.
An appeals court sided with the FRC in a landmark decision. Censorship, the court explained, was prior restraint, which was not at issue in Brinkley’s case. The FRC had “merely exercised its undoubted right to take note of appellant’s past conduct.” An arguable point—as Albert Gallatin said more than 200 years ago about prior restraint of the press, it was “preposterous to say, that to punish a certain act was not an abridgment of the liberty of doing that act.”
The court used the public land metaphor in justifying the FRC’s action. “[B]ecause the number of available broadcasting frequencies is limited,” wrote the court, “the commission is necessarily called upon to consider the character and quality of the service to be rendered…. Obviously, there is no room in the broadcast band for every business or school of thought.”
“Necessarily” and “obviously.” It is always wise to scrutinize arguments that proclaim loudly how self-evident they are. Justice Felix Frankfurter, in an opinion on a different case in 1943, restated the principle in a form that has often been quoted. “The plight into which radio fell prior to 1927 was attributable to certain basic facts about radio as a means of communication—its facilities are limited; they are not available to all who may wish to use them; the radio spectrum simply is not large enough to accommodate everybody. There is a fixed natural limitation upon the number of stations that can operate without interfering with one another.”
These were facts of the technology of the time. They were true, but they were contingent truths of engineering. They were never universal laws of physics, and are no longer limitations of technology. Because of engineering innovations over the past 20 years, there is no practically significant “natural limitation” on the number of broadcast stations. Arguments from inevitable scarcity can no longer justify U.S. government denials of the use of the airwaves.
The vast regulatory infrastructure, built to rationalize use of the spectrum by much more limited radio technology, has adjusted slowly—as it almost inevitably must: Bureaucracies don’t move as quickly as technological innovators. The FCC tries to anticipate resource needs centrally and far in advance. But technology can cause abrupt changes in supply, and market forces can cause abrupt changes in demand. Central planning works no better for the FCC than it did for the Soviet Union.
Moreover, plenty of stakeholders in old technology are happy to see the rules remain unchanged. Like tenants enjoying leases on public land, incumbent radio license holders have no reason to encourage competing uses of the assets they control. The more money that is at stake, the greater the leverage of the profitable ventures. Radio licenses had value almost from the beginning, and as scarcity increased, so did price. By 1925, a Chicago license was sold for $50,000. As advertising expanded and stations bonded into networks, transactions reached seven figures. After the 1927 Act, disputes between stations had to be settled by litigation, trips to Washington, and pressure by friendly Congressional representatives—all more feasible for stations with deep pockets. At first, there were many university stations, but the FRC squeezed them as the value of the airwaves went up. As non-profits, these stations could not hold their ground. Eventually, most educational stations sold out to commercial broadcasters. De facto, as one historian put it, “while talking in terms of the public interest, … the commission actually chose to further the ends of the commercial broadcasters.”
The Path to Spectrum Deregulation
When you push a button on your key fob and unlock your car doors, you are a radio broadcaster. The signal from the key fob uses a bit of the spectrum. The key fob signal obeys the same basic physical laws as WBZ’s radio broadcasts in Boston, which have been going on continuously since WBZ became the first Eastern commercial station in 1921. But the new radio broadcasts are different in two critical respects. There are hundreds of millions of them going on every day. And while WBZ’s broadcast power is 50,000 watts, a key fob’s is less than .0002 of a watt.
If the government still had to license every radio transmitter—as Congress authorized in the aftermath of the radio chaos of the 1920s—neither radio key fobs nor any of hundreds of other innovative uses of low-power radio could have come about. The law and the bureaucracy it created would have snuffed this part of the digital explosion.
Another development also lay behind the wireless explosion. Technology had to change so that the available spectrum could be used more efficiently. Digitalization and miniaturization changed the communications world. The story of cell phones and wireless Internet and many conveniences as yet unimagined is a knot of politics, technology, and law. You can’t understand the knot without understanding the strands, but in the future, the strands need not remain tied up in the same way as they are today.
From a Few Bullhorns to Millions of Whispers
Thirty years ago, there were no cell phones. A handful of business executives had mobile phones, but the devices were bulky and costly. Miniaturization helped change the mobile phone from the perk of a few corporate bigwigs into the birthright of every American teenager. But the main advance was in spectrum allocation—in rethinking the way the radio spectrum was used.
In the era of big, clunky mobile phones, the radio phone company had a big antenna and secured from the FCC the right to use a few frequencies in an urban area. The executive’s phone was a little radio station, which broadcast its call. The mobile phone had to be powerful enough to reach the company’s antenna, wherever in the city the phone might be located. The number of simultaneous calls was limited to the number of frequencies allocated to the company. The technology was the same as broadcast radio stations used, except that the mobile phone radios were two-way. The scarcity of spectrum, still cited today in limiting the number of broadcast channels, then limited the number of mobile phones. Hoover understood this way back in 1922. “Obviously,” he said, “if 10,000,000 telephone subscribers are crying through the air for their mates … the ether will be filled with frantic chaos, with no communication of any kind possible.”
Cellular technology exploits Moore’s Law. Phones have become faster, cheaper, and smaller. Because cell phone towers are only a mile or so apart, cell phones need only be powerful enough to send their signals less than a mile. Once received by an antenna, the signal is sent on to the cell phone company by “wireline”—i.e., by copper or fiber optic cables on poles or underground. There need be only enough radio spectrum to handle the calls within the “cell” surrounding a tower, since the same frequencies can be used simultaneously to handle calls in other cells. A lot of fancy dancing has to be done to prevent a call from being dropped as an active phone is carried from cell to cell, but computers, including the little computers inside cell phones, are smart and fast enough to keep up with such rearrangements.
Cell phone technology illustrates an important change in the use of radio spectrum. Most radio communications are now over short distances. They are transmissions between cell phone towers and cell phones. Between wireless routers at Starbucks and the computers of coffee drinkers. Between cordless telephone handsets and their bases. Between highway toll booths and the transponders mounted on commuters’ windshields. Between key fobs with buttons and the cars they unlock. Between Wii remotes and Wii game machines. Between iPod transmitters plugged into cars’ cigarette lighters and the cars’ FM radios.
Even “satellite radio” transmissions often go from a nearby antenna to a customer’s receiver, not directly from a satellite orbiting in outer space. In urban areas, so many buildings lie between the receiver and the satellite that the radio companies have installed “repeaters”—antennas connected to each other by wireline. When you listen to XM or Sirius in your car driving around a city, the signal is probably coming to you from an antenna a few blocks away.
The radio spectrum is no longer mainly for long-range signaling. Spectrum policies were set when the major use of radio was for ship-to-shore transmissions, SOS signaling from great distances, and broadcasting over huge geographic areas. As the nation has become wired, most radio signals travel only a few feet or a few hundred feet. Under these changed conditions, the old rules for spectrum management don’t make sense.
Can We Just Divide the Property Differently?
Some innovations make better use of the spectrum without changing the fundamental allocation picture shown in Figure 8.1. For example, HD radio squeezes an unrelated low-power digital transmission alongside the analog AM and FM radio channels. (“HD” is a trademark. It doesn’t stand for “high definition.”) On AM HD radio, the HD transmission uses the guard bands on either side of an AM station for entirely different broadcast content (see Figure 8.3). Most AM radios filter out any signal in the channels adjacent to the one to which they are tuned, so the HD transmission is inaudible on an ordinary radio, even as noise. The HD radio broadcast can be heard only on a special radio designed to pick up and decode the digital transmission.
HD radio is a clever invention, and by opening the spectrum to HD broadcasts, the FCC has been able to squeeze in more broadcast stations—at least for those willing to buy special radios. But it doesn’t challenge the fundamental model that has been with us since the 1920s: Split up the spectrum and give a piece to each licensee.
Even parts of the spectrum that are “allocated” to licensees may be drastically underused in practice. A 2002 Federal Communications Commission report puts it this way: “… the shortage of spectrum is often a spectrum access problem. That is, the spectrum resource is available, but its use is compartmented by traditional policies based on traditional technologies.” The Commission came to this conclusion in part by listening to the airwaves in various frequency blocks to test how often nothing at all was being transmitted. Most of the time, even in the dense urban settings of San Diego, Atlanta, and Chicago, important spectrum bands were nearly 100% idle. The public would be better served if others could use the otherwise idle spectrum.
For about ten years, the FCC has experimented with “secondary spectrum marketing.” Someone wanting some spectrum for temporary use may be able to lease it from a party who has a right to use it, but is willing to give it up in exchange for a payment. A university radio station, for example, may need the capacity to broadcast at high power only on a few Saturday afternoons to cover major football games. Perhaps such a station could make a deal with a business station that doesn’t have a lot of use for its piece of the spectrum when the stock markets are closed. As another example, instead of reserving a band exclusively for emergency broadcasts, it could be made available to others, with the understanding—enforced by codes wired into the transmitters—that the frequency would be yielded on demand for public safety broadcasts.
As the example of eBay has shown, computerized auctions can result in very efficient distribution of goods. The use of particular pieces of the spectrum—at particular times, and in particular geographic areas—could be far more efficient if licensees of under-utilized spectrum bands had an incentive to sell some of their time to other parties.
But secondary markets don’t change the basic model—a frequency band belongs to one party at a time. Such auction ideas change the allocation scheme. Rather than having a government agency license spectrum statically to a single party with exclusive rights, several parties can divide it up and make trades. But these schemes retain the fundamental notion that spectrum is like land to be split up among those who want to use it.
Sharing the Spectrum
In his 1943 opinion, Justice Frankfurter used an analogy that unintentionally pointed toward another way of thinking. Spectrum was inevitably scarce, he opined. “Regulation of radio was therefore as vital to its development as traffic control was to the development of the automobile.”
Just as the spectrum is said to be, the roadways are a national asset. They are controlled by federal, state, and local governments, which set rules for their use. You can’t drive too fast. Your vehicle can’t exceed height and weight limits, which may depend on the road.
But everyone shares the roads. There aren’t any special highways reserved for government vehicles. Trucking companies can’t get licenses to use particular roads and keep out their competitors. Everybody shares the capacity of the roads to carry traffic.
The roads are what is known in law as a “commons” (a notion introduced in Chapter 6). The ocean is also a commons, a shared resource subject to international fishing agreements. In theory at least, the ocean need not be a commons. Fishing boats could have exclusive fishing rights in separate sectors of the ocean’s surface. If the regions were large enough, fishermen might be able to earn a good living under these conditions. But such an allocation of the resources of the ocean would be dreadfully inefficient for society as a whole. The oceans better satisfy human needs if they are treated as a commons and fishing boats move with the fish—under agreed limits about the intensity of fishing.
The spectrum can be shared rather than split up into pieces. There is a precedent in electronic communications. The Internet is a digital commons. Everyone’s packets get mixed with everyone else’s on the fiber optics and satellite links of the Internet backbone. The packets are coded. Which packet belongs to whom is sorted out at the ends. Anything confidential can be encrypted.
Something similar can be done with broadcasts—provided there is a basic rethinking of spectrum management. Two ideas are key: first, that using lots of bandwidth need not cause interference and can greatly increase transmission capacity; and second, that putting computers into radio receivers can greatly improve the utilization of the spectrum.
The Most Beautiful Inventor in the World
Spread spectrum was discovered and forgotten several times and in several countries. Corporations (ITT, Sylvania, and Magnavox), universities (especially MIT), and government laboratories doing classified research all shared in giving birth to this key component of modern telecommunications—and were often unaware of each other’s activities.
By far the most remarkable precedent for spread spectrum was a patented invention by Hollywood actress Hedy Lamarr—“the most beautiful woman in the world,” in the words of movie mogul Louis Mayer—and George Antheil, an avant-garde composer known as “the bad boy of music.”
Lamarr made a scandalous name for herself in Europe by appearing nude in 1933, at the age of 19, in a Czech movie, Ecstasy. She became the trophy wife of Fritz Mandl, an Austrian munitions maker whose clients included both Hitler and Mussolini. In 1937, she disguised herself as a maid and escaped Mandl’s house, fleeing first to Paris and then to London. There she met Mayer, who brought her to Hollywood. She became a star—and the iconic beauty of her screen generation (see Figure 8.4).
In 1940, Lamarr arranged to meet Antheil. Her upper torso could use some enhancement, she thought, and she hoped Antheil could give her some advice. Antheil was a self-styled expert on female endocrinology, and had written a series of articles for Esquire magazine with titles such as “The Glandbook for the Questing Male.” Antheil suggested glandular extracts. Their conversation then turned to other matters—specifically, to torpedo warfare.
A torpedo—just a bomb with a propeller—could sink a massive ship. Radio controlled torpedoes had been developed by the end of World War I, but were far from foolproof. An effective countermeasure was to jam the signal controlling the torpedo by broadcasting loud radio noise at the frequency of the control signal. The torpedo would go haywire and likely miss its target. From observing Mandl’s business, Lamarr had learned about torpedoes and why it was hard to control them.
Lamarr had become fiercely pro-American and wished to help the Allied war effort. She conceived the idea of transmitting the torpedo control signal in short bursts at different frequencies. The code for the sequence of frequencies would be held identically within the torpedo and the controlling ship. Because the sequence would be unknown to the enemy, the transmission could not be jammed by flooding the airwaves with noise in any limited frequency band. Too much power would be required to jam all possible frequencies simultaneously.
Antheil’s contribution was to control the frequency-hopping sequence by means of a player piano mechanism—with which he was familiar because he had scored his masterpiece, Ballet Mécanique, for synchronized player pianos. As he and Lamarr conceived the device (it was never built), the signal would therefore hop among 88 frequencies, like the 88 keys on a piano keyboard. The ship and the torpedo would have identical piano rolls—in effect, encrypting the broadcast signal.
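The essence of the scheme—two parties who share a secret sequence hop in lockstep, so a jammer stuck on any one frequency spoils only a small fraction of the bursts—can be sketched in a few lines of Python. This is purely illustrative: the key value and channel numbering are invented for the example, and the actual device used synchronized piano rolls, not software.

```python
import random

NUM_CHANNELS = 88  # like the 88 keys of a piano, as Lamarr and Antheil conceived it

def hop_sequence(shared_key, length):
    """Both ends derive the same channel sequence from a shared secret key."""
    rng = random.Random(shared_key)
    return [rng.randrange(NUM_CHANNELS) for _ in range(length)]

# The ship and the torpedo hold identical "piano rolls" (the shared key).
ship = hop_sequence(shared_key=1941, length=1000)
torpedo = hop_sequence(shared_key=1941, length=1000)
assert ship == torpedo  # they hop in lockstep

# A jammer flooding one fixed channel corrupts only about 1/88 of the bursts.
jammed_channel = 40
hit = sum(1 for ch in ship if ch == jammed_channel)
print(f"bursts jammed: {hit} of {len(ship)}")
```

Without knowing the key, an adversary would have to flood all 88 channels at once—the power problem Lamarr identified.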
The story of Antheil and Lamarr, and the place of their invention in the history of spread spectrum, is told in Spread Spectrum by Rob Walters (Booksurge LLC, 2005).
In 1941, Lamarr and Antheil assigned their patent (see Figure 8.5) to the Navy. A small item on the “Amusements” page of the New York Times quoted an army engineer as describing their invention as so “red hot” that he could not say what it did, except that it was “related to the remote control of apparatus employed in warfare.” Nonetheless, the Navy seems to have done nothing with the invention at the time. Instead, Lamarr went to work selling war bonds. Calling herself “just a plain gold-digger for Uncle Sam,” she sold kisses, and once raised $4.5 million at a single lunch. The patent was ignored for more than a decade. Romuald Ireneus ’Scibor-Marchocki, who was an engineer for a Naval contractor in the mid-1950s, recalls being given a copy when he was put to work on a device for locating enemy submarines. He didn’t recognize the patentee because she had not used her stage name.
And that, in a nutshell, is the strange story of serendipity, teamwork, vanity, and patriotism that led to the Lamarr-Antheil discovery of spread spectrum. The connection of these two to the discovery of spread spectrum was made only in the 1990s. By that time, the influence of their work had become entangled with various lines of classified military research. Whether Hedy Lamarr was more a Leif Erikson than a Christopher Columbus of this new conceptual territory, she was surely the most unlikely of its discoverers. In 1997, the Electronic Frontier Foundation honored her for her discovery; she welcomed the award by saying, “It’s about time.” When asked about her dual achievements, she commented, “Films have a certain place in a certain time period. Technology is forever.”
Channel Capacity
Lamarr and Antheil had stumbled on a particular way of exploiting a broad frequency range—“spreading” signals across the spectrum. The theoretical foundation for spread spectrum was one of the remarkable mathematical results of Claude Shannon in the late 1940s. Although no digital telephones or radios existed at the time, Shannon derived many of the basic laws by which they would have to operate. The Shannon-Hartley Theorem predicted spread spectrum in the same way that Maxwell’s equations predicted radio waves.
Shannon’s result (building on work by Ralph Hartley two decades earlier) implies that “interference” is not the right concept for thinking about how much information can be carried in the radio spectrum. Signals can overlap in frequency and yet be pulled apart perfectly by sufficiently sophisticated radio receivers.
Early engineers assumed that communication errors were inevitable. Send bits down a wire, or through space using radio waves, and some of them would probably arrive incorrectly, because of noise. You could make the channel more reliable by slowing the transmission, they supposed, in the same way that people talk more slowly when they want to be sure that others understand them—but you could never guarantee that a communication was errorless.
Shannon showed that communication channels actually behave quite differently. Any communication channel has a certain channel capacity—a number of bits per second that it can handle. If your Internet connection is advertised as having a bit rate of 3Mbit/sec (3 million bits per second), that number is the channel capacity of the particular connection between you and your Internet Service Provider (or should be—not all advertisements tell the truth). If the connection is over telephone wiring and you switch to a service that runs over fiber optic cables, the channel capacity should increase.
However large it is, the channel capacity has a remarkable property, which Shannon proved: Bits can be transmitted through the channel, from the source to the destination, with negligible probability of error, as long as the transmission rate does not exceed the channel capacity. With sufficient cleverness about the way data from the source is encoded before it is put in the channel, the error rate can be made essentially zero. Only if the data rate exceeds the channel capacity do transmission errors become inevitable: any attempt to push bits through the channel faster than that must result in data loss.
ERRORS AND DELAYS Although transmission errors can be made unlikely, they are never impossible. However, errors can be made far less probable than, for example, the death of the intended recipient in an earthquake that just happens to occur while the bits are on their way (see the Appendix). Guaranteeing correctness requires adding redundant bits to the message—in the same way that fragile postal shipments are protected by adding styrofoam packing material. Attaining data rates close to the “Shannon limit” involves pre-processing the bits. That may increase latency—the time delay between the start of the “packing” process and the insertion of bits into the channel. Latency can be a problem in applications such as voice communication, where delays annoy the communicants. Happily, phone calls don’t require error-free transmission—we are all used to putting up with a little bit of static.
Power, Signal, Noise, and Bandwidth
The capacity of a radio channel depends on the frequencies at which messages are transmitted and the amount of power used to transmit them. It’s helpful to think about these two factors separately.
BANDWIDTH Because channel capacity depends on frequency bandwidth, the term “bandwidth” is used informally to mean “amount of information communicated per second.” But technically, bandwidth is a term about electromagnetic communication, and even then is only one of the factors affecting the capacity to carry bits.
A radio broadcast is never “at” a single frequency. It always uses a range or band of frequencies to convey the actual sounds. The only sound that could be carried at a single, pure frequency would be an unvarying tone. The bandwidth of a broadcast is the size of the frequency band—that is, the difference between the top frequency and the bottom frequency of the band. Hoover, to use this language, allotted 10kHz of bandwidth for each AM station.
If you can transmit so many bits per second with a certain amount of bandwidth, you can transmit twice that many bits per second if you have twice as much bandwidth. The two transmissions could simply go on side by side, not interacting with each other in any way. So, channel capacity is proportional to bandwidth.
The relation to signal power is more surprising. To use simple numbers for clarity, suppose you can transmit one bit, either 0 or 1, in one second. If you could use more power but no more time or bandwidth, how many bits could you transmit?
One way a radio transmission might distinguish between 0 and 1 is for the signals representing these two values to have different signal powers. To continue to oversimplify, assume that zero power represents 0, and a little more power, say 1 watt, represents 1. Then to distinguish a 1 from a 0, the radio receiver has to be sensitive enough to tell the difference between 1 watt and 0 watts. The uncontrollable noise—radio waves arriving from sunspots, for example—also must be weak enough that it does not distort a signal representing 0 so that it is mistaken for a signal representing 1.
Under these conditions, four times as much power would enable transmission of two bits at once, still in one second. Power level 0 could represent 00; 1 watt, 01; 2 watts, 10; and 3 watts could represent 11. Successive power levels have to be separated by at least a watt to be sure that one signal is not confused with another. If the power levels were closer together, the unchanged noise might make them impossible to distinguish reliably. To transmit three bits at a time, you’d need eight times as much power, using levels 0 through 7 watts—that is, the amount of power needed increases exponentially with the number of bits to be transmitted at once (see Figure 8.6).
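The arithmetic of this toy encoding can be checked directly. A short sketch (illustrative only—real radios modulate signals far more subtly than fixed one-watt steps):

```python
def max_power_watts(k_bits):
    """Highest power level needed to send k bits at once, with levels
    spaced 1 watt apart starting at 0 watts (levels 0 .. 2^k - 1)."""
    return 2 ** k_bits - 1

assert max_power_watts(1) == 1  # 0 or 1 watt: one bit at a time
assert max_power_watts(2) == 3  # levels 0..3 watts: two bits at a time
assert max_power_watts(3) == 7  # levels 0..7 watts: three bits at a time
```

Each additional simultaneous bit doubles the power required—the exponential growth the text describes.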
So the Shannon-Hartley result says that channel capacity depends on both bandwidth and signal power, but more bandwidth is exponentially more valuable than more signal power. You’d have to get more than a thousand times more signal power to get the same increase in channel capacity as you could get from having just ten times more bandwidth (because 2¹⁰ = 1024). Bandwidth is precious indeed.
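The theorem itself, which the text describes but does not write out, states that capacity C = B log₂(1 + S/N), where B is the bandwidth and S/N is the ratio of signal power to noise power. A short Python check of the tenfold-bandwidth versus thousandfold-power claim (the 10 kHz figure comes from the text; the signal-to-noise ratio of 1 is an illustrative assumption):

```python
from math import log2

def capacity(bandwidth_hz, snr):
    """Shannon-Hartley channel capacity in bits per second."""
    return bandwidth_hz * log2(1 + snr)

wide = capacity(100_000, 1)           # ten times the bandwidth
# To match that capacity within the original 10 kHz, the
# signal-to-noise ratio must rise from 1 to 2**10 - 1 = 1023:
strong = capacity(10_000, 2 ** 10 - 1)
print(wide, strong)                   # both 100000.0 bits per second
```

A 1023-fold power increase buys exactly what a 10-fold bandwidth increase does, which is the “more than a thousand times” in the text.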
One Man’s Signal Is Another Man’s Noise
The consequences of the Shannon-Hartley result about the value of band- width are quite astonishing. If WBZ were transmitting digitally with its 50,000 watt transmitter, it could transmit the same amount of information (over shorter distances) using less power than a household light bulb—if it could get 100kHz of bandwidth rather than the 10kHz the FCC has allowed it.
Of course, no station could get exclusive use of 100kHz. Even giving each station 10kHz uses up the spectrum too quickly. The spectrum-spreading idea works only if the spectrum is regarded as a commons. And to see the consequences of many signals broadcasting in the same spectrum, one more crucial insight is needed.
The power level that affects the capacity of a radio channel is not actually the signal power, but the ratio of the signal power to the noise power—the so-called signal-to-noise ratio. In other words, you could transmit at the same bit rate with one watt of power as with ten—if you could also reduce the noise by a factor of ten. And “noise” includes other people’s signals. It really doesn’t matter whether the interference is coming from other human broadcasts or from distant stars. All the interfering broadcasts can share the same spectrum band, to the extent they could coexist with the equivalent amount of noise.
A readable account of spread spectrum radio appeared in 1998: “Spread-Spectrum Radio” by David R. Hughes and DeWayne Hendricks (Scientific American, April 1998, 94–96).
A surprising consequence of Shannon-Hartley is that there is some channel capacity even if the noise (including other people’s signals) is stronger than the signal. Think of a noisy party: You can pick out a conversation from the background noise if you focus on a single voice, even if it is fainter than the rest of the noise. But the Shannon-Hartley result predicts even more: The channel can transmit bits flawlessly, if slowly, even if the noise is many times more powerful than the signal. And if you could get a lot of bandwidth, you could drastically reduce the signal power without lowering the bit rate at all (see Figure 8.7). What would seem to be just noise to anyone listening casually on a particular frequency would actually have a useful signal embedded within it.
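The standard Shannon-Hartley formula, C = B log₂(1 + S/N), makes both claims concrete: capacity stays positive for any nonzero signal, and widening the band lets the signal sink far below the noise without losing any bit rate. A small Python sketch (the bandwidth and rate figures here are illustrative, not from the text):

```python
from math import log2

def capacity(bandwidth_hz, snr):
    """Shannon-Hartley channel capacity in bits per second."""
    return bandwidth_hz * log2(1 + snr)

def snr_needed(bandwidth_hz, rate):
    """Minimum signal-to-noise ratio to carry `rate` bits/second."""
    return 2 ** (rate / bandwidth_hz) - 1

# Carrying 10,000 bits/second in a 10 kHz band needs signal power
# equal to the noise power (S/N = 1):
print(snr_needed(10_000, 10_000))     # 1.0
# The same rate spread over 1 MHz needs a signal whose power is
# under 1% of the noise -- a signal "buried" in the noise:
print(snr_needed(1_000_000, 10_000))  # about 0.007
```

This is the quantitative heart of spread spectrum: trade bandwidth for power until the transmission is indistinguishable from background noise to a casual listener.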
The Shannon-Hartley Theorem is a mathematician’s delight—a tease that limits what is possible in theory and gives no advice about how to achieve it in practice. It is like Einstein’s E = mc²—which at once says nothing, and everything, about nuclear reactors and atomic bombs. Hedy Lamarr’s frequency hopping was one of the spread spectrum techniques that would eventually be practical, but other ingenious inventions, known by odd acronyms, would emerge in the late twentieth century.
Two major obstacles stood between the Shannon-Hartley result and usable spread spectrum devices. The first was engineering: computers had to become fast, powerful, and cheap enough to process bits for transmission of high-quality audio and video to consumers. That wouldn’t happen until the 1980s. The other problem was regulatory. Here the problem was not mathematical or scientific. Bureaucracies change more slowly than the technologies they regulate.
Spectrum Deregulated
Today, every Starbucks has WiFi—that is, wireless Internet access. Hotel rooms, college dormitories, and a great many households also have “wireless.” This happened because a tiny piece of the spectrum, a slice less than a millimeter wide in Figure 8.1, was deregulated and released for experimental use by creative engineers. It is an example of how deregulation can stimulate industrial innovation, and of how existing spectrum owners prefer a regulatory climate that maintains their privileged position. It is a story that could be repeated elsewhere in the spectrum, if the government makes wise decisions.
Michael Marcus is an improbable revolutionary. An MIT-trained electrical engineer, he spent three years as an Air Force officer during the Vietnam war, designing communications systems for underground nuclear test detection at a time when the ARPANET—the original, military-sponsored version of the Internet—was first in use. After finishing active duty, he went to work at a Pentagon think tank, exploring potential military uses of emerging communications technologies.
In the summer of 1979, Marcus attended an Army electronic warfare workshop. As was typical at Army events, attendees were seated alphabetically. Marcus’s neighbor was Steve Lukasik, the FCC’s chief scientist. Lukasik had been Director of ARPA during the development of the ARPANET and then an ARPANET visionary at Xerox. He came to the FCC, not generally considered a technologically adventurous agency, because Carter administration officials were toying with the idea that existing federal regulations might be stifling innovation. Lukasik asked Marcus what he thought could stimulate growth in radio communications. Marcus answered, among other things, “spread spectrum.” His engineering was sound, but not his politics. People would not like this idea.
The military’s uses of spread spectrum were little known to civilians, since the Army likes to keep its affairs secret. The FCC prohibited all civil use of spread spectrum, since it would require, in the model the Commission had used for decades, trespassing on spectrum bands of which incumbents had been guaranteed exclusive use. Using lots of bandwidth, even at low power levels, was simply not possible within FCC regulations. Lukasik invited Marcus to join the FCC, to champion the development of spread spectrum and other innovative technologies. That required changing the way the FCC had worked for years.
Shortly after the birth of the Federal Radio Commission, the U.S. plum- meted into the worst depression it had ever experienced. In the 1970s, the FCC was still living with the culture of the 1930s, when national economic policies benevolently reined in free-market capitalism. As a general rule, innovators hate regulation, and incumbent stakeholders love it—when it protects their established interests. In the radio world, where spectrum is a limited, indispensable, government-controlled raw material, this dynamic can be powerfully stifling.
Incumbents, such as existing radio and TV stations and cell phone companies, have spectrum rights granted by the FCC in the past, perhaps decades ago, and renewed almost automatically. Incumbents have no incentive to allow use of “their” spectrum for innovations that may threaten their business. Innovators can’t get started without a guarantee from regulators that they will be granted use of spectrum, since investors won’t fund businesses reliant on resources the government controls and may decide not to provide. Regulators test proposals to relax their rules by inviting public comment, and the parties they hear from most are the incumbents—who have the resources to send teams to lobby against change. Their complaints predict disaster if the rules are relaxed. In fact, their doomsday scenarios are often exaggerated in the hope that the regulators will exclude competition. Eventually, the regulators lose sight of their ultimate responsibility, which is to the public good and not to the good of the incumbents. It is just easier to leave things alone. They can legitimately claim to be responding to what they are being told, however biased that testimony is by the huge costs of travel and lobbying. Regulatory powers meant to prevent electromagnetic interference wind up preventing competition instead.
And then there is the revolving door. Most communications jobs are in the private sector. FCC employees know that their future lies in the commercial use of the spectrum. Hundreds of FCC staff and officials, including all eight past FCC chairmen, have gone to work for or represented the businesses they regulated. These movements from government to private employment violate no government ethics rules. But FCC officials can be faced with a choice between angering a large incumbent that is a potential employer, and disappointing a marginal start-up or a public interest non-profit. It is not surprising that they remember that they will have to earn a living after leaving the FCC.
In 1981, Marcus and his colleagues invited comment on a proposal to allow low-power transmission in broad frequency bands. The incumbents who were using those bands almost universally howled. The FCC beat a retreat and attempted, in order to break the regulatory logjam, to find frequency bands where there could be few complaints about possible interference with other uses. They hit on the idea of deregulating three “garbage bands,” so called because they were used only for “industrial, scientific, and medical” (ISM) purposes. Microwave ovens, for example, cook food by pummeling it with 2.450GHz electromagnetic radiation. There should have been no complaints—microwave ovens were unaffected by “interference” from radio signals, and the telecommunications industry did not use these bands. RCA and GE complained anyway about possible low-power interference, but their objections were determined to be exaggerated. This spectrum band was opened to experimentation in 1985, on the proviso that frequency hopping or a similar technique be used to limit interference.
Marcus did not know what might develop, but engineers were waiting to take advantage of the opportunity. Irwin Jacobs founded QUALCOMM a few months later, and by 1990, the company’s cell phone technology was in widespread use, using a spread spectrum technique called CDMA. A few years later, Apple Computer and other manufacturers agreed with the FCC on standards to use spread spectrum for radio local area networks—“wireless routers,” for which Apple’s trademarked device is called the AirPort. In 1997, when the FCC approved the 802.11 standard and the spectrum bands were finally available for use, the press barely noticed. Within three years, wireless networking was everywhere, and virtually all personal computers now come ready for WiFi.
For his efforts, Marcus was sent into internal exile within the FCC for seven years but emerged in the Clinton era and returned to spectrum policy work. He is now retired and working as a consultant in the private sector.
Michael Marcus’s web site, www.marcus-spectrum.com, has interesting materials, and opinions, about spectrum deregulation and spread spectrum history.
The success of WiFi has opened the door to discussion of more radical spectrum-spreading proposals. The most extreme is UWB—“ultra wide band” radio. UWB returns, in a sense, to Hertz’s sparks, splattering radiation all across the frequencies of the radio spectrum. There are two important differences, however. First, UWB uses extremely low power—feasible because of the very large bandwidth. Power usage is so low that UWB will not interfere with any conventional radio receiver. And second, UWB pulses are extremely short and precisely timed, so that the time between pulses can symbolically encode a transmitted digital message. Even at extremely low power, which would limit the range of UWB transmissions to a few feet, UWB has the potential to carry vast amounts of information in short periods of time. Imagine connecting your high definition TV, cable box, and DVD player without cables. Imagine downloading your library of digital music from your living room audio system to your car while it is parked in your garage. Imagine wireless video phones that work better than wired audio phones. The possibilities are endless, if the process of regulatory relaxation continues.
What Does the Future Hold for Radio?
In the world of radio communications, as everywhere in the digital explosion, time has not stopped. In fact, digital communications have advanced less far than computer movie-making or voice recognition or weather prediction, because only in radio does the weight of federal regulation retard the explosive increase in computational power. The deregulation that is possible has only begun to happen.
What If Radios Were Smart?
Spread spectrum is a way of making better use of the spectrum. Another dramatic possibility comes with the recognition that ordinary radios are extremely stupid by comparison with what is computationally possible today. If taken back in time, today’s radios could receive the broadcasts of 80 years ago, and the AM radios of 80 years ago would work as receivers of today’s broadcasts. To achieve such total “backward compatibility,” a great deal of efficiency must be sacrificed. The reason for such backward compatibility is not that many 80-year-old radios are still in service. It’s that at any moment in time, the incumbents have a strong interest in retaining their market share, and therefore, in lobbying against efforts to make radios “smarter” so more stations can be accommodated.
WHAT DOES “SMART” MEAN? “Intelligent” or “smart” radio goes by various technical names. The two most commonly used terms are “software-defined radio” (SDR) and “cognitive radio.” Software-defined radio refers to radios capable of being reprogrammed to change characteristics usually implemented in hardware today (such as whether they recognize AM, FM, or some other form of modulation). Cognitive radio refers to radios that use artificial intelligence to increase the efficiency of their spectrum utilization.
If radios were intelligent and active, rather than dumb and passive, vastly more information could be made available through the airwaves. Rather than broadcasting at high power so that signals could travel great distances to reach passive receivers, low-power radios could pass signals on to each other. A request for a particular piece of information could be transmitted from radio to radio, and the information could be passed back. The radios could cooperate with each other to increase the information flux received by all of them. Or multiple weak transmitters could occasionally synchronize to produce a single powerful beam for long-range communication.
Such “cooperation gains” are already being exploited in wireless sensor networking. Small, low-power, radio-equipped computers are equipped with sensors for temperature or seismic activity, for example. These devices can be scattered in remote areas with hostile environments, such as the rim of a smoldering volcano, or the Antarctic nesting grounds of endangered penguins. At far lower cost and greater safety than human observers could achieve, the devices can exchange information with their neighbors and eventually pass on a summary to a single high-power transmitter.
There are vast opportunities to use “smart” radios to increase the number of broadcast information options—if the regulatory stranglehold on the industry can be loosened and the incentives for innovation increased.
Radios can become “smarter” in another respect. Even under the “narrowband” model for spectrum allocation, where one signal occupies only a small range of frequencies, cheap computation can make a difference. The very notion that it is the government’s job to prevent “interference,” enshrined in legislation since the 1912 Radio Act, is now anachronistic.
Radio waves don’t really “interfere,” the way people in a crowd interfere with each other’s movements. The waves don’t bounce off each other; they pass right through each other. If two different waves pass through the antenna of a dumb old radio, neither signal can be heard clearly.
To see what might be possible in the future, ask a man and a woman to stand behind you, reading from different books at about the same voice level. If you don’t focus, you will hear an incoherent jumble. But if you concentrate on one of the voices, you can understand it and block out the other. If you shift your focus to the other voice, you can pick that one out. This is possible because your brain performs sophisticated signal processing. It knows something about male and female voices. It knows the English language and tries to match the sounds it is hearing to a lexicon of word-sounds it expects English speakers to say. Radios could do the same thing—if not today, then soon, when computers become a bit more powerful.
But there is a chicken-and-egg cycle. No one will buy a “smart” radio unless there is something to listen to. No one can undertake a new form of broadcasting without raising some capital. No investor will put up money for a project that is dependent on uncertain deregulation decisions by the FCC. Dumb radios and inefficient spectrum use protect the incumbents from competition, so the incumbents lobby against deregulation.
Moreover, the incumbent telecommunications and entertainment industries are among the leading contributors to congressional election campaigns. Members of Congress often pressure the FCC to go against the public interest and in favor of the interests of the existing stakeholders. This problem was apparent even in the 1930s, when an early history of radio regulation stated, “no quasi-judicial body was ever subject to so much congressional pressure as the Federal Radio Commission.” The pattern has not changed.
In other technologies, such as the personal computer industry, there is no such cycle. Anyone who wants to innovate needs to raise money. Investors are inhibited by the quality of the technology and the market’s expected reaction to it—but not by the reactions of federal regulators. Overextended copyright protections have chilled creativity, as was discussed in Chapter 6, but lawmakers are to blame for that problem, not unelected commissioners.
TV, ENTERTAINMENT, AND CONGRESS In the 2006 election campaigns, the TV, movie, and music industries contributed more than $12 million to the re-election campaigns of incumbents, more than the oil and gas industry. The three biggest contributors were Comcast Corp., Time Warner, and the National Cable and Telecommunications Association.
From cell phones to wireless routers to keychain auto locks, wireless innovations are devoured by the public, when they can be brought to market at all. To foster innovation, the regulatory stranglehold needs to be broken throughout the wireless arena, including broadcast technologies. The regulations are now the source of the scarcity that is used to justify the regulations!
But Do We Want the Digital Explosion?
Technologies converge. In 1971, Anthony Oettinger foresaw the line blurring between computing and communications. He called the emerging single technology “compunication.” Today’s computer users don’t even think about the fact that their data is stored thousands of miles away—until their Internet connection fails. Telephones were first connected using copper wires, and television stations first broadcast using electromagnetic waves, but today most telephone calls go through the air and most television signals go through wires.
Laws, regulations, and bureaucracies change much more slowly than the technologies they govern. The FCC still has separate “Wireless” and “Wireline” bureaus. Special speech codes apply to “broadcast” radio and television, although “broadcasting” is an engineering anachronism.
The silo organization of the legal structures inhibits innovation in today’s layered technologies. Regulation of the content layer should not be driven by an outdated understanding of the engineering limits of the physical layer. Investments made in developing the physical layer should not enable the same companies to control the content layer. The public interest is in innovation and efficiency; it is not in the preservation of old technologies and revolving doors between regulators and the incumbents of the regulated industry.
But if the spectrum is freed up—used vastly more efficiently than it now is, and made available for innovative wireless inventions and far more “broadcast” channels—will we like the result?
There are general economic and social benefits from innovations in wireless technology. Garage door openers, Wiis, and toll booth transponders do not save lives, but wireless fire detectors and global positioning systems do. The story of WiFi illustrates how rapidly an unforeseen technology can become an essential piece of both business and personal infrastructure.
But what about television and radio? Would we really be better off with a million channels than we were in the 1950s with 13, or are today with a few hundred on satellite and cable? Won’t this profusion of sources cause a general lowering of content quality, and a societal splintering as de facto authoritative information channels wither? And won’t it become impossible to keep out the smut, which most people don’t want to see, whatever the rights of a few?
As a society, we simply have to confront the reality that our mindset about radio and television is wrong. It has been shaped by decades of the scarcity argument. That argument is now brain-dead, kept breathing on artificial life support by institutions that gain from the speech control it rationalizes. Without the scarcity argument, TV and radio stations become less like private leases on public land, or even shipping lanes, and more like … books.
There will be a period of social readjustment as television becomes more like a library. But the staggering—even frightening—diversity of published literature is not a reason not to have libraries. To be sure, there should be determined efforts to minimize the social cost of getting the huge national investment in old TV sets retired in favor of million-channel TV sets. But we know how to do that sort of thing. There is always a chicken-and-egg problem when a new technology comes along, such as FM radios or personal computers.
When market forces govern what gets aired, we may not be happy with the results, however plentiful. But if what people want is assurance about what they won’t see, then the market will develop channels without dirty words and technologies to lock out the others. The present system stays in place because of the enormous financial and political influence of the incumbents— and because the government likes speech control.
How Much Government Regulation Is Needed?
Certainly, where words end and actions begin, people need government protection. Dr. Brinkley lost his medical license, which was right then, and would be right today.
In the new wireless world, government needs to enforce the rules for spectrum sharing—technologies that can work only if everyone respects power and bandwidth restraints. The government has to ensure that manufactured devices obey the rules, and that rogues don’t violate them. The government also has to help develop and endorse standards for “smart” radios.
It also has the ultimate responsibility for deciding if the dire warnings of incumbents about the risks imposed by new technologies are scientifically valid, and if valid, of sufficiently great social importance to block the advancement of engineering. A typical caution was the one issued in the fall of 2007 by the National Association of Broadcasters as it rolled out a national advertising campaign to block a new technology to locate unused parts of the TV spectrum for Internet service: “While our friends at Intel, Google, and Microsoft may find system errors, computer glitches, and dropped calls tolerable, broadcasters do not.” Scientific questions about interference should be settled by science, not by advertisements or Congressional meddling. We will always need an independent body, like the FCC, to make these judgments rationally and in the public interest.
If all that happens, the scarcity problem will disappear. At that point, government authority over content should—and constitutionally must—drop back to the level it is at for other non-scarce media, such as newspapers and books. Obscenity and libel laws would remain in place for wireless communication as for other media. So would any other lawful restrictions Congress might adopt, perhaps for reasons of national security.
Other regulation of broadcast words and images should end. Its legal foundation survives no longer in the newly engineered world of information. There are too many ways for the information to reach us. We need to take responsibility for what we see, and what our children are allowed to see. And they must be educated to live in a world of information plenty.
There is no reason to re-establish a “Fairness Doctrine,” like that which until 1987 required stations to present multiple points of view. If there were more channels, the government would not have any need, or authority, to second-guess the editorial judgment of broadcasters. Artificial spectrum scarcity has, in the words of Justice William O. Douglas, enabled “administration after administration to toy with TV or radio in order to serve its sordid or its benevolent ends.” Justice Frankfurter’s claim that “there is no room in the broadcast band for every business or school of thought” is now false.
Bits are bits, whether they represent movies, payrolls, expletives, or poems. Bits are bits, whether they are moved as electrons in copper wire, light pulses in glass fiber, or modulations in radio waves. Bits are bits, whether they are stored in gigantic data warehouses, on DVDs sent through the mail, or on flash drives on keychains. The regulation of free speech on broadcast radio and television is but one example of the lingering social effects of historical accidents of technology. There are many others—in telephony, for example. Laws and policies regulating information developed around the technologies in which that information was embodied.
The digital explosion has reduced all information to its lowest common denominator, sequences of 0s and 1s. There are now adapters at all the junctions in the world-wide networks of information. A telephone call, a personal letter, and a television show all reach you through the same mixture of media. The bits are shunted between radio antennas, fiber-optic switching stations, and telephone wiring many times before they reach you.
The universality of bits gives mankind a rare opportunity. We are in a position to decide on an overarching view of information. We can be bound in the future by first principles, not historical contingencies. In the U.S., the digital explosion has blown away much of the technological wrapping obscuring the First Amendment. Knowing that information is just bits, all societies will be faced with stark questions about where information should be open, where it should be controlled, and where it should be banned.