Exploding Data - Michael Chertoff - E-Book

Description

A powerful argument for new laws and policies regarding cyber-security, from the former US Secretary of Homeland Security. The most dangerous threat we—individually and as a society—face today is no longer military, but rather the increasingly pervasive exposure of our personal information; nothing undermines our freedom more than losing control of information about ourselves. And yet, as daily events underscore, we are ever more vulnerable to cyber-attack. In this bracing book, Michael Chertoff makes clear that our laws and policies surrounding the protection of personal information, written for an earlier time, need to be completely overhauled in the Internet era. On the one hand, the collection of data—more widespread by business than by government, and impossible to stop—should be facilitated as an ultimate protection for society. On the other, standards under which information can be inspected, analysed or used must be significantly tightened. In offering his compelling call for action, Chertoff argues that what is at stake is not only the simple loss of privacy, which is almost impossible to protect, but also that of individual autonomy—the ability to make personal choices free of manipulation or coercion. Offering colourful stories over many decades that illuminate the three periods of data gathering we have experienced, Chertoff explains the complex legalities surrounding issues of data collection and dissemination today and charts a forceful new strategy that balances the needs of government, business and individuals alike.




Praise for Exploding Data:

‘When George Orwell wrote 1984, little did he suspect that most of us would willingly carry the tools of our surveillance in our pockets. Michael Chertoff brings his unmatched legal skills and experience to propose tougher restrictions on the use, retention and dissemination of the data that is exploding around us. This important book is a vote for sanity in the midst of chaotic change.’

Joseph S. Nye, Jr., author of The Future of Power

‘Few people — maybe only Michael Chertoff — could write a book like this. It combines his unique experience as Federal prosecutor, judge, assistant attorney general on 9/11 and then Secretary of Homeland Security to describe in layman’s language the ubiquity of “digital exhaust” we leave for others to learn about us and lawfully or unlawfully track us. This must-read book describes the barriers to ‘opting out’ and the need to modernise legal authorities if we are to protect both security and privacy.’

Jane Harman, CEO of the Wilson Center and former member of U.S. House of Representatives Intelligence and Homeland Security committees

‘Exploding Data: What a great title for a book in an age of surveillance, botnets, digital piracy, the internet of things and now Facebook’s loss of personal data. And former Secretary of Homeland Security Mike Chertoff does not disappoint as he introduces readers to the fundamentals of personal, national and global cyber security. Beyond education, his is also a call to action — to restructure laws, policies and practices in the face of technological disruption.’

General Michael V. Hayden, former Director of the National Security Agency, former Director of the Central Intelligence Agency, and author of Playing to the Edge: American Intelligence in the Age of Terror

EXPLODING DATA

ALSO BY MICHAEL CHERTOFF

Homeland Security: Assessing the First Five Years


First published in the United States of America in 2018 by Grove/Atlantic Inc.

First published in Great Britain in 2018 by Grove Press UK, an imprint of Grove/Atlantic Inc.

Copyright © Michael Chertoff, 2018

The moral right of Michael Chertoff to be identified as the author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act of 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of both the copyright owner and the above publisher of the book.

Every effort has been made to trace or contact all copyright-holders. The publishers will be pleased to make good any omissions or rectify any mistakes brought to their attention at the earliest opportunity.

1 3 5 7 9 8 6 4 2

A CIP record for this book is available from the British Library.

Hardback ISBN 978 1 61185 629 3

E-book ISBN 978 1 61185 924 9

Printed in Great Britain

Grove Press, UK

Ormond House

26–27 Boswell Street

London

WC1N 3JZ

www.groveatlantic.com

For Meryl, Emily, and Philip, and for those who serve . . .

CONTENTS

Introduction: Big Data Is Watching You

1 What Is the Internet and How Did It Change Data?

2 How Did Law and Policy Evolve to Address Data 1.0 and 2.0?

3 Data 3.0 and the Challenges of Privacy and Security

4 Reconfiguring Privacy and Security in the Data 3.0 Universe

5 Data 3.0 and Controls on Private Sector Use of Data

6 Data 3.0 and Sovereignty: A Question of Conflict of Laws

7 Cyber Warfare: Deterrence and Response

Conclusion: Meeting the Challenge of Data 3.0: Recommendations for Law and Policy

Acknowledgments

Notes

Further Reading

Index

EXPLODING DATA

INTRODUCTION

BIG DATA IS WATCHING YOU

ON THE MORNING OF SEPTEMBER 11, 2001, while I drove to my Washington, D.C., office as assistant U.S. attorney general in charge of the Department of Justice Criminal Division, my deputy called to tell me that an airplane had crashed into New York City’s World Trade Center. Our initial assumption was that a private-plane pilot had lost his way. But within minutes, the TV news reported a second plane had smashed into the twin towers. That’s when we realized America was under attack.

Within minutes we were at the FBI Strategic Information and Operations Center, working with the FBI director to piece together who was attacking us and—importantly—how to prevent further strikes. As we began to pull together the facts, we learned a third aircraft had crashed into the Pentagon. A fourth plane, United Flight 93, had also been hijacked and was headed to Washington, D.C. The order was relayed to fighter jets to shoot down the plane; that became unnecessary after the passengers heroically stormed the cockpit and forced down the jet in Shanksville, Pennsylvania. America was at war.

Over the next hours and days, we pieced together the identities of the hijackers and concluded that al Qaeda was carrying out its declaration of war against America. Shortly after the attacks, President George W. Bush told the attorney general, “Don’t let this happen again.” That became our mandate.

This war was different from previous conflicts. Our enemies wore no uniforms and flew no flags; they sought to sneak up on us in the guise of ordinary civilians. Their weapons were homemade explosives. They mingled with the flow of travelers. Against this concealed attacker, radar that we relied upon to warn against enemy missiles or bombers was of no use.

How, then, to detect other terrorists and prevent them from carrying out attacks? We quickly concluded the answer lay in collecting large amounts of information about travelers and foreigners, and discerning the connections and behavior that showed links to a terror network. That meant not only reorienting our intelligence agency to focus on detecting the outlines of the terror network, but also obtaining the capability to detect patterns in the vast amounts of data being collected.

This paradigm shift in national security coincided with the expansion of the internet and the growth of commercial enterprises devoted to using data analytics for marketing and credit-scoring purposes. The private sector, infused with the urgency of preventing further attacks, began to develop new strategies to find the terrorist needle in the haystack. At the same time, the intelligence community expanded our data haystacks, using new or repurposed legal authorities (including the USA PATRIOT Act, which I participated in drafting) to accumulate information about the global flow of money, people, and communications. Over the next several years, as head of the Department of Justice’s Criminal Division and later as U.S. secretary of homeland security, I saw the awesome power of expanded data collection and analytics as tools to protect our nation and its people.

Not surprisingly, these new capabilities began to be deployed for other purposes, including commercial objectives. Just as the civilian internet was spawned by a Defense Department research effort, data collection and analytic tools used in counterterrorism were applied for a host of commercial purposes. Because I was witness to a major turning point in the growth of increasingly pervasive surveillance and the revolution in data collection, storage, and analysis—called “big data” by many—I was acutely aware of the power of data collection and analytics to benefit society. I also knew this information-gathering revolution would challenge America’s traditional notions and values in the areas of privacy and liberty.

As time has passed, I have been professionally and personally involved in guiding, prompting, using, and worrying about the ever-expanding harvesting of personal data by both governments and, even more so, the private sector. Perhaps more than most, I understand how much data each of us now generates for collection. That can be beneficial. It can also be very dangerous.

Having spent most of my professional career as a lawyer and as a judge, I am also mindful that our legal rules and policies established how these vast new capabilities would be deployed; yet most of this legal framework was created in the 20th century, when the data landscape was far sparser than it is today. As one who wants to encourage the positive effects of the data revolution, I believe we are overdue to recast the rules of the road so that this revolution preserves, rather than undermines, our fundamental values.

This book is designed to educate the interested citizen about the scope and implications of the revolution in data generation, collection, and analytics. I also lay out a vision to retain the security and economic benefits of these developments without unwittingly sacrificing our privacy, liberty, and civic values. To illustrate how rapidly this change is coming upon us, here are four hypothetical but realistic scenarios—three of which are already possible today.

One: A young New Yorker, Alan, becomes interested in the ideology of radical jihadism. After searching the internet, he happens upon a website managed by recruiters for a terrorist organization in Syria. The terrorists detect Alan’s interest and make contact with him by sending an email to his Internet Protocol (IP) address with instructions on how to anonymize communications by downloading free software. With excitement, he steps into the shadows.

Although Alan follows the jihadis’ instructions, he also begins to discuss his increasing radicalization with friends on Facebook. He posts pictures of himself with a beard and wearing a thobe, the traditional robe worn in many Arab countries. He discusses his developing political views with his friends. He also visits websites that instruct viewers on how to build a bomb using household products and chemicals that can be easily purchased in gardening stores. At one point, Alan goes online to explore travel routes to Syria, although he does not buy a ticket. Ben, a friend of Alan’s who has in fact traveled to Syria, phones him on several occasions to encourage him to come. Alan responds by email in veiled language that he intends to carry out a task in the United States that will be “heavenly.” Alan also visits a local gardening superstore, buying quantities of chemicals greater than would normally be used for hobby gardening in New York City.

Unbeknownst to Alan, intelligence and law enforcement officials monitoring transnational communications, both telephone and internet, have detected his contact with Ben. But these officials do not intercept the content of the two men’s communication in real time. Because Syria is a known terrorist area of operation, the authorities seek permission from a special federal judge to collect as much information as possible about Alan’s communications. As soon as they can, the Feds want to determine whether Alan poses a threat.

Specifically, they want to subpoena metadata—phone and email records with the numbers or IP addresses of Alan’s contacts for the last two years; the Feds also want to obtain records of Alan’s tweets and social media postings, as well as records of his online searches and website visits. The federal agents also subpoena his credit card records.

Examining this data reveals most of Alan’s online and communications activity for many months. At the same time, application of analytic algorithms to this huge cache of data yields an outline of Alan’s evolving extreme views. He has made efforts to travel to Syria, an overseas terrorist hotbed. Alan has made contact with identifiable terrorists and researched bomb-making techniques. His credit card records show the alarming accumulation of chemicals that correlate to the bomb-making instructions on the website Alan visited.

The agents go further. Contacting the NYPD, they obtain several months of footage from video cameras positioned in lower Manhattan’s financial district. Although the volume of this footage is far too great for human eyes to review, video analytic tools with facial-recognition capability quickly identify that in the last two months Alan has been loitering near the Federal Reserve building in New York.

Based on this information, the authorities manufacture a persuasive cover story that permits an undercover agent to befriend Alan by pretending to be a violent extremist. The agent gains Alan’s confidence by expressing views strikingly similar to those Alan has expressed online. Eventually, Alan reveals to him the intent to carry out a bombing at the Federal Reserve. Alan is arrested.

Two: Brian and Kate are shopping for a birthday present for their six-year-old daughter, Ashley. At one store, they encounter Talkie Terry, a doll whose ability to listen and respond to human speech is “so lifelike that your child will have a new friend.” As explained by Omnicorp, the manufacturer, Terry is able to recognize speech and instantly relay it wirelessly to a server housing thousands of potential responses to any request or statement a child makes. Moreover, Terry’s server retains a file on past interactions with each child, so Terry gets to “know” the child—Terry will be able to remind Ashley of past events, make suggestions, and even initiate conversation. Omnicorp touts Terry as a learning tool. Terry will encourage children to learn a language, do chores, and appreciate moral lessons. Even better, parents can link Terry to their smartphones with an app, so they can monitor the child’s activities in the vicinity of Terry, since the doll is never really turned “off.”

Best of all, Terry is inexpensive—not surprising when you realize that the doll’s real value is in the vast amount of data it collects for Omnicorp to use in other business activities, including mail-order retail, financial services, and information brokering.

Indeed, Ashley’s conversation, and all conversation within earshot of Terry’s sensitive and always operational microphone, is not only retained on Omnicorp’s server but also mined by algorithms, revealing a good deal of information about this family’s plans and preferences. When Brian links Terry to his smartphone, it plants a cookie to monitor the websites he visits. And the next version of Terry will be even better, with the capability to emit an ultrahigh-frequency sound wave—inaudible to humans—that links up with other “smart” devices in the household, like a web-enabled television. Terry will let Omnicorp record what the family members watch on television as well as other data about their lives.

All of this is, of course, fully disclosed in the 75-page “terms and conditions of use” consent form that Brian clicked on when he connected Terry to the internet. Brian was far too busy trying to set up the doll for his excited daughter to carefully read the form. And once Brian brought the doll home for Ashley, was he really going to disappoint his daughter by taking it back?

Talkie Terry is modeled on My Friend Cayla, a doll banned by German authorities as an illegal eavesdropping device. Cayla records speech and can be accessed via Bluetooth. Other internet-connected toys include Mattel’s Hello Barbie.1

Three: Carl is a young assistant professor who teaches privacy law, and, of course, he prides himself on vigilantly guarding his own privacy. Carl does not have a personal social media site and is careful about what he tweets. He does not visit websites that require you to download cookies that track online behavior. He uses an encrypted email service and does not authorize the service provider to mine his email for personal data. Carl feels he is prudent about the amount of personal data he allows others to access.

But Carl enjoys modern “smart” technology and unwittingly leaves quite a bit of digital exhaust.2 A typical day begins when his alarm goes off, and its wireless connection to the coffeemaker turns on the brew cycle. Carl checks the Fitbit around his wrist to see how well he slept. That data, along with how many steps he takes today and what his heart rate is, will be continuously uploaded to his smartphone.

For breakfast, Carl makes himself a big meal of bacon and eggs. He uses the smart refrigerator to update his connected shopping list with bacon and a few other supplies; an order is automatically placed with a local grocer to deliver a quantity that approximates the current rate at which he indulges. In his car, Carl buckles up, automatically engaging his GPS and emergency communications link, as well as his internet-based radio. To save money, Carl has also signed up with his insurer for a device that records his driving behavior. The device relays to his insurer the information that Carl tends to abruptly accelerate and decelerate, and that his typical driving route to work takes him on streets with a higher-than-average incidence of traffic accidents.

After a day of teaching, Carl uses his smartphone to add to his week’s grocery list, begun earlier at home on the refrigerator panel. After buying the items on his shopping-list app, he recalls that he is due that evening at a farewell reception for a colleague at a local bar. Using his phone as a navigation device, Carl stops in for a drink, and his colleague snaps some smartphone photos of Carl at the party. These photos automatically upload to the colleague’s social media account.

Soon after, Carl gets into his car and heads home.

That evening, Carl watches the news and a political-satire show on his web-enabled television. The high-definition TV automatically turns on when Carl enters the room, and can “suggest” viewing options based on his viewing preferences. The service provider can also record anytime Carl enters or leaves the room, and even when his attention shifts from the screen to something else in the room. Also, ultrahigh-frequency sound waves emitted by the television3—at a pitch inaudible to humans—automatically pair Carl’s smartphone to his television,4 recording when Carl searches on his phone for an item just advertised on the television.5 New analytic software actually allows the service provider to determine whether Carl likes or dislikes what he is viewing based upon microscopic eye movements picked up by his smartphone that correlate with positive or negative reactions.6

Carl’s health insurer later raises his premium because his eating, drinking, exercise, and sleep patterns could be healthier. The insurer suggests that a change in his diet and more exercise will trigger a special “healthy lifestyle” discount that will lower his rates. This is presented as a positive “nudge” toward reducing illness. His auto insurer also informs Carl that his rates will rise because his driving style is erratic, and he has been linked via social media to drinking establishments just before driving his vehicle. Commercial marketers send Carl advertising material based on preferences established from his stream of digital exhaust. Election-campaign specialists target him with ads based upon a behavioral analysis of Carl’s reaction to news events and political commentary.

All the features in Carl’s story currently exist. Progressive Insurance “rewards” drivers who install a monitoring device in their automobiles. One employer pays a bonus to employees who get seven hours of sleep a night, as recorded on their tracking devices. Eye-tracking technologies are currently being piloted on video systems.7

Four: James’s eyes pop open, prying his thoughts from slumber. Once again, he has woken up at 5:43 a.m. James always does. The monitor never lets him linger in bed. He sometimes wonders what the early-21st-century “snooze function” might have been like. He has never experienced such a thing but has seen it in a few old movies. In modern 2084, the ideas of the previous century have been deemed irrelevant, and most of its media has been destroyed.

James has no such luxury. At the optimal awakening time, the monitor, already aware of his sleep phase, begins playing sounds to generate his awakening. The audible portions are supposed to be relaxing. James has chosen beach waves that remind him of his childhood on Cape Cod. Nearly inaudible portions connect with his subconscious, causing his body to begin waking whether he wants to or not.

Today he gets up quickly. Previously, the monitor’s neural scan of James determined that he had been too slow in pulling out of a deep sleep, so it has increased the amount of subliminal communication. James doesn’t know what it would be like to wake up late; the thought is so foreign to his prescribed daily routine that it occurred to him only after he had seen one of those old movies.

After James showers and makes his way toward the kitchen, the monitor presents him with three healthy breakfast options matching his weight, age, and health history. He is glad that he is still young enough to be allowed bacon, and he chooses a breakfast burrito heavy in kale and infused with egg whites. If he eats more than what is presented, the questioning begins. The same thing happens if James refuses to eat. The last time he attempted to skip breakfast, the monitor had detected his failure to accumulate the necessary caloric intake and, since this information was coupled with the fact that his daily bloodwork showed a rise in his white blood cell count, James was deemed too ill to work and was sent to bed.

Entering his travel pod, he begins his commute to the office.

Upon his arrival, James is greeted by the monitors stationed outside the building: “Welcome, James Jones. The morning meeting begins at 9:00 in conference room B. Six out of eight attendees have arrived and are stationed in the room. Marcos is 2 minutes and 46 seconds away from arrival.”

“Chipped” at birth, James is accustomed to having his location known and available to others. Initially developed as an expensive and optional parental security feature to ensure that rescue would be quick in case of kidnapping or accident, the chips were eventually demanded by everyone. Mass production and government help have made them affordable. Because of their usefulness in convicting criminals, society has come to accept them. Therefore, anyone who wants to find James can do so. As a by-product of the chip, his life’s history can be played out as a simple series of circular patterns that rarely shift. It isn’t as if he consciously thinks about it. His behavior morphed because he just doesn’t want to be part of the interrogation that inevitably comes if he happens to be in the wrong place at the wrong time. Life is easier if his transit patterns match what is expected.

Although he already knows everyone in the conference room, as James enters, his “eyeglass” implants identify each participant by name. Because this technology is relatively new, James still finds it odd to view the world in “assisted mode.” As he scans the room, an indicator showing each person’s name is tagged in his vision. If he desires, James could probe for more information—his colleagues’ education and work background, intelligence score, family members, and even medical history—by accessing the visual internet database.

After first receiving his eyeglass, James had regularly gone back to review meetings from his colleague Amy’s perspective, hoping he might catch how often she had glanced his way. At first, she was stealing quick looks. She stopped doing this when the monitor flagged her viewing patterns as being irrelevant and a waste of corporate resources. James thinks Amy might be interested in him, but it is too hard to find a legitimate reason to reach out to her. Eventually, he gives up.

Crime rates have fallen tremendously. It is too hard to do something illegal when the crime is almost always captured by either an eyeglass or one of the scanning monitors installed as part of every streetlight. Homeowners installed their own scanning monitors when criminals began to target homes without such devices. The monitors proliferated, and on a vast scale. It wasn’t mandated; it was as if the network spread on its own.

The dramatic drop in crime rates is due not only to the increased surveillance but also to the organization’s increased ability to predict bad thoughts, ideas, and ultimately actions. This started as an improvement on archaic lie-detector testing. Eye movements were first mapped to speech. This data was then processed with behavior recorded by the myriad sensors and video cameras throughout the city. From this data, predictive analytics can identify a predisposition to erratic and even dangerous behavior.

Thoughts that cross a high negative threshold are automatically reported to the police.

James shakes himself out of his daydream. He isn’t sure if the authorities can piece together his random thoughts into a coherent stream, but he does not doubt for a second that he is being monitored. Unsure of what thoughts might trigger a report to the police, James finds it simpler to focus only on the task at hand. Friends are a distraction, and he always ends up wondering which one of them is an agent. James wonders about just what is, no longer about what might be.

Each preceding scenario shows the revolutionary impact of universal data collection and analysis on people in the current day or the not-too-distant future. The first three are based on current technologies; the fourth is a dramatization of where they might lead. If the fourth scenario seems far-fetched, consider the combination of things already on the market or in development: facial recognition, automated cars, pervasive closed-circuit TV in many cities, and some companies’ use of bird’s-eye cameras overlooking workstations and voluntary (so far) microchips implanted in employees.8 Of course, the effects of using any one of these devices may be good or bad. Unfortunately, too often government policymakers, judges, and everyday consumers poorly understand the consequences of the big data revolution.

The effects of big data collection are playing out faster today than ever before. Information sharing has allowed new technologies to be created at an ever-faster pace. Technologies designed for security and classified by governments now quickly find their way into everyday consumers’ hands. The commercial drive to enhance marketing tools also drives relentless innovation in the ability to collect and exploit data. Because today’s information and networks have so many connection points, it is harder and harder to prevent information from leaking. Information doesn’t disappear readily—it sticks. Taken together, these features of modern information technology have sped up the spread of ideas and our personal information.

As an unintended by-product, however, growing interconnectivity has dramatically increased threats to our security and privacy. The proliferation of wirelessly connected devices—often mobile—expands the surface area of network entry points through which hackers can penetrate our information and communication networks. By the same token, the centralized collection of our personal data by government and corporations means it is far easier for hackers to steal that data at huge scale. So, consider the following recent cyber data threats: Equifax, the credit agency, loses data pertaining to 143 million Americans; Yahoo has 3 billion users’ accounts compromised; and the U.S. Office of Personnel Management, the government’s human resources agency, has highly sensitive security files relating to over 25 million employees and applicants stolen, perhaps by a foreign nation.

History does show that technological changes bring with them social and normative changes, allowing societies to adapt. So, the development of the automobile led to the adoption of safety requirements and the regulation of traffic patterns. Because in modern democracies people ultimately define the rules that determine or restrict their behavior—the social contract—the rules must adjust to meet the needs of the day. But new technology doesn’t always fit within the existing social construct. Trying to force it into an outdated legal system may even break the system. Eventually people react by demanding fundamental changes to the rules. It falls to elected officials, administrators, and courts to recognize changed circumstances and then reconstruct legal and policy standards.

As the social contract is renegotiated, a return to basic principles and values is necessary. Standing outside the outmoded paradigms and automated legal categories, we must redetermine what our core social and ethical values are. What’s in danger and what needs protection? Often the constitutional principles of liberty, security, freedom of expression and association, and independence must be weighed against each other, possibly with the interests of society balanced against the rights and interests of the individual.

The rise of big data capabilities is often critiqued from the standpoint of loss of privacy. But when technologies collect, catalog, and exploit data—much of which is willingly submitted by people—or when data is collected in open public spaces, then privacy is too narrow a concept to reflect what may be at risk.

What is actually at stake is the freedom to make the personal choices that affect our values and our destiny. A person can be manipulated and coerced in many ways, but the most ominous involve the pressure that comes with constant, ongoing surveillance of our actions. Our parents shape our behavior not only by teaching us as children, but also by the examples they set. They hope to instill strong value systems in their children even as they hope that their children will gain new opportunities, ideas, and experiences to mold them. As we grow older, we have more and more opportunities to choose our own way and explore new ideas.

But that freedom can be undermined when we lose control of information about ourselves—our actions, beliefs, relationships, and even our flaws and mistakes.

Modern analytic tools have the potential to form a detailed picture of almost any individual’s activities. It is extremely difficult today to “opt out” of the data stream. Modern life generates data as a necessary part of the convenient services we enjoy. Information collected today is necessarily broader than what was collected in years past; it lasts longer; and it is put to more uses. But those who collect and aggregate that data gain an increased power to influence and even coerce our behavior, whether through social shaming, financial incentives, or penalties.

Today’s explosion of big data is often justified as promoting healthy lifestyles, convenient marketing, and even easier and more informed political engagement. But ubiquitous surveillance is a classic tool of oppression, epitomized by the ever-watching Big Brother of George Orwell’s 1984. Are we on the verge of inviting this oppressive surveillance into our own lives, albeit in the deceptively benign guise of a “Big Nanny” who watches over us “for our own good”?

But the data explosion raises risks to more than our freedom. The expansion of online networks that are connected to physical systems, and that even control their operation, has dramatically expanded the ability of malign individuals to interfere with the physical world. This affects everything from generating the electricity that powers the grid to the performance of our automobiles. This expansion of network-controlled mechanical systems places an increasing burden on governments, private parties, and ordinary citizens to be able to secure their computers and systems against a surge of attacks from around the world. Traditional rules governing security and liability must adapt to and address these burgeoning threats. And this necessity to protect our world may conflict with the very real concern about the growing collection of our personal information.

One point should be clear. While it is customary to refer to modern big data developments as a result of the internet, that is an oversimplification. These developments were caused by changes in the way we collect, store, transmit, and analyze data, as well as in the interaction between digital transmissions and the operation of control systems that regulate our physical world. As I will describe, a confluence of circumstances drove these changes. Certainly, the creation of the internet was one, driven by the need for a flexible communications system that could survive natural or man-made destruction of the normal methods of communication. Other strides in data collection and analytics are the result of a new national security environment in which threats are no longer only nation-states but also networked enemies who can be detected and thwarted only by monitoring the global movement of people, money, and communications. And even more profoundly, data has become valuable as a tool for targeted marketing and as a means of reducing the cost of executing commercial transactions. In short, the data revolution was powered by, and powered, the transformative expansion of our global economy.

Yet these revolutionary changes in the use of data have far outpaced our legal and policy architecture. We need to establish rules of the road that reconcile the competing demands for security, privacy, autonomy, economic growth, and convenience. But as security expert Bruce Schneier has observed, our legal system can be slow to adapt to technological change.9 As I will outline here, we should not try to fit new technologies into the Procrustean bed of existing outdated legal doctrines. What we need is to go back to basics: setting forth a clear understanding of the values we want to preserve, the challenges the world of big data presents, and how our legal system should evolve to address those challenges.