In this sharp and witty book, long-time Silicon Valley observer and author Andrew Keen argues that, on balance, the Internet has had a disastrous impact on all our lives. By tracing the history of the Internet, from its founding in the 1960s to the creation of the World Wide Web in 1989, through the waves of start-ups and the rise of the big data companies to the increasing attempts to monetize almost every human activity, Keen shows how the Web has had a deeply negative effect on our culture, economy and society. Informed by Keen's own research and interviews, as well as the work of other writers, reporters and academics, The Internet is Not the Answer is an urgent investigation into the tech world - from the threat to privacy posed by social media and online surveillance by government agencies, to the impact of the Internet on unemployment and economic inequality. Keen concludes by outlining the changes that he believes must be made, before it's too late. If we do nothing, he warns, this new technology and the companies that control it will continue to impoverish us all.
THE INTERNET IS NOT THE ANSWER
ALSO BY ANDREW KEEN
The Cult of the Amateur: How Today’s Internet Is Killing Our Culture
Digital Vertigo: How Today’s Online Social Revolution Is Dividing, Diminishing, and Disorienting Us
First published in hardback in the United States of America in 2015 by Atlantic Monthly Press, an imprint of Grove/Atlantic, Inc.
First published in hardback in Great Britain in 2015 by Atlantic Books, an imprint of Atlantic Books Ltd.
Copyright © Andrew Keen, 2015
The moral right of Andrew Keen to be identified as the author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act of 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of both the copyright owner and the above publisher of this book.
1 2 3 4 5 6 7 8 9
A CIP catalogue record for this book is available from the British Library.
Hardback ISBN: 978 1 78239 340 5
Trade Paperback ISBN: 978 1 78239 341 2
E-book ISBN: 978 1 78239 342 9
Paperback ISBN: 978 1 78239 343 6
Printed in Great Britain
Atlantic Books
An Imprint of Atlantic Books Ltd
Ormond House
26–27 Boswell Street
London WC1N 3JZ
www.atlantic-books.co.uk
In Memory of V Falber & Sons
CONTENTS
Preface: The Question
Introduction: The Building Is the Message
1. The Network
2. The Money
3. The Broken Center
4. The Personal Revolution
5. The Catastrophe of Abundance
6. The One Percent Economy
7. Crystal Man
8. Epic Fail
Conclusion: The Answer
Acknowledgments
Notes
PREFACE
THE QUESTION
The Internet, we’ve been promised by its many evangelists, is the answer. It democratizes the good and disrupts the bad, they say, thereby creating a more open and egalitarian world. The more people who join the Internet, or so these evangelists, including Silicon Valley billionaires, social media marketers, and network idealists, tell us, the more value it brings to both society and its users. They thus present the Internet as a magically virtuous circle, an infinitely positive loop, an economic and cultural win-win for its billions of users.
But today, as the Internet expands to connect almost everyone and everything on the planet, it’s becoming self-evident that this is a false promise. The evangelists are presenting us with what in Silicon Valley is called a “reality distortion field”—a vision that is anything but truthful. Instead of a win-win, the Internet is, in fact, more akin to a negative feedback loop in which we network users are its victims rather than beneficiaries. Rather than the answer, the Internet is actually the central question about our connected twenty-first-century world.
The more we use the contemporary digital network, the less economic value it is bringing to us. Rather than promoting economic fairness, it is a central reason for the growing gulf between rich and poor and the hollowing out of the middle class. Rather than making us wealthier, the distributed capitalism of the new networked economy is making most of us poorer. Rather than generating more jobs, this digital disruption is a principal cause of our structural unemployment crisis. Rather than creating more competition, it has created immensely powerful new monopolists like Google and Amazon.
Its cultural ramifications are equally chilling. Rather than creating transparency and openness, the Internet is creating a panopticon of information-gathering and surveillance services in which we, the users of big data networks like Facebook, have been packaged as their all-too-transparent product. Rather than creating more democracy, it is empowering the rule of the mob. Rather than encouraging tolerance, it has unleashed such a distasteful war on women that many no longer feel welcome on the network. Rather than fostering a renaissance, it has created a selfie-centered culture of voyeurism and narcissism. Rather than establishing more diversity, it is massively enriching a tiny group of young white men in black limousines. Rather than making us happy, it’s compounding our rage.
No, the Internet is not the answer. Not yet, anyway. This book, which synthesizes the research of many experts and builds upon the material from my two previous books about the Internet,1 explains why.
THE INTERNET IS NOT THE ANSWER
INTRODUCTION
THE BUILDING IS THE MESSAGE
The writing is on the San Francisco wall. The words WE SHAPE OUR BUILDINGS; THEREAFTER THEY SHAPE US have been engraved onto a black slab of marble beside the front door of a social club called the Battery in downtown San Francisco. These words read like an epigram to the club. They are a reminder, perhaps even a warning to visitors that they will be shaped by the memorable building that they are about to enter.
Lauded by the San Francisco Chronicle as the city’s “newest and biggest social experiment,”1 the Battery certainly is an ambitious project. Formerly the site of an industrial manufacturer of marble-cutting tools called the Musto Steam Marble Mill, the building has been reinvented by its new owners, two successful Internet entrepreneurs named Michael and Xochi Birch. Having sold the popular social media network Bebo to AOL for $850 million in 2008, the Birches acquired the Musto building on Battery Street for $13.5 million a year later and invested “tens of millions of dollars”2 to transform it into a social club. Their goal is to create a people’s club—a twenty-first-century House of Commons that, they promise, “eschews status,”3 allowing its members to wear jeans and hoodies and discouraging membership from stuffy old elites who “wear a business suit to work.”4 It’s an inclusive social experiment that the Birches, borrowing from Silicon Valley’s lexicon of disruption, call an “unclub”—an open and egalitarian place that supposedly breaks all the traditional rules and treats everyone the same, irrespective of their social status or wealth.
“We are fans of the village pub where everyone knows everyone,” bubbled Michael Birch. His friends liken his irrepressible optimism to that of Walt Disney or Willy Wonka. “A private club can be the city’s replacement for the village pub, where you do, over time, get to know everyone and have a sense of emotional belonging.”5
The club “offers privacy” but it isn’t about “the haves and the have-nots,” Xochi Birch added, echoing her husband’s egalitarianism. “We want diversity in every sense. I view it as us trying to curate a community.”6
The Battery is thus imagined by the Birches to be anything but a traditional “gentlemen’s club,” the kind of exclusive establishment to which a twentieth-century aristocrat—a Winston Churchill, for example—might belong. And yet it was Churchill who, urging the rebuilding of the British House of Commons after it had been, as he put it, “blown to smithereens” in May 1941 by bombs dropped from German aircraft, originally said in October 1943 that “we shape our buildings; thereafter they shape us.” And so the words of the Right Honorable Sir Winston Leonard Spencer Churchill, the son of Lord Randolph Churchill and the grandson of the seventh Duke of Marlborough, had become the epigram for this twenty-first-century San Francisco unclub that claims to eschew status and embrace diversity.
Had the Birches been more prescient, they would have engraved a different Winston Churchill quote outside their club. “A lie gets halfway around the world before the truth has a chance to get its pants on,” Churchill’s remix of a Mark Twain witticism,7 perhaps. But that’s the problem. In spite of being toolmakers of our digital future, Michael and Xochi Birch aren’t prescient. And the truth about the Battery—whether or not it has had a chance to get its jeans on—is that the well-meaning but deluded Birches have unintentionally created one of the least diverse and most exclusive places on earth.
The twentieth-century media guru Marshall McLuhan, who, in contrast with the Birches, was distinguished by his prescience, famously said that the “medium is the message.” But on Battery Street in downtown San Francisco, it’s the building that is the message. Rather than an unclub, the Battery is an untruth. It offers a deeply troubling message about the gaping inequalities and injustices of our new networked society.
In spite of its relaxed dress code and self-proclaimed commitment to cultural diversity, the Battery is as opulent as the most marble-encrusted homes of San Francisco’s nineteenth-century gilded elite. All that is left of the old Musto building is the immaculately restored exposed brickwork displayed inside the building and the slab of black marble at the club’s entrance. The 58,000-square-foot, five-story club now boasts a 200-person domestic staff, a 23,000-pound floating steel staircase, a glass elevator, an eight-foot-tall crystal chandelier, restaurants serving dishes like wagyu beef with smoked tofu and hon shimeji mushrooms, a state-of-the-art twenty-person Jacuzzi, a secret poker room hidden behind a bookcase, a 3,000-bottle wine cellar boasting a ceiling constructed from old bottles, a menagerie of taxidermied beasts, and a fourteen-room luxury hotel crowned by a glass-pavilioned penthouse suite with panoramic views of San Francisco Bay.
For the vast majority of San Franciscans who will never have the good fortune of setting foot in the Battery, this social experiment certainly isn’t very social. Instead of a public House of Commons, the Birches are building a privatized House of Lords, a walled pleasure palace for the digital aristocracy, the privileged one percent of our two-tiered networked age. Rather than a village pub, it’s a nonfictional version of the nostalgic British television series Downton Abbey—a place of feudal excess and privilege.
Had Churchill joined the Birches’ social experiment, he certainly would have found himself among some of the world’s richest and best-connected people. The club opened in October 2013 with an exclusive list of founding members that reads like a who’s who of what Vanity Fair calls the “New Establishment,” including the CEO of Instagram, Kevin Systrom; former Facebook president Sean Parker; and the serial Internet entrepreneur Trevor Traina, the owner of the most expensive house in San Francisco, a $35 million mansion on “Billionaire’s Row.”8
It’s all too easy, of course, to ridicule the Birches’ unclub and their failed social experiment in downtown San Francisco. But unfortunately, it isn’t all that funny. “The bigger issue at hand,” as the New Yorker’s Anisse Gross reminds us about the Battery, is that “San Francisco itself is turning into a private, exclusive club”9 for wealthy entrepreneurs and venture capitalists. Like its secret poker room, the Battery is a private, exclusive club within a private, exclusive club. It encapsulates what the New York Times’ Timothy Egan describes as the “dystopia by the Bay”—a San Francisco that is “a one-dimensional town for the 1 percent” and “an allegory of how the rich have changed America for the worse.”10
The Birches’ one-dimensional club is a 58,000-square-foot allegory for the increasingly sharp economic inequities in San Francisco. But there’s an even bigger issue at stake here than the invisible wall in San Francisco separating the few “haves” from the many “have-nots,” including the city’s more than five thousand homeless people. The Battery may be San Francisco’s biggest experiment, but there’s a much bolder social and economic experiment going on in the world outside the club’s tinted windows.
This experiment is the creation of a networked society. “The most significant revolution of the 21st century so far is not political. It is the information technology revolution,” explains the Cambridge University political scientist David Runciman.11 We are on the brink of a foreign land—a data-saturated place that the British writer John Lanchester calls a “new kind of human society.”12 “The single most important trend in the world today is the fact that globalization and the information technology revolution have gone to a whole new level,” adds the New York Times columnist Thomas Friedman. Thanks to cloud computing, robotics, Facebook, Google, LinkedIn, Twitter, the iPad, and cheap Internet-enabled smartphones, Friedman says, “the world has gone from connected to hyper-connected.”13
Runciman, Lanchester, and Friedman are all describing the same great economic, cultural, and, above all, intellectual transformation. “The Internet,” Joi Ito, the director of the MIT Media Lab, notes, “is not a technology; it’s a belief system.”14 Everything and everyone are being connected in a network revolution that is radically disrupting every aspect of today’s world. Education, transportation, health care, finance, retail, and manufacturing are now being reinvented by Internet-based products such as self-driving cars, wearable computing devices, 3-D printers, personal health monitors, massive open online courses (MOOCs), peer-to-peer services like Airbnb and Uber, and currencies like Bitcoin. Revolutionary entrepreneurs like Sean Parker and Kevin Systrom are building this networked society on our behalf. They haven’t asked our permission, of course. But then the idea of consent is foreign, even immoral, to many of these architects of what the Columbia University historian Mark Lilla calls our “libertarian age.”
“The libertarian dogma of our time,” Lilla says, “is turning our polities, economies and cultures upside down.”15 Yes. But the real dogma of our libertarian age lies in glamorizing the turning of things upside down, in rejecting the very idea of “permission,” in establishing a cult of disruption. Alexis Ohanian, the cofounder of Reddit, the self-described “front page of the Internet,” which, in 2013, amassed 56 billion page views from the 40 million pages of unedited content created by 3 million users,16 even wrote a manifesto against permission. In Without Their Permission,17 Ohanian boasts that the twenty-first century will be “made,” not “managed” by entrepreneurs like himself who use the disruptive qualities of the Internet for the benefit of the public good. But like so much of the Internet’s mob-produced, user-generated content, Reddit’s value to this public good is debatable. The site’s most popular series of posts in 2013, for example, concerned its unauthorized misidentification of the Boston Marathon bomber, a public disservice that the Atlantic termed a “misinformation disaster.”18
Like Michael and Xochi Birch’s San Francisco unclub, the Internet is presented to us by naïve entrepreneurs as a diverse, transparent, and egalitarian place—a place that eschews tradition and democratizes social and economic opportunity. This view of the Internet encapsulates what Mark Lilla calls the “new kind of hubris” of our libertarian age, with its trinitarian faith in democracy, the free market, and individualism.19
Such a distorted view of the Internet is common in Silicon Valley, where doing good and becoming rich are seen as indistinguishable and where disruptive companies like Google, Facebook, and Uber are celebrated for their supposedly public-spirited destruction of archaic rules and institutions. Google, for example, still prides itself on being an “uncompany,” a corporation without the traditional structures of power—even though the $400 billion leviathan is, as of June 2014, the world’s second most valuable corporation. It’s active and in some cases brutally powerful in industries as varied as online search, advertising, publishing, artificial intelligence, news, mobile operating systems, wearable computing, Internet browsers, video, and even—with its fledgling self-driving cars—the automobile industry.
In the digital world, everyone wants to be an unbusiness. Amazon, the largest online store in the world and a notorious bully of small publishing companies, still thinks of itself as the scrappy “unstore.” Internet companies like the Amazon-owned shoe store Zappos, and Medium, an online magazine founded by billionaire Twitter founder Ev Williams, are run on so-called holacratic principles—a Silicon Valley version of communism where there are no hierarchies, except, of course, when it comes to wages and stock ownership. Then there are the so-called unconferences of Web publishing magnate Tim O’Reilly—exclusive retreats called the Friends of O’Reilly (FOO) Camp—where nobody is formally in charge and the agenda is set by its carefully curated group of wealthy, young, white, and male technologists. But, like the Birches’ club with its 3,000-bottle wine cellar boasting a ceiling constructed from old bottles, massively powerful and wealthy multinationals like Google and Amazon, and exclusively “open” events for the new elite like FOO Camp, aren’t quite as revolutionary as they’d have us believe. The new wine in Silicon Valley may be digital, but—when it comes to power and wealth—we’ve tasted this kind of blatant hypocrisy many times before in history.
“The future is already here—it’s just not very evenly distributed,” the science fiction writer William Gibson once said. That unevenly distributed future is networked society. In today’s digital experiment, the world is being transformed into a winner-take-all, upstairs-downstairs kind of society. This networked future is characterized by an astonishingly unequal distribution of economic value and power in almost every industry that the Internet is disrupting. According to the sociologist Zeynep Tufekci, this inequality is “one of the biggest shifts in power between people and big institutions, perhaps the biggest one yet of the twenty-first century.”20 Like the Battery, it is marketed in the Birches’ feel-good language of inclusion, transparency, and openness; but, like the five-storied pleasure palace, this new world is actually exclusive, opaque, and inegalitarian. Rather than a “public service,” Silicon Valley’s architects of the future are building a privatized networked economy, a society that is a disservice to almost everyone except its powerful, wealthy owners. Like the Battery, the Internet, with its empty promise of making the world a fairer place with more opportunity for more people, has had the unintended consequence of actually making the world less equal and reducing rather than increasing employment and general economic well-being.
Of course, the Internet is not all bad. It has done a tremendous amount of good for society and for individuals, particularly in terms of connecting families, friends, and work colleagues around the world. As a 2014 Pew Report showed, 90% of Americans think that the Web has been good for them personally—with 76% believing it has been good for society.21 It is true that most of the personal lives of the estimated 3 billion Internet users (more than 40% of the world’s population) have been radically transformed by the incredible convenience of email, social media, e-commerce, and mobile apps. Yes, we all rely on and even love our ever-shrinking and increasingly powerful mobile communications devices. It is true that the Internet has played an important and generally positive role in popular political movements around the world—such as the Occupy movement in the United States, or the network-driven reform movements in Russia, Turkey, Egypt, and Brazil. Yes, the Internet—from Wikipedia to Twitter to Google to the excellent websites of professionally curated newspapers like the New York Times and the Guardian—can, if used critically, be a source of great enlightenment. And I certainly couldn’t have written this book without the miracles of email and the Web. And yes, the mobile Web has enormous potential to radically transform the lives of the two and a half billion new Internet users who, according to the Swedish mobile operator Ericsson, will be on the network by 2018. Indeed, the app economy is already beginning to generate innovative solutions to some of the most pervasive problems on the planet—such as mapping clean water stations in Kenya and providing access to credit for entrepreneurs in India.22
But, as this book will show, the hidden negatives outweigh the self-evident positives and those 76% of Americans who believe that the Internet has been good for society may not be seeing the bigger picture. Take, for example, the issue of network privacy, the most persistently corrosive aspect of the “big data” world that the Internet is inventing. If San Francisco is “dystopia by the Bay,” then the Internet is rapidly becoming dystopia on the network.
“We are fans of the village pub where everyone knows everyone,” Michael Birch says. But our networked society—envisioned by Marshall McLuhan as a “global village” in which we return to the oral tradition of the preliterate age—has already become that claustrophobic village pub, a frighteningly transparent community where there are no longer either secrets or anonymity. Everyone, from the National Security Agency to Silicon Valley data companies, does indeed seem to know everything about us already. Internet companies like Google and Facebook know us particularly well—even more intimately, so they boast, than we know ourselves.
No wonder Xochi Birch offers her privileged, wealthy members “privacy” from the data-infested world outside the Battery. In an “Internet of Everything” shadowed by the constant surveillance of an increasingly intelligent network—in a future of smart cars, smart clothing, smart cities, and smart intelligence networks—I’m afraid that the Battery members may be the only people who will be able to afford to escape living in a brightly lit village where nothing is ever hidden or forgotten and where, as data expert Julia Angwin argues, online privacy is already becoming a “luxury good.”23
Winston Churchill was right. We do indeed shape our buildings and thereafter they have the power to shape us. Marshall McLuhan put it slightly differently, but with even more relevance to our networked age. Riffing off Churchill’s 1943 speech, the Canadian media visionary said that “we shape our tools and thereafter our tools shape us.”24 McLuhan died in 1980, nine years before a young English physicist named Tim Berners-Lee invented the World Wide Web. But McLuhan correctly predicted that electronic communication tools would change things as profoundly as Johannes Gutenberg’s printing press revolutionized the fifteenth-century world. These electronic tools, McLuhan predicted, will replace the top-down, linear technology of industrial society with a distributed electronic network shaped by continuous feedback loops of information. “We become what we behold,”25 he predicted. And these networked tools, McLuhan warned, will rewire us so completely that we might be in danger of becoming their unwitting slave rather than their master.
Today, as the Internet reinvents society, the writing is on the wall for us all. Those words on that black marble slab outside the Battery are a chilling preface to the biggest social and economic experiment of our age. None of us—from university professors, photographers, corporate lawyers, and factory laborers to taxi drivers, fashion designers, hoteliers, musicians, and retailers—is immune to the havoc wreaked by this network upheaval. It changes everything.
The pace of this change in our libertarian age is bewilderingly fast—so fast, indeed, that most of us, while enjoying the Internet’s convenience, remain nervous about this “belief system’s” violent impact on society. “Without their permission,” entrepreneurs like Alexis Ohanian crow about a disruptive economy in which a couple of smart kids in a dorm room can wreck an entire industry employing hundreds of thousands of people. With our permission, I say. As we all step into this brave new digital world, our challenge is to shape our networking tools before they shape us.
CHAPTER ONE
THE NETWORK
Networked Society
The wall was dotted with a constellation of flashing lights linked together by a looping maze of blue, pink, and purple lines. The picture could have been a snapshot of the universe with its kaleidoscope of shining stars joined into a swirl of interlinking galaxies. It was, indeed, a kind of universe. But rather than the celestial firmament, it was a graphical image of our twenty-first-century networked world.
I was in Stockholm, at the global headquarters of Ericsson, the world’s largest provider of mobile networks to Internet service providers (ISPs) and telecoms like AT&T, Deutsche Telekom, and Telefonica. Founded in 1876 when a Swedish engineer named Lars Magnus Ericsson opened a telegraph repair workshop in Stockholm, Ericsson had grown by the end of 2013 to employ 114,340 people, with global revenue of over $35 billion from 180 countries. I’d come to meet with Patrik Cerwall, an Ericsson executive in charge of a research group within the company that analyzes trends of what it calls “networked society.” A team of his researchers had just authored the company’s annual Mobility Report, their overview of the state of the global mobile industry. But as I waited in the lobby of the Ericsson office to talk with Cerwall, it was the chaos of connected nodes on the company’s wall that caught my eye.
The map, created by the Swedish graphic artist Jonas Lindvist, showed Ericsson’s local networks and offices around the world. Lindvist had designed the swirling lines connecting cities to represent what he called a feeling of perpetual movement. “Communication is not linear,” he said in explaining his work to me; “it is coincidental and chaotic.” Every place, it seemed, no matter how remote or distant, was connected. With the exception of a symbolic spot for Stockholm in its center, the map was all edge. It had no heart, no organizing principle, no hierarchy. Towns in countries as geographically disconnected as Panama, Guinea Bissau, Peru, Serbia, Zambia, Estonia, Colombia, Costa Rica, Bahrain, Bulgaria, and Ghana were linked on a map that recognized neither time nor space. Every place, it seemed, was connected to everywhere else. The world had been redrawn as a distributed network.
My meeting with Patrik Cerwall confirmed the astonishing ubiquity of today’s mobile Internet. Each year, his Ericsson team publishes a comprehensive report on the state of mobile networks. In 2013, Cerwall told me, there were 1.7 billion mobile broadband subscriptions sold, with 50% of mobile phones acquired that year being smartphones offering Internet access. By 2018, the Ericsson Mobility Report forecasted, mobile broadband subscriptions are expected to increase to 4.5 billion, with the majority of the two and a half billion new subscribers being from the Middle East, Asia, and Africa.1 Over 60% of the world’s more than 7 billion people will, therefore, be online by 2018. And given the dramatic drop in the cost of smartphones, with prices expected to fall to under fifty dollars for high-quality connected devices,2 and the astonishing statistic from a United Nations report that more people had cell phones (6 billion) than had access to a flushing toilet (4.5 billion),3 it’s not unreasonable to assume that, by the mid-2020s, the vast majority of adults on the planet will have their own powerful pocket computer with access to the network.
And not just everyone, but everything. An Ericsson white paper predicts that, by 2020, there will be 50 billion intelligent devices on the network.4 Homes, cars, roads, offices, consumer products, clothing, health-care devices, electric grids, even those industrial cutting tools once manufactured in the Musto Steam Marble Mill company, will all be connected on what now is being called the Internet of Things. The number of active cellular machine-to-machine devices will grow 3 to 4 times between 2014 and 2019. “The physical world,” a McKinsey report confirms, “is becoming a type of information system.”5
The economics of this networked society are already staggering. Another McKinsey report studying thirteen of the most advanced industrial economies found that $8 trillion is already being spent through e-commerce. If the Internet were an economic sector, this 2011 report notes, it would have accounted for an average of 3.4% of the world’s gross domestic product in 2009, higher than education (3%), agriculture (2.2%), or utilities (2.1%). And in Jonas Lindvist’s Sweden, that number is almost double, with the Internet making up 6.3% of the country’s 2009 GDP.6
If Lindvist’s graphical map had been a truly literal representation of our networked society, it might have resembled a pointillist painting. The image would have been made up of so many billions of dots that, to the naked eye, they would have merged into a single collective whole. Everything that can be connected is being connected and the amount of data being produced online is mind-boggling. Every minute of every day in 2014, for example, the 3 billion Internet users in the world sent 204 million emails, uploaded 72 hours of new YouTube videos, made over 4 million Google searches, shared 2,460,000 pieces of Facebook content, downloaded 48,000 Apple apps, spent $83,000 on Amazon, tweeted 277,000 messages, and posted 216,000 new Instagram photos.7 We used to talk about a “New York minute,” but today’s “Internet minute” in Marshall McLuhan’s global village makes New York City seem like a sleepy village in which barely anything ever happens.
It may be hard to imagine, especially for those so-called digital natives who have grown up taking the Internet’s networking tools for granted, but the world hasn’t always been a data-rich information system. Indeed, three-quarters of a century ago, back in May 1941, when those German bombers blew the British House of Commons to smithereens, nobody and nothing was connected on the network. There weren’t any digital devices able to communicate with one another at all, let alone real-time Twitter or Instagram feeds keeping us in the electronic information loop.
So how did we get from zero to those billions and billions of connected people and things? Where do the origins of the Internet lie?
Forebears
They lie with those Luftwaffe bombers flying at up to 250 miles an hour and at altitudes of over 30,000 feet above London at the beginning of World War II. In 1940, an eccentric Massachusetts Institute of Technology (MIT) professor of mathematics named Norbert Wiener, “the original computer geek,” according to the New York Times,8 began working on a system to track the German aircraft that controlled the skies above London. The son of a Jewish immigrant from Białystok in Poland, Wiener had become so obsessed with lending his scientific knowledge to the war against Germany that he’d been forced to seek psychoanalytical help to control his anti-Nazi fixation.9 Technology could do good, he was convinced. It might even help defeat Hitler.
A math prodigy who graduated from Tufts University at the age of fourteen, received a Harvard doctorate at seventeen, and later studied with Bertrand Russell in Cambridge, Wiener was part of a pioneering group of technologists at MIT that included the electrical engineer and science mandarin Vannevar Bush and the psychologist J. C. R. Licklider. Without quite knowing what they were doing, these men invented many of the key principles of our networked society. What distinguished them, particularly Wiener, was a daring intellectual eclecticism. By defiantly crossing traditional academic disciplines, they were able to imagine and, in some ways, create our connected future.
“From the 1920’s onwards, MIT increasingly attracted the brightest and best of America’s scientists and engineers. In the middle decades of this century, the Institute became a seething cauldron of ideas about information, computing, communications and control,” explains the Internet historian John Naughton. “And when we dip into it seeking the origins of the Net, three names always come up. They are Vannevar Bush, Norbert Wiener and J. C. R. Licklider.”10
In the 1930s, Wiener had been part of the team that worked on Vannevar Bush’s “differential analyser,” a 100-ton electromagnetic analog computer cobbled together out of pulleys, shafts, wheels, and gears and which was designed to solve differential equations. And in 1941 Wiener had even pitched a prototype of a digital computer to Bush, more than five years before the world’s first working digital device, the 1,800-square-foot, $500,000 Electronic Numerical Integrator and Computer (ENIAC), funded by the US Army and described by the press as a “giant brain,” was unveiled in 1946.
But it was the issue of German bombers that obsessed Wiener after the German air force’s massive bombing of London in the fall of 1940. He wasn’t alone in his preoccupation with German aircraft. The US president, Franklin Delano Roosevelt, believed that it had been the overwhelming threat of German airpower that had led to the British appeasement of Hitler at Munich in 1938. So not only did Roosevelt commit the US military to producing ten thousand aircraft per year, but he also set up the National Defense Research Committee (NDRC), directed by Vannevar Bush, who by then had become the president’s chief scientific advisor, to invest in more cooperation between the US government and six thousand of the country’s leading research scientists.
While dean of the School of Engineering at MIT, Bush had set up the Radiation Lab, a group dedicated to figuring out how to enable antiaircraft guns to track and destroy those German bombers in the London sky. Recognizing that computers were potentially more than simply calculating machines, Wiener saw it as an information system challenge and invented a flight path predictor device that relied on a continuous stream of information that flowed back and forth between the gun and its operator. The polymath, with his interest in biology, philosophy, and mathematics, had serendipitously stumbled onto a new science of connectivity. In his eponymous bestselling 1948 book, Wiener called it “Cybernetics,”11 and this new communications theory had a profound influence on everything from Marshall McLuhan’s idea of information loops and J. C. R. Licklider’s work on the symbiosis between man and computer to the mechanics of the Google search engine and the development of artificial intelligence. There may not have been an electronic communications network yet, but the idea of a self-correcting information system between man and machine, “a thing of almost natural beauty that constantly righted its errors through feedback from its environment,” in the words of the technology writer James Harkin,12 was born with Wiener’s revolutionary flight path predictor machine.
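The shape of that feedback idea is easy to sketch in modern terms. The few lines of Python below are only an illustration, not Wiener's mathematics: the target speed, the correction gain, and the number of steps are all invented, and the point is simply that each new aim is corrected by the error fed back from the previous one.

```python
# A toy feedback loop in the spirit of cybernetics: the "gun" never knows the
# target's true speed, but it keeps correcting its aim with the error observed
# after each step. All numbers here are invented for illustration.

def track(target_speed=250.0, gain=0.5, steps=12):
    aim = 0.0                       # current estimate of the target's speed
    for step in range(steps):
        error = target_speed - aim  # feedback: how far off the last aim was
        aim += gain * error         # correct the next aim by part of that error
        print(f"step {step:2d}: aim = {aim:7.2f}, error was {error:7.2f}")
    return aim

if __name__ == "__main__":
    track()
```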
While Norbert Wiener’s technical challenge was making sense of scarce information, Vannevar Bush was worried about its overabundance. In September 1945, Bush published an article titled “As We May Think,” in the Atlantic Monthly magazine. The purpose of the essay was to answer the question “What are scientists to do next?” in the postwar age. Rather than making “strange destructive gadgets,” Bush called on American scientists to build thinking machines that would enrich human knowledge.
A seminal essay that was covered as a major news story by both Time and Life magazines on its release and was compared by the Atlantic Monthly editor to Emerson’s iconic 1837 “The American Scholar” address in its historical significance, “As We May Think” offers an introduction to an information network uncannily reminiscent of the World Wide Web. Bush argued that the greatest challenge for his country’s scientists in 1945 was to build tools for the new information age. Modern media products like radio, books, newspapers, and cameras were creating a massively indigestible overload of content. There was too much data and not enough time, he believed, highlighting a problem associated with what contemporary Internet scholars like Michael Goldhaber now call the “attention economy.”
“The summation of human experience is being expanded at a prodigious rate,” Bush explained, “and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.”13
At the heart of Bush’s vision was a network of intelligent links. “The process of tying two items together is the important thing,” he said in explaining his idea of organizing content together into what he called “trails,” which, he stressed, would never “fade.” Using new technologies like microphotography and cathode ray tubes, Bush believed that scientists could compress the entire Encyclopaedia Britannica to “the volume of a matchbox” or condense a million-book library into “one end of a desk.” Imagining a machine “which types when talked to” and that acts as a “mechanized private file and library,” Bush called his mechanized information storage device a “Memex.” Describing it as “an enlarged intimate supplement to his memory” that would mimic the “intricate web of trails carried by the cells of the brain,” Bush imagined it as a physical desktop product not unlike a personal computer, and which would have a keyboard, levers, a series of buttons, and a translucent screen.
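Bush described the Memex only as a mechanical desk, but his "trails" translate naturally into a simple linked structure. The sketch below is a hypothetical modern rendering in Python, with its two items borrowed loosely from the essay's own example of a researcher studying the Turkish bow; nothing in it comes from Bush beyond the idea of tying items together into named trails that never fade.

```python
# A hypothetical, modern rendering of Bush's "trails": named sequences of links
# tying stored items together, which never "fade" once they have been recorded.

from dataclasses import dataclass, field

@dataclass
class Memex:
    items: dict = field(default_factory=dict)   # item id -> stored content
    trails: dict = field(default_factory=dict)  # trail name -> ordered item ids

    def add_item(self, item_id, content):
        self.items[item_id] = content

    def tie(self, trail, item_id):
        # "The process of tying two items together is the important thing."
        self.trails.setdefault(trail, []).append(item_id)

    def follow(self, trail):
        # walk a trail in the order its items were tied together
        return [self.items[i] for i in self.trails.get(trail, [])]

memex = Memex()
memex.add_item("turkish-bow", "Notes on the short Turkish bow of the Crusades")
memex.add_item("elasticity", "An article on the elasticity of bow materials")
memex.tie("bow-and-arrow", "turkish-bow")
memex.tie("bow-and-arrow", "elasticity")
print(memex.follow("bow-and-arrow"))
```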
Along with its remarkable prescience, what is so striking about “As We May Think” is its unadulterated technological optimism. In contrast with Norbert Wiener, who later became an outspoken critic of government investment in scientific and particularly military research and who worried about the impact of digital computers upon jobs,14 Vannevar Bush believed that government investment in science represented an unambiguously progressive force. In July 1945, Bush also wrote an influential paper for President Roosevelt entitled “Science, The Endless Frontier,”15 in which he argued that what he called “the public welfare,” particularly in the context of “full employment” and the role of science in generating jobs, would be improved by government investment in technological research. “One of our hopes is that after the war there will be full employment,” Bush wrote to the president. “To reach that goal, the full creative and productive energies of the American people must be released.”
“As We May Think” reflects this same rather naïve optimism about the economics of the information society. Vannevar Bush insists that everyone—particularly trained professionals like physicians, lawyers, historians, chemists, and a new blogger-style profession he dubbed “trail blazers”—would benefit from the Memex’s automated organization of content. The particularly paradoxical thing about his essay is that while Bush prophesied a radically new technological future, he didn’t imagine that the economics of this information society would be much different from his own. Yes, he acknowledged, compression would reduce the cost of the microfilm version of the Encyclopaedia Britannica to a nickel. But people would still pay for content, he assumed, and this would be beneficial to Britannica’s publishers and writers.
The third member of the MIT trinity of Net forebears was J. C. R. Licklider. A generation younger than Bush and Wiener, Licklider came in 1950 to MIT, where he was heavily influenced by Norbert Wiener’s work on cybernetics and by Wiener’s legendary Tuesday night dinners at a Chinese restaurant in Cambridge, which brought together an eclectic group of scientists and technologists. Licklider fitted comfortably into this unconventional crowd. Trained as a psychologist, mathematician, and physicist, he had earned a doctorate in psychoacoustics and headed up the human engineering group at MIT’s Lincoln Laboratory, a facility that specialized in air defense research. He worked closely with the SAGE (Semi-Automatic Ground Environment) computer system, an Air Force–sponsored network of twenty-three control and radar stations designed to track Russian nuclear bombers. Weighing more than 250 tons and featuring 55,000 vacuum tubes, the SAGE system was the culmination of six years of development, 7,000 man-years of computer programming, and $61 billion in funding. It was, quite literally, a network of machines that one walked into.16
Licklider had become obsessed with computers after a chance encounter at MIT in the mid-1950s with a young researcher named Wesley Clark, who was working on one of Lincoln Laboratory’s new state-of-the-art TX-2 digital computers. While the TX-2 contained only 64,000 bytes of storage (that’s over a million times smaller than my current 64-gigabyte iPhone 5S), it was nonetheless one of the very earliest computers that both featured a video screen and enabled interactive graphics work. Licklider’s fascination with the TX-2 led him to an obsession with the potential of computing and, like Marshall McLuhan, the belief that electronic media “would save humanity.”17
Licklider articulated his vision of the future in his now-classic 1960 paper, “Man-Computer Symbiosis.” “The hope is that in not too many years, human brains and computing machines will be coupled . . . tightly,” he wrote, “and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”18
Just as Norbert Wiener saw computers as more than calculating devices able to solve differential equations and Vannevar Bush believed they could effectively organize information, Licklider recognized that these new thinking machines were, first and foremost, communications devices. A division of labor between men and computers, he argued, could save us time, refine our democracy, and improve our decision making.
In 1958, Licklider left MIT. He first worked at a Cambridge, Massachusetts–based consulting group called Bolt, Beranek and Newman (BBN). Then, in 1962, he moved to Washington, D.C., where he took charge of both the command and control and the behavioral sciences divisions of the Advanced Research Projects Agency (ARPA), a civilian group established by President Dwight Eisenhower in early 1958 to aggregate the best scientific talent for the public good. At ARPA, where he controlled a government budget of $10 million and came to head up the Information Processing Techniques Office, Licklider’s goal was the development of new programs that used computers as more than simply calculating machines. He gave ARPA contracts to the most advanced computer centers from universities like MIT, Stanford, Berkeley, and UCLA, and established an inner circle of computer scientists that a colleague dubbed “Lick’s Priesthood” and Licklider himself imagined as “The Intergalactic Computer Network.”19
There was, however, one problem with an intergalactic network. Digital computers—those big brains that Licklider called “information-handling machines”—could only handle their own information. Even the state-of-the-art devices like the TX-2 had no means of communicating with other computers. In 1962, computers still did not have a common language. Programmers could share individual computers among each other by “time-sharing,” which allowed them to work concurrently on a single machine. But every computer spoke in its own disparate language and featured software and protocols unintelligible to other computers.
But J. C. R. Licklider’s Intergalactic Computer Network was about to become a reality. The peace that Vannevar Bush welcomed in July 1945 had never really materialized. America had instead quickly become embroiled in a new war—the Cold War. And it was this grand geostrategic conflict with the Soviet Union that created the man-computer symbiosis that gave birth to the Internet.
From Sputnik to the ARPANET
On Friday, October 4, 1957, the Soviet Union launched their Sputnik satellite into earth’s orbit. The Sputnik Crisis, as President Eisenhower dubbed this historic Soviet victory in the space race, shook America’s self-confidence to the core. American faith in its military, its science, its technology, its political system, even its fundamental values was severely undermined by the crisis. “Never before had so small and so harmless an object created such consternation,” observed Daniel Boorstin in The Americans, writing about the loss of national self-confidence and self-belief that the crisis triggered.20
But along with all the doom and gloom, Sputnik also sparked a renaissance in American science, with the government’s research and development budget rising from $5 billion in 1958 to more than $13 billion annually between 1959 and 1964.21 ARPA, for example, with its initial $520 million investment and $2 billion budget plan, was created by President Eisenhower in the immediate aftermath of the crisis as a way of identifying and investing in scientific innovation.
But rather than innovation, the story of the Internet begins with fear. If the Soviets could launch such an advanced technology as Sputnik into space, then what was to stop them from launching nuclear missiles at the United States? This paranoia of a military apocalypse, “the specter of wholesale destruction,” as Eisenhower put it, so brilliantly satirized in Stanley Kubrick’s 1964 movie, Dr. Strangelove, dominated American public life after the Sputnik launch. “Hysterical prophecies of Soviet domination and the destruction of democracy were common,” noted Katie Hafner and Matthew Lyon in Where Wizards Stay Up Late, their lucid history of the Internet’s origins. “Sputnik was proof of Russia’s ability to launch intercontinental ballistic missiles, said the pessimists, and it was just a matter of time before the Soviets would threaten the United States.”22
The Cold War was at its chilliest in the late fifties and early sixties. In 1960, the Soviets shot down an American U-2 surveillance plane over the Urals. On August 13, 1961, the Berlin Wall, the Cold War’s most graphic image of the division between East and West, was constructed overnight by the German Democratic Republic’s communist regime. In 1962, the Cuban Missile Crisis sparked a terrifying contest of nuclear brinksmanship between Kennedy and Khrushchev. Nuclear war, once unthinkable, was being reimagined as a logistical challenge by game theorists at military research institutes like the RAND Corporation, the Santa Monica, California–based think tank set up by the US Air Force in 1948 to “provide intellectual muscle”23 for American nuclear planners.
By the late 1950s, as the United States developed hair-trigger nuclear arsenals that could be launched in a matter of minutes, it was becoming clear that one of the weakest links in the American military system lay with its long-distance communications network. Kubrick’s Dr. Strangelove had parodied a nuclear-armed America where the telephones didn’t work, but the vulnerability of its communications system to military attack wasn’t really a laughing matter.
As Paul Baran, a young computer consultant at RAND, recognized, America’s analog long-distance telephone and telegraph system would be one of the first targets of a Soviet nuclear attack. It was a contradiction worthy of Joseph Heller’s great World War II novel Catch-22. In the event of a nuclear attack on America, the key response should come from the president through the country’s communications system. Yet such a response would be impossible, Baran realized, because the communications system itself would be one of the first casualties of any Soviet attack.
The real issue, for Baran, was making America’s long-distance communications network invulnerable against a Soviet nuclear attack. And so he set about building what he called “more survivable networks.” It certainly was an audacious challenge. In 1959, the thirty-year-old, Polish-born Baran—who had only just started as a consultant at RAND, having dropped out of UCLA’s doctoral program in electrical engineering after he couldn’t find a parking spot one day on its Los Angeles campus24—set out to rebuild the entire long-distance American communications network.
This strange story has an even stranger ending. Not only did Baran succeed in building a brilliantly original blueprint for this survivable network, but he also accidentally, along the way, invented the Internet. “The phrase ‘father of the Internet’ has become so debased with over-use as to be almost meaningless,” notes John Naughton, “but nobody has a stronger claim to it than Paul Baran.”25
Baran wasn’t alone at RAND in recognizing the vulnerability of the nation’s long-distance network. The conventional RAND approach to rebuilding this network was to invest in a traditional top-down hardware solution. A 1960 RAND report, for example, suggested that a nuclear-resistant buried cable network would cost $2.4 billion. But Baran was, quite literally, speaking another language from the other analysts at RAND. “Many of the things I thought possible would tend to sound like utter nonsense, or impractical, depending on the generosity of spirit in those brought up in an earlier world,”26 he acknowledged. His vision was to use digital computer technology to build a communications network that would be invulnerable to Soviet nuclear attack. “Computers were key,” Hafner and Lyon write about Baran’s breakthrough. “Independently of Licklider and others in computing’s avant-garde, Baran saw well beyond mainstream computing, to the future of digital technologies and the symbiosis between humans and machines.”27
Digital technologies transform all types of information into a series of ones and zeros, thus enabling computer devices to store and replicate information with perfect accuracy. In the context of communications, digitally encoded information is much less liable to degrade than analog data. Baran’s computer-to-computer solution, which he viewed as a “public utility,”28 was to build a digital network that would radically change the shape and identity of the preexisting analog system. Based on what he called “user-to-user rather than . . . center-to-center operation,”29 this network would be survivable in a nuclear attack because it wouldn’t have a heart. Rather than being built around a central communication switch, it would be what he called a “distributed network” with many nodes, each connected to its neighbor. Baran’s grand design, articulated in his 1964 paper “On Distributed Communications,” prefigures the chaotic map that Jonas Lindvist would later design for Ericsson’s office. It would have no heart, no hierarchy, no central dot.
The second revolutionary aspect of Baran’s survivable system was its method for communicating information from computer to computer. Rather than sending a single message, Baran’s new system broke each message up into many digital pieces, flooding the network with what he called “message blocks,” which would travel arbitrarily across its many nodes and be reassembled by the receiving computer into readable form. Coined as “packet switching” by Donald Davies, a government-funded information scientist at Britain’s National Physical Laboratory, who had serendipitously been working on a remarkably similar set of ideas, the technology was driven by a process Baran called “hot potato routing,” which rapidly sent packets of information from node to node, guaranteeing the security of the message from spies.
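Neither Baran's reports nor Davies's designs are reproduced here, but the two ideas can be sketched in a few lines of Python. Everything in the sketch is invented for illustration, from the five-node map to the random "hot potato" hops: a message is chopped into numbered blocks, each block wanders to its destination by its own route, and the receiver reorders the blocks back into readable form.

```python
import random

# A toy sketch, not Baran's or Davies's actual design: split a message into
# numbered blocks, let each block hop independently across a distributed
# network with no central switch, then reassemble the blocks at the destination.

NETWORK = {            # each node knows only its neighbors; there is no center
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def split_into_blocks(message, size=8):
    # number each block by its offset so the receiver can restore the order
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def hot_potato_route(src, dst):
    # hand the block to a random neighbor until it reaches its destination;
    # on this small connected map the walk always gets there eventually
    node, path = src, [src]
    while node != dst:
        node = random.choice(NETWORK[node])
        path.append(node)
    return path

def send(message, src="A", dst="E"):
    blocks = split_into_blocks(message)
    random.shuffle(blocks)                 # blocks may travel out of order
    arrived = []
    for number, text in blocks:
        path = hot_potato_route(src, dst)  # each block finds its own route
        arrived.append((number, text, " -> ".join(path)))
    arrived.sort()                         # the receiver reassembles by block number
    for number, text, path in arrived:
        print(f"block at offset {number:2d} via {path}: {text!r}")
    return "".join(text for _, text, _ in arrived)

print(send("we shape our tools and thereafter our tools shape us"))
```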
“We shape our tools and thereafter our tools shape us,” McLuhan said. And, in a sense, the fate of Baran’s grand idea on computer-to-computer communication that he developed in the early 1960s mirrored the technology itself. For a few years, bits and pieces of his ideas pinged around the computer science community. And then, in the midsixties, they were reassembled back at ARPA.
J. C. R. Licklider, who never stayed in a job more than a few years, was long gone, but his idea of “the Intergalactic Computer Network” remained attractive to Bob Taylor, an ex–NASA computer scientist, who now was in charge of ARPA’s Information Processing Techniques Office. As more and more scientists around America were relying on computers for their research, Taylor recognized that there was a growing need for these computers to be able to communicate with one another. Taylor’s concerns were more prosaic than an imminent Russian nuclear attack. He believed that computer-to-computer communication would cut costs and increase efficiency within the scientific community.
At the time, computers weren’t small and they weren’t cheap. And so one day in 1966, Taylor pitched the ARPA director, Charles Herzfeld, on the idea of connecting them.
“Why not try tying them all together?” he said.
“Is it going to be hard to do?” Herzfeld asked.
“Oh no. We already know how to do it,” Taylor promised.
“Great idea,” Herzfeld said. “Get it going. You’ve got a million dollars more in your budget right now. Go.”30
And Taylor did indeed
