Internet entrepreneur Andrew Keen was among the earliest to write about the dangers that the Internet poses to our culture and society. His 2007 book The Cult of the Amateur was critical in helping advance the conversation around the Internet, which has now morphed from a tool providing efficiencies and opportunities for consumers and business to a force that is profoundly reshaping our societies and our world. In his new book, How to Fix the Future, Keen focuses on what we can do about this seemingly intractable situation. Looking to the past to learn how we might change our future, he describes how societies tamed the excesses of the Industrial Revolution, which, like its digital counterpart, demolished long-standing models of living, ruined harmonious environments and altered the business world beyond recognition. Travelling across the globe, from India to Estonia, Germany to Singapore, he investigates the best (and worst) practices in five key areas - regulation, innovation, social responsibility, consumer choice and education - and concludes by examining whether we are seeing the beginning of the end of the America-centric digital world. Powerful, urgent and deeply engaging, How to Fix the Future vividly depicts what we must do if we are to try to preserve human values in an increasingly digital world and what steps we might take as societies and individuals to make the future something we can again look forward to.
ALSO BY ANDREW KEEN
The Cult of the Amateur: How Today’s Internet Is Killing Our Culture
Digital Vertigo: How Today’s Online Social Revolution Is Dividing, Diminishing, and Disorienting Us
The Internet Is Not the Answer
First published in hardback in the United States of America in 2018 by Atlantic Monthly Press, an imprint of Grove Atlantic, Inc.
First published in hardback in Great Britain in 2018 by Atlantic Books, an imprint of Atlantic Books Ltd.
Copyright © Andrew Keen, 2018
The moral right of Andrew Keen to be identified as the author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of both the copyright owner and the above publisher of this book.
1 2 3 4 5 6 7 8 9
A CIP catalogue record for this book is available from the British Library.
Hardback ISBN: 978-1-78649-164-0
Trade Paperback ISBN: 978-1-78649-166-4
Paperback ISBN: 978-1-78649-168-8
E-book ISBN: 978-1-78649-167-1
Printed in Great Britain
Atlantic Books
An imprint of Atlantic Books Ltd
Ormond House
26–27 Boswell Street
London
WC1N 3JZ
www.atlantic-books.co.uk
For our kids
Preface: An Internet of People
Introduction: We’ve Been Here Before
1 More’s Law
2 Five Tools for Fixing the Future
3 What Is Broken
4 Utopia: A Case Study (Book One)
5 Utopia: A Case Study (Book Two)
6 Regulation
7 Competitive Innovation
8 Social Responsibility
9 Worker and Consumer Choice
10 Education
Conclusion: Our Kids
Acknowledgments
Notes
Index
“The Senate also has a standing rule never to debate a matter on the same day that it is first introduced but to put it off till the next morning. This they do so that a man will not blurt out the first thought that occurs to him, and then devote all his energies to defending his own proposals, instead of considering the common interest. They know that some men have such a perverse and preposterous sense of shame that they would rather jeopardize the general welfare than their own reputation by admitting they were short-sighted in the first place. They should have had enough foresight at the beginning to speak with consideration rather than haste.”
—Thomas More, Utopia1
Having spent the last decade writing critically about the digital revolution, I’ve been called everything from a Luddite and a curmudgeon to the “Antichrist of Silicon Valley.” At first I was part of a small group of dissenting authors who challenged the conventional wisdom about the internet’s beneficial impact on society. But over the last few years, as the zeitgeist has zigged from optimism to pessimism about our technological future, more and more pundits have joined our ranks. Now everyone, it seems, is penning polemics against surveillance capitalism, big data monopolists, the ignorance of the online crowd, juvenile Silicon Valley billionaires, fake news, antisocial social networks, mass technological unemployment, digital addiction, and the existential risk of smart algorithms. The world has caught up with my arguments. Nobody calls me the Antichrist anymore.
Timing—as I know all too well from my day job as a serial entrepreneur of mostly ill-timed start-ups—is everything. Having written three books exposing the dark side of the digital revolution, I think the time is now right for something more positive. So, rather than yet another noxious screed against contemporary technology, this book offers what I hope are constructive answers to the myriad questions on the digital horizon. To borrow a fashionable Silicon Valley word, this represents a pivot in my writing career. What you are about to read is a solutions book. It’s obvious that the future needs to be fixed. The question now is how to fix it.
This is also a people book. What I’ve tried to write is a human narrative. It’s the story of how people all over the world—from Estonia and Singapore to India, Western Europe, the United States, and beyond—are trying to solve the great challenges of our digital age. “Out of the crooked timber of humanity,” the eighteenth-century German philosopher Immanuel Kant suggested, “no straight thing was ever made.” But there is one straight thing about all the people described in this book. Although there might not be any single solution, any magic bullet for creating an ideal network society, what unites all these people is their determination—what I call “agency”—to shape their own fate in the face of technological forces that often seem both uncontrollable and unaccountable.
There is today much hype, some of it justified, about the “internet of things”—the network of smart objects that is the newest new thing in Silicon Valley. Rather than an internet of things, however, this book showcases an internet of people. I show that instead of smart technology, it’s smart human beings, acting as they’ve always done throughout history—as innovators, regulators, educators, consumers, and, above all, as engaged citizens—who are fixing the twenty-first-century future. At a time when our traditional notion of “humanity” is threatened by artificial intelligence (AI) and other smart technologies, it’s this humanist verity—the oldest of old things—that is the central message of the book.
There is, however, nothing inevitable about a global network of people successfully fixing the future. The issues that confront us are urgent and complex. Time is many things, but it isn’t infinite, at least not for us humans. The digital clock, which seems to proceed at a more accelerated pace than its analog forebear, is already ticking furiously. Unless we act now, we increasingly risk becoming powerless appendages to the new products and platforms of Big Tech corporations. This book, then, is also a call to arms in a culture infected by a creeping (and creepy) technological determinism. And it’s a reminder that our own human agency—our timeless responsibility to shape our own societies—is essential if we are to build a habitable digital future.
In contrast with smart cars, the future will never be able to drive itself. None of us, not even the Antichrist of Silicon Valley, have superhuman powers. But by working together, as we’ve done throughout history, we can build a better world for our children. This book is dedicated to them. They are why the future matters.
Andrew Keen
Berkeley, California
July 2017
The future, it seems, is broken. We are caught between the operating systems of two quite different civilizations. Our old twentieth-century system doesn’t work anymore, but its replacement, a supposedly upgraded twenty-first-century version, isn’t functioning properly either. The signs of this predicament are all around us: the withering of the industrial economy, a deepening inequality between rich and poor, persistent unemployment, a fin-de-siècle cultural malaise, the unraveling of post–Cold War international alliances, the decline of mainstream media, the dwindling of trust in traditional institutions, the redundancy of traditional political ideologies, an epistemological crisis about what constitutes “truth,” and a raging populist revolt against the establishment. And while we are all too familiar with what is broken, we don’t seem to know how we can get anything to work anymore.
What is causing this great fragmentation? Some say too much globalization, others say not enough. Some blame Wall Street and what they call the “neoliberalism” of free market monetary capitalism, with its rapacious appetite for financial profit. Then there are those who see the problem in our new, unstable international system—for instance, the cult-of-personality authoritarianism in Russia, which they say is destabilizing Europe and America with a constant barrage of fake news. There’s the xenophobic reality television populism of Donald Trump and the success of the Brexit plebiscite in the United Kingdom—although sometimes it’s hard to tell if these are causes or effects of our predicament. What is clear, however, is that our twentieth-century elites have lost touch with twenty-first-century popular sentiment. This crisis of our elites explains not only the scarcity of trust bedeviling most advanced democracies but also the populist ressentiment, on both left and right, against the traditional ruling class. Yet it also feels as if we are all losing touch with something more essential than just the twentieth-century establishment. Losing touch with ourselves, perhaps. And with what it means to be human in an age of bewilderingly fast change.
As Steve Jobs used to say, teasing his audience before unveiling one of Apple’s magical new products, there’s “one more thing” to talk about here. And it’s the biggest thing of all in our contemporary world. It is the digital revolution, the global hyperconnectivity powered by the internet, that lies behind much of the disruption.
In 2016, I participated in a two-day World Economic Forum (WEF) workshop in New York City about the “digital transformation” of the world. The event’s focus was on what it called the “combinatorial effects” of all these new internet-based technologies—including mobile, cloud, artificial intelligence, sensors, and big data analytics. “Just as the steam engine and electrification revolutionized entire sectors of the economy from the eighteenth century onward,” the seminar concluded, “modern technologies are beginning to dramatically alter today’s industries.”1 The economic stakes in this great transformation are dizzying. Up to $100 trillion can be realized in the global economy by 2025 if we get the digital revolution right, the WEF workshop promised.
And it’s not only industry that is being dramatically changed by these digital technologies. Just as the industrial revolution transformed society, culture, politics, and individual consciousness, so the digital revolution is changing much about twenty-first-century life. What’s at stake here is worth considerably more than just $100 trillion. Today’s structural unemployment, inequality, anomie, mistrust, and the populist rage of our anxious times are all, in one way or another, a consequence of this increasingly frenetic upheaval. Networked technology—enabled in part by Jobs’s greatest invention, the iPhone—in combination with other digital technologies and devices, is radically disrupting our political, economic, and social lives. Entire industries—education, transportation, media, finance, health care, and hospitality—are being turned upside down by this digital revolution. Much of what we took for granted about industrial civilization—the nature of work, our individual rights, the legitimacy of our elites, even what it means to be human—is being questioned in this new age of disruption. Meanwhile, Silicon Valley is becoming the West Coast version of Wall Street, with its multibillionaire entrepreneurs taking the role of the new masters of the universe. In 2016, for example, tech firms gave out more stock-based compensation than Wall Street paid out in bonuses.2 So, yes, our new century is turning out to be the networked century. But, to date, at least, it’s a time of ever-deepening economic inequality, job insecurity, cultural confusion, political chaos, and existential fear.
We’ve been here before, of course. As the “digital transformation” WEF workshop reminds us, a couple of hundred years ago the similarly disruptive technology of the industrial revolution turned the world upside down, radically reinventing societies, cultures, economies, and political systems. The nineteenth-century response to this great transformation was either a yes, a no, or a maybe to all this bewildering change.
Reactionaries, mostly Luddites and romantic conservatives, wanted to destroy this new technological world and return to what appeared to them, at least, to be a more halcyon era. Idealists—including, ironically enough, both uncompromisingly free market capitalists and revolutionary communists—believed that industrial technology would, if left to unfold according to its own logic, eventually create a utopian economy of infinite abundance. And then there were the reformers and the realists—a broad cross-section of society, including responsible politicians on both the left and the right, businesspeople, workers, philanthropists, civil servants, trade unionists, and ordinary citizens—who focused on using human agency to fix the many problems created by this new technology.
Today we can see similar responses of yes, no, or maybe to the question of whether the dramatic change swirling all around us is to our benefit. Romantics and xenophobes reject this globalizing technology as somehow offending the laws of nature, even of “humanity” itself (an overused and under-defined word in our digital age). Both Silicon Valley techno-utopians and some critics of neoliberalism insist that the digital revolution will, once and for all, solve all of society’s perennial problems and create a cornucopian postcapitalist future. For them, much of this change is inevitable—“The Inevitable”3 according to one particularly evangelical determinist. And then there are the maybes, like myself—realists and reformers rather than utopians or dystopians—who recognize that today’s great challenge is to try to fix the problems of our great transformation without either demonizing or lionizing technology.
This is a maybe book, based on the belief that the digital revolution can, like the industrial revolution, be mostly successfully tamed, managed, and reformed. It hopes that the best features of this transformation—increased innovation, transparency, creativity, even a dose of healthy disruption—might make the world a better place. And it outlines a series of legislative, economic, regulatory, educational, and ethical reforms that can, if implemented correctly, help fix our common future. Just as the digital revolution is being driven by what that WEF workshop called the “combinatorial effects” of several networked technologies, solving its many problems requires an equally combinatorial response. As I’ve already argued, there is no magic bullet that can or will ever create the perfect society—digital or otherwise. So relying on a single overriding solution—a perfectly free market, for example, or ubiquitous government regulation—simply won’t work. What’s needed, instead, is a strategy combining regulation, civic responsibility, worker and consumer choice, competitive innovation, and educational solutions. It was this multifaceted approach that eventually fixed many of the most salient problems of the industrial revolution. And today we need an equally combinatorial strategy if we are to confront the many social, economic, political, and existential challenges triggered by the digital revolution.
Maybe we can save ourselves. Maybe we can better ourselves. But only maybe. My purpose in this book is to draw a map that will help us find our way around the unfamiliar terrain of our networked society. I traveled several hundred thousand miles to research that map—flying from my home in Northern California to such faraway places as Estonia, India, Singapore, and Russia, as well as to several Western European countries and many American cities outside California. And I interviewed close to a hundred people in the many places I visited—including presidents, government ministers, CEOs of tech start-ups, heads of major media companies, top antitrust and labor lawyers, European Union commissioners, leading venture capitalists, and some of the most prescient futurists in the world today. The wisdom in this book is theirs. My role is simply to join the dots in the drawing of a map that they have created with their actions and ideas.
One of the most prescient people at the 2016 WEF workshop was Mark Curtis, a serial start-up entrepreneur, writer, and design guru who is also cofounder of Fjord, a London-based creative agency owned by the global consultancy firm Accenture. “We need an optimistic map of the future which puts humans in its center,” Curtis said to me when I later visited him at the Fjord office near Oxford Circus in London’s West End. It’s a map, he explained, that should provide guidance for all of us about the future—establishing in our minds the outlines of an unfamiliar place so that we can navigate our way around this new terrain.
This book, I hope, is that map. From old carpet factories in Berlin to gentlemen’s colonial clubs in Bangalore to lawyers’ offices in Boston to the European Commission headquarters in Brussels and beyond, How to Fix the Future offers a new geography of how regulators, innovators, educators, consumers, and citizens are fixing the future. But there’s no Uber- or Lyft-style service that can whisk us, with the click of a mouse or the swipe of a finger, into the future. No, not even the smartest technology can solve technological problems. Only people can. And that’s what this book is about. It is the story of how some people in some places are solving the thorniest problems of the digital age. And how their example can inspire the rest of us to do so too.
This nineteenth-century room is full of twenty-first-century things. The room itself—the entire top floor of what was once a Berlin factory—is decrepit, its brick walls shorn of paint, its wooden floors splintered, the pillars holding up its low ceiling chipped and cracked. The four-story brick building, one of Berlin’s few remaining nineteenth-century industrial monuments, is named the Alte Teppichfabrik (the Old Carpet Factory). But, like so much else of old Berlin, this industrial shell is now filled with new people and new technology. This crowd of investors, entrepreneurs, and technologists are all staring at a large electronic screen in front of them. It is broadcasting the image of a bespectacled young man with a pale, unshaven face, staring intently into a camera. Everyone in the room is watching him speak. They are all listening raptly to the most notorious person in cyberspace.
“What we are losing is a sense of agency in our societies,” he tells them. “That’s the existential threat we all face.”
The whole spectacle—the dilapidated room, the mesmerized audience, the pixelated face flickering on the giant screen—recalls for me one of television’s most iconic commercials, the Super Bowl XVIII slot for the Apple Macintosh computer. In this January 1984 advertisement for the machine that launched the personal computer age, a man on a similarly large screen in a similarly decrepit room addresses a crowd of similarly transfixed people. But in the Macintosh commercial the man is a version of Big Brother, the omniscient tyrant from Orwell’s twentieth-century dystopian novel Nineteen Eighty-Four. The young man on the Berlin screen, in contrast, is an enemy of authoritarianism. He is someone who, at least in his own mind, is a victim rather than a perpetrator of tyranny.
His name is Edward Snowden. A hero to some and a traitorous hacker to others, he is the former NSA contractor who, having leaked classified information about a series of US government surveillance programs, fled to Vladimir Putin’s Russia and now mostly communicates with the outside world through cyberspace.
The Berlin audience has come to the old carpet factory for a tech event titled “Encrypted and Decentralized,” organized by the local venture firm BlueYard Capital. Its purpose—like that of this book—is to figure out how to fix the future. “We need to encode our values not just in writing but in the code and structure of the internet,” the invitation to the event had said. Its goal is to insert our morality into digital technology so that the internet reflects our values.
Snowden’s electronic face on the Berlin screen is certainly a portrait of human defiance. Staring directly at his German audience, he repeats himself. But this time, rather than an observation about our collective powerlessness, his message is more like a call to arms.
“Yes, what we are losing,” he confirms, “is a sense of agency in our society.”
It is perhaps appropriate that he should be offering these thoughts from cyberspace. The word “cyberspace” was coined by the science fiction writer William Gibson in his 1984 novel Neuromancer and was invented to describe a new realm of communication among personal computers such as the Apple Macintosh. Gibson adapted it from the word “cybernetics,” a science of networked communications invented by the mid-twentieth-century Massachusetts Institute of Technology (MIT) mathematician Norbert Wiener. And Wiener named his new science of connectivity after the ancient Greek word kybernetes, meaning a steersman or a pilot. It was no coincidence that Wiener—who, along with fellow MIT alumni Vannevar Bush and J.C.R. Licklider,1 is considered a father of the internet—chose to name his new science after kybernetes. Networked technology, Wiener initially believed, could steer or pilot us to a better world. This assumption, which Wiener shared not only with Bush and Licklider, but with many other twentieth-century visionaries—including Steve Jobs and Steve Wozniak, the cofounders of Apple—was based on the conviction that this new technology would empower us with agency to change our societies. “You’ll see why 1984 won’t be like ‘1984,’” promised the iconic Super Bowl XVIII advertisement about the transformative power of Jobs’s and Wozniak’s new desktop computer.
But Edward Snowden’s virtual speech at the Alte Teppichfabrik doesn’t share this optimism. Communicating in cyberspace, presumably from a Russian safe house a couple of thousand miles east of the German capital, Snowden is warning his Berlin audience that contemporary technology—the power of the network, in an age of ubiquitous computing, to snoop on and control everything we do—is undermining our power to govern our own society. Rather than a steersman, it has become a jailor.
“Individual privacy is the right to the self. It’s about power. It’s about the need to protect our reputation and be left alone,” Snowden tells the Berlin audience from cyberspace. In this nineteenth-century room, he is articulating a classic nineteenth-century sensibility about the inviolability of the self.
From somewhere in Putin’s Russia, Edward Snowden poses a question to his Berlin audience to which he knows the answer. “What does it mean,” he asks, “when we are all transparent and have no secrets anymore?”
In Snowden’s mind, at least, it means that we don’t exist anymore. Not in the way that nineteenth-century figures, like William Wordsworth or Henry James, regarded our intrinsic right to privacy.2 It’s the same argument that two American lawyers, Samuel Warren and Louis Brandeis, made in their now iconic 1890 Harvard Law Review article, “The Right to Privacy.” Writing in reaction to the then radically disruptive new technology of photography, the Boston-based Warren and Brandeis (the latter would later become a US Supreme Court justice) argued that “solitude and privacy have become more essential to the individual.” The right to “be let alone,” they thus wrote, was a “general right to the immunity of the person . . . The right to one’s personality.”3
So how do we restore nineteenth-century values to twenty-first-century life? How can agency be reinvented in the digital age?
At the climax of that 1984 advertisement for the Macintosh, a vigorous blonde in red-and-white workout gear bursts into the decrepit room and, hurling a hammer at the screen, blows up the image of Big Brother. She isn’t a Luddite, of course; the whole point of this one-minute Super Bowl slot, after all, was to convince its millions of viewers to spend $2,500 on a new personal computer. But the Apple commercial does remind us, albeit through Madison Avenue’s Technicolor-tinted lenses, about the central role of human agency in changing the world and in keeping us safe from those who would take away our rights.
The issue the virtual Edward Snowden is raising with his Berlin audience is also the central question in this book. How can we reassert our agency over technology? How do we become like that vigorous blonde in the Macintosh advertisement and once again make ourselves the pilots of our own affairs?
Edward Snowden is right. The future isn’t working. There’s a hole in it. Over the last fifty years we’ve invented transformational new technologies—including the personal computer, the internet, the World Wide Web, artificial intelligence, and virtual reality—that are transforming our society. But there is one thing that’s missing from this data-rich world. One thing that’s been omitted from the new operating system.
Ourselves. We are forgetting about our place, the human place, in this twenty-first-century networked world. That’s where the hole is. And the future, our future, won’t be fixed until we fill it.
Everything is getting perpetually upgraded except us. The problem is there’s no human version of Moore’s Law—Intel cofounder Gordon Moore’s 1965 observation that the number of transistors on a silicon chip, and with it processing power, would keep doubling at regular intervals, roughly every eighteen months to two years in its popular formulation.4 Today, half a century after Gordon Moore described the phenomenon that would later be named Moore’s Law,5 it remains the engine driving what the Pulitzer Prize–winning author Thomas Friedman calls our “age of acceleration.”6 So, yes, that iPhone in your pocket may be unrecognizably faster, and more connected, powerful, and intelligent, than its predecessor, the once-revolutionary Apple Macintosh personal computer, let alone a mid-sixties multimillion-dollar mainframe machine that required its own air-conditioned room to operate. But in spite of promises about the imminent merging of man and computer by prophets of the “Singularity”—such as Google’s chief futurist, Ray Kurzweil, who still insists that this synthesis will inevitably happen by 2029—we humans, for the moment at least, are no speedier, no smarter, and, really, no more self-aware than we were back in 1965.
What Friedman euphemistically dubs a “mismatch” between technology and humanity is, he says, “at the center of much of the turmoil roiling politics and society in both developed and developing countries today . . . [and] now constitutes probably the most important governance challenge across the globe.”7 As Joi Ito, the director of the MIT Media Lab, warns, when everything is moving quickly except us, the consequence is a social, cultural, and economic “whiplash.”8
Few people have given this asymmetry more thought than the philosopher whom Thomas Friedman acknowledges as his “teacher” in these matters, Dov Seidman, author of How and the CEO of LRN, which advises companies on ethical behavior, culture, and leadership.9,10
Seidman reminds us that “there’s no Moore’s Law for human progress” and that “technology can’t solve moral problems.” Most of all, however, he has taught me in our numerous conversations that the hyperconnected twenty-first-century world hasn’t just changed, but has been totally reshaped. And since this reshaping has occurred faster than we have reshaped ourselves, Seidman says, we now need to play “moral catch-up.”
Seidman describes a computer as a “brain outside of ourselves,” our “second brain.” But, he warns, from an evolutionary standpoint, there’s been what he calls an “exponential leap,” and this new brain has outpaced our heart, our morality, and our beliefs. We have become so preoccupied looking down at our second brains, he warns, that we are forgetting how to look smartly at ourselves. As these devices get faster and faster, we appear to be standing still; as they produce more and more data about us, we aren’t getting any more intelligent; as they become more and more powerful, we might even be losing control of our own lives. Instead of the Singularity, we may actually be on the brink of its antithesis—let’s call it the “Duality”—an ever-deepening chasm between humans and smart machines and also between tech companies and the rest of humanity.
Yes, Dov Seidman is right. Moore’s Law is, indeed, unmooring us. It feels as if we are drifting toward a world that we neither quite understand nor really want. And as this sense of powerlessness increases, so does our lack of trust in our traditional institutions. The 2017 Edelman Trust Barometer, the gold standard for measuring trust around the world, recorded the largest-ever drop in public trust toward public institutions. Trust in media, government, and our leaders all fell precipitously across the world, with trust in media being, for example, at an all-time low in seventeen countries. According to Richard Edelman, the president and CEO of Edelman, the implosion of trust has been triggered by the 2008 Great Recession, as well as by globalization and technological change.11 This trust scarcity is the “great question of our age,” Edelman told me when I visited him at his New York City office.
It seems paradoxical. On the one hand, the digital revolution certainly has the potential to enrich everyone’s life in the future; on the other, it is actually compounding today’s economic inequality, unemployment crisis, and cultural anomie. The World Wide Web was supposed to transform mankind into One Nation, what the twentieth-century Canadian new media guru Marshall McLuhan called, not without irony, a global village. But today’s Duality isn’t just limited to the chasm between humans and computers—it’s also an appropriate epithet for the growing gap between the rich and the poor, between the technologically overburdened and the technologically unemployed, between the analog edge and the digital center.
Just as at other radically disruptive moments in history, we are living simultaneously in the most utopian and dystopian of times. Technophiles promise us an abundant digital future; Luddites, in contrast, warn of an imminent techno-apocalypse. But the real problem lies with ourselves rather than with our new operating system. So the first step in fixing the future is to avoid the trap of either idealizing or demonizing technology. The second step is much trickier. It’s remembering who we are. If we want to control where we are going, we must remember where we’ve come from.
There’s one more paradox. Yes, everything might seem to be changing, but in other ways, nothing has really changed at all. We are told that we are living through an unprecedented revolution—the biggest event in human history, according to some; an existential threat to the species, according to others. Which may be true in some senses, although we’ve heard the same sort of dire warnings in the past. Back in the nineteenth century, for example, similar warnings were made by romantics like the poet William Blake about the catastrophic impact on humanity of what he called the “dark Satanic mills.” The future has, indeed, been both broken and fixed many times before in history. That’s the story of mankind. We break things and then we fix them in the same way that we always have—through the work of legislators, innovators, citizens, consumers, and educators. That’s the human narrative. And the issues that have always been most salient during previous social, political, and economic crises—the exaggerated power and wealth of elites, economic monopolies, excessively weak or strong government, the impact of unregulated markets, mass unemployment, the undermining of individual rights, cultural decay, the disappearance of public space, the existential dilemma of what it means to be human—are the same today as they’ve always been.
History is, indeed, full of such moments. In December 1516, for example, a little book was published in Louvain, today a university town in Belgium, then part of the Spanish Netherlands. This book came into a world that was in the midst of even more economic disruption and existential uncertainty than our own. The assumptions of the traditional feudal world were being challenged from every imaginable angle. Economic inequality, mass unemployment, and a millenarian angst were all endemic. The Polish astronomer Nicolaus Copernicus had just stumbled on the almost unspeakable realization that our planet wasn’t the center of the universe. The democratizing technology of Johannes Gutenberg’s printing press was undermining the centuries-old authority of the Catholic clergy. Most disorientating of all, populist preachers such as Martin Luther had invented the terrifying new theology of predestination that presented a Christian God of such infinite and absolute power that humans no longer had any free will or agency to determine their own fates. For many sixteenth-century folk, therefore, the future appeared profoundly broken. New cosmology and theology seemed to have transformed them into footnotes. They couldn’t imagine a place for themselves, as masters of their own destiny, in this new world.
That little book might, in part at least, have been intended to fix the future and reestablish man’s confidence in his own agency. It wasn’t much more than a pamphlet, written by a persecutor of heretics and a Christian saint, a worldly lawyer and an aspiring monk, a landowner and the conscience of the landless, a vulgar medieval humorist and a subtle classical scholar, a Renaissance humanist and a hair-shirted Roman Catholic, someone who was both an outspoken defender and an implicit critic of the old operating system of sixteenth-century Europe.
His name was Thomas More, and the book, written in Latin, was called Utopia—which can be translated into English as “No Place” or “Perfect Place.” More imagined an island outside time and space, a simultaneously dreamlike and nightmarish one-nation kind of place featuring a highly regulated economy, full employment, the complete absence of personal privacy, relative equality between men and women, and an intimate trust between ruler and ruled. In More’s Utopia, there were no lawyers, no expensive clothes, no frivolities of any kind. This no-place was—and still is—a provocation, a place forever on the horizon, an eternal challenge to the establishment, the most seductive of promises, and a dire warning.
Today, on its five-hundredth anniversary, we are told that this idea of Utopia is making a “comeback.”12 But the truth is that More’s creation never truly went away. Utopia’s universal relevance is based on both its timelessness and its timeliness. And as we drift from an industrial toward a networked society, the big issues that More raises in his little book—the intimate relationship between privacy and individual freedom, how society should provide for its citizens, the central role of work in a good society, the importance of trust between ruler and ruled, and the duty of all individuals to contribute to and improve society—remain as pertinent today as they’ve ever been.
The Irish playwright Oscar Wilde captured this timelessness in 1891 when discussing the then-new operating system of industrial capitalism. “A map of the world that does not include Utopia is not worth even glancing at, for it leaves out the one country at which Humanity is always landing. And when Humanity lands there, it looks out, and, seeing a better country, sets sail,” Wilde wrote in “The Soul of Man Under Socialism,” his moral critique of what he considered to be the immoral factories and slaughterhouses of industrial society.13
So what was the core message—“More’s Law,” so to speak—buried in this enigmatic sixteenth-century text?
It’s a question that has preoccupied generations of thinkers. Some see More as being nostalgic for a feudal commons that protected the so-called commonwealth of the traditional medieval community. Progressives like Oscar Wilde view the little book as a moral critique of nascent capitalism, while conservatives see it as a savage satire of agrarian communism. And then there are those—remembering More’s close friendship with the Dutch humanist theologian Erasmus of Rotterdam, the seriocomic author of In Praise of Folly—who see the book as little more than an extended practical joke, the cleverest of humanist follies.
All these different interpretations have sought clues in a text that is defiantly elusive. But there is another, quite different, way of looking at it. There were four editions of Utopia published between 1516 and 1518, the first in Louvain, the second in Paris, and the third and fourth—which, according to historians, were closest to More’s intent—in the Swiss city of Basel.14 The most striking difference between the first and last versions lies in the imaginary map of Utopia that visualizes the invented island. The Basel editions contain an elaborate map, commissioned by Erasmus and designed, most likely, by the Renaissance artist Ambrosius Holbein, elder brother of Hans Holbein the Younger, who is best known now for his 1533 painting The Ambassadors, a humanist masterpiece that, in its surreal dissonance, captures the sense of crisis pervading the period.15 The younger Holbein also painted Thomas More’s portrait in 1527—a more personalized masterpiece that captured the surreal dissonance in More’s life between a man of the world and a man of God.
This map may be the message. At first glance it appears to be of a hilly, circular island with a fortified town at its center and a harbor in the foreground sheltering two anchored ships. A closer examination, however, reveals a very different kind of geography. By closing one eye and staring at the illustration slightly off-kilter, we see Utopia transformed into a grinning human skull, a symbol denoting memento mori, the Latin expression meaning “Remember you have to die,” and a familiar trope in both classical Rome and medieval Europe. The island itself represents the skull’s outline. One ship is the neck and an ear; the other ship is the chin, with its mast as the nose and its hull as the teeth. The town is the forehead, with a combination of the hills and the river being the eyes of the skull.16
So what, exactly, was the point of transforming the map of Utopia into the image of a skull? As so often with More and his early-sixteenth-century humanist friends, there’s an element of esoteric humor here, with memento mori being a play on More’s surname and the substitution of the island for a skull representing a classic Erasmian folly. But there’s another, more life-affirming message, which, like the outline of that skull hidden in the map, isn’t immediately obvious to the eye.
The great debate of the early sixteenth century was between Renaissance humanists, such as More and Erasmus, and Reformation preachers like Luther, and it addressed the question of free will. Luther, you’ll remember, in his theory of predestination, presented a God of such absolute power that humans were shorn of their agency. The humanists, however, clung to the idea of free will. More’s Utopia is, indeed, a manifestation of that free will. By inventing an ideal society, More was demonstrating our ability to imagine a better world. And by presenting his vision of this community to his readers, he was inviting them to address the real problems in their own societies.
Utopia, then, is a call to action. It assumes that we possess the agency to improve our world. Therein lies the other significance of that grinning skull in Holbein’s map. In ancient Rome, the expression memento mori was used to remind successful generals of their fallibility. “Memento mori . . . Respice post te. Hominem te esse memento,” a slave would shout at the triumphant general during the public parade after a great military victory. “Yes, you will die,” the slave reminded the Roman hero. “But until then, remember you’re a man.” In pagan Rome, then, that skull was as much a symbol of life as of death. It was a reminder to cultivate the civic self and to make oneself useful in public affairs while one still had the chance.
In contrast with the technological determinism of Moore’s Law, this law of More’s refers to our duty to make the world a better place. In Utopia too, there is much talk of the “duty” we ought to have to our community. “All laws are promulgated for this end,” More writes, “that every man may know his duty.”
More’s Law, then, is Thomas More’s definition of what it should mean to be a responsible human being. He not only tried to live his life according to this principle; he also died for it, beheaded by his king, Henry VIII, for refusing to sanction Henry’s divorce from his first wife. Being part of the human narrative, More believed, means seizing control of our civic and secular fate.
In today’s age of acceleration, five hundred years after the publication of Utopia, many of us once again feel powerless as seemingly inevitable technological change reshapes our society. As More reminds us, fixing our affairs—by becoming steersmen or pilots of society—is our civic duty. It’s what made us human in the sixteenth century, and it’s what makes us human today.
“Hominem te esse memento,” the Roman slave would remind the victorious general. As we drift with seeming inevitability into a new hyperconnected world, these are words we should remember as we fight to establish our place in this unfamiliar landscape.
In Thomas Friedman’s 2016 bestselling Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations, there is a fifty-page introductory chapter lauding “Moore’s Law” and all but designating it the foundational truth about early-twenty-first-century society.17 But Gordon Moore’s observation about the processing power of silicon chips isn’t a particularly helpful guide, for either optimists or pessimists, to thriving in an age of accelerations. As Dov Seidman reminds us, it doesn’t tell us who we really are as human beings.
More’s Law is more useful because it explains how we should fill that hole in the future with human agency. “Humanity” is now trending in tech. It might not be quite as Manichaean a showdown as the “Technology Versus Humanity”18 or the “Digital Versus Human”19 cage match foreseen by some futurists, but the human costs of the digital revolution are quickly becoming the central issue of our digital society. Everyone, it appears, is waking up to a confrontation that the Israeli historian Yuval Noah Harari frames as “Dataism” versus “Humanism”—a zero-sum contest, he claims, between those who are known by the algorithm and those who “know thyself.”20 And everyone seems to have his or her own fix to ensure that “Team Human,”21 as the new media guru Douglas Rushkoff puts it, wins.
Everyone, it seems, wants to know what it means to be human in the digital age. A few days before the “Encrypted and Decentralized” event at the Alte Teppichfabrik, for example, I participated in a lunch discussion in Berlin that was unappetizingly called “Toward a Human-Centered Data Revolution.” The month before, I’d spoken at Oxford about “The True Human,” in Vienna about “Reclaiming Our Humanity,” and in London about why “The Future of Work Is Human.” Klaus Schwab, the Swiss founder of the World Economic Forum, exemplifies this preoccupation with a new humanism. It “all comes down to people and values,” he explains about the impact of digital technology on jobs,22 which is why we need what he calls “a human narrative” to fix its problems.23
To write a human narrative in today’s age of smart machines requires a definition of what it means to be human. “As soon as you start defining the question what is human, it becomes a belief,” the writer and inventor Jaron Lanier once warned me over lunch in New York as we prepared for a debate about the impact of AI on humanity. Lanier may be right. But in a world in which we’ve invented technology that is almost human, it seems only natural that we would want to compare ourselves with smart machines in an effort to define both ourselves and this new technology. Besides, if we don’t believe in our own humanity, then what can we believe in?
To come up with a distinction between human beings and computers, I spoke to Stephen Wolfram, the CEO of the Massachusetts-based computer software company Wolfram Research and one of the world’s most accomplished computer scientists and technology entrepreneurs. Educated at Eton, Oxford, and Caltech, Wolfram was awarded his doctorate in theoretical physics at the age of twenty and a MacArthur Fellowship at twenty-two, the youngest person ever to have received one of these $625,000 “genius” awards. He is the author of the bestselling and critically acclaimed A New Kind of Science. He’s the creator of the influential mathematical software program Mathematica and the curated online knowledge resource WolframAlpha, a kind of superintelligent Google, which, among many other things, is the engine providing the factual answers to queries submitted to Apple’s Siri. And, as if all that weren’t enough, he’s the inventor of the Wolfram Language, a programming language built on top of Mathematica and WolframAlpha that is designed to help us communicate with computers.
I first met Wolfram in Amsterdam at the Next Web Conference. But rather than discussing the abstract future, we spent a most pleasant evening chatting about our personal futures—our own children. He is a champion of homeschooling, with a couple of his kids being educated at home by him and his mathematician wife. His mother, Sybil, was a teacher too—an Oxford University philosopher with a particular interest in Ludwig Wittgenstein’s philosophy of language.
“What do I do?” Wolfram repeats my words gingerly, as if nobody had ever asked the multimillionaire software entrepreneur, world-famous physicist, and bestselling writer such a challenging question.
He explains that what he does—or at least tries to do—is teach humans to understand the language of machines. He is building an AI language we can all understand.
“I want to create a common language for machines and people,” he tells me. “Traditional computer languages pander to machines. While natural human language isn’t replicable in machines.”
I ask him if he shares the fear of AI pessimists who believe that technology could develop a mind of its own and thereby enslave us.
Computers, the thinking machines imagined by the Victorian mathematician Ada Lovelace and her collaborator, Charles Babbage, in the mid-nineteenth century, are the defining invention of the last couple of hundred years, Wolfram explains. But the one thing that they don’t possess, he insists, is what he calls “goals.” Computers don’t know what to do next, he says. We can’t program them to know that. They couldn’t write the next paragraph of this book. They can’t fix the future.
Wolfram is a great admirer of Ada Lovelace, and his argument is, essentially, a rephrasing of her thoughts on the intellectual limitations of computer software. “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform . . .” Lovelace famously wrote in 1843. “Its province is to assist us in making available what we are already acquainted with.”24
“If a lion could speak, we could not understand him,” Wolfram says, quoting one of Ludwig Wittgenstein’s most elliptical aphorisms from his Philosophical Investigations. And the same is true of even the smartest thinking machines, he says. If one of these machines could talk, we couldn’t understand its real meaning, because of our differences. If machines could speak our language, they wouldn’t be able to fully understand us, because we have goals and they, as Ada Lovelace explains, can’t “originate” anything.
The truth about the meaning of “humanity” is that there’s no truth. No absolute truth, at least. Every generation defines it according to its own preoccupations and circumstances. So, for example, the humanism of the original Renaissance was rooted in discovering and then reconnecting with a history that had been lost in the Dark Ages. For Thomas More or Niccolò Machiavelli, being human meant putting on the robes, sometimes quite literally, of antiquity. Five hundred years later, our preoccupations and circumstances are very different. What it means to be human today is bound up in our relationship with networked technology, particularly thinking machines. If there is to be a new renaissance, this relationship with smart tech will be the core of its new humanism.
Wolfram’s definition, with its focus on human volition, is both timely and timeless. Our unique role in the early twenty-first century is, in Ada Lovelace’s words, to be able to “originate” things. This is what distinguishes us from smart machines. But it’s also an updated version of More’s Law, with its reminder of our mortality and its focus on our civic responsibility to make the world a better place.
The way to solve the most vexing problems of the future, the WEF CEO Klaus Schwab says, is by creating a story about people—a human narrative. And that’s the goal of this book too. In the midst of today’s great digital transformation, this story features the many solutions of many different people in many different places to the many challenges of our new network epoch. They are all filling that hole in the future. Obeying More’s Law, they are trying to design a new operating system for humans rather than for machines. What unites them all is their insistence that, in the age of the smart networked machine, we humans must seize back control of our own fate and, once again, author our own story.
The nineteenth-century neighborhood is full of twenty-first-century things. I’m with my old friend John Borthwick, the founder and CEO of Betaworks, a New York City–based venture studio that incubates technology start-up companies. We are at the Betaworks studio in New York’s Meatpacking District—the downtown Manhattan neighborhood named after its industrial-scale slaughterhouses—which is now one of the city’s most fashionable areas. Along with its cobbled streets, boutique stores, exclusive clubs, and restaurants, the area is best known as being the southern terminus of the High Line—the section of the old New York Central Railroad that has been successfully reinvented as a mile-and-a-half-long elevated public park.
Borthwick’s studio is located in a cavernous old brick building that has been converted from a decaying warehouse into an open-plan workspace. The place is lined with young computer programmers—Betaworks’ so-called “hackers in residence”—peering at electronic screens. It’s a kind of renaissance. The analog factory has been reborn as a digital hub. These hackers are manufacturing the twenty-first-century networked world from inside a nineteenth-century industrial shell.
But this new world is still in beta—the word the tech industry uses to describe a product that’s not quite ready for general release. And it’s this emerging place—betaland, so to speak—that I’ve come to talk about with Borthwick. We’ve been friends for years. Like me, he was a start-up entrepreneur during the first internet boom of the mid-nineties. In 1994, fresh out of Wharton business school, he founded a New York City information website called Ada Web, in honor of Ada Lovelace. Borthwick sold Ada Web and several other internet properties to the internet portal America Online in 1997 and became its head of new product development. He then ran technology at the multinational media conglomerate Time Warner before founding Betaworks in 2008, and there he’s made his fortune investing in multibillion-dollar hits such as Twitter and Airbnb.
“I fell in love with the idea of the internet,” Borthwick says, explaining why he became an internet entrepreneur, articulating the same faith as such mid-twentieth-century pioneers as Norbert Wiener that networked technology could pilot us to a better world. It was the idea that a new networked world could be better than the old industrial one. The idea that the internet could transform society by making it more open, more innovative, and more democratic.
Over the last quarter century, however, Borthwick’s youthful faith in this idea has evolved into a more ambivalent attitude toward the transformative power of digital technology. As we sit in one of the studio’s meeting rooms, surrounded by his hackers in residence, we speculate on the networked world on the horizon. The innocence of the nineties, the faith in the internet’s seemingly unlimited potential—all that openness, innovation, and democracy—has been replaced by the realization that things aren’t quite right in betaland.
As we talk, we realize we agree that today’s vertiginous atmosphere of social divisiveness, political mistrust, economic uncertainty, and cultural unease is—in part, at least—a consequence of the digital revolution. In contrast, however, with the crusading Edward Snowden, Borthwick is realistic rather than pessimistic about the future. He understands as well as anyone the remarkable achievements of the digital revolution, but he is cognizant of the problems too. He is—like me—a maybe.
So, how to rebuild the future and manifest the human agency that Snowden says we’ve lost? “Five fixes, John,” I say. “Give me five bullet points on how we can fall back in love with the future.”
