Contents
Cover
Title Page
Copyright
Dedication
Preface
Acknowledgments
Some Key Terms
Part I: Introduction and Framing
Chapter 1: The Digital Universe: A “Quick-Start” Introduction
Three Types of Digital Literacy
Becoming Critically Literate
Rhetorical Literacy
Cybernetics
Navigating this Text
Chapter 2: Thinking About Moore's Law
The Prediction
Implications for Computing and the Digital Universe
Technological Determinism
The Rise of Nanotechnology and the Future of Moore's Law
Chapter 3: Critical Perspectives
E-mail and the Age of Interruption
Jacques Ellul's Critique of Technology
The Tao of Digital Technology – Yin and Yang
Is the Digital Future One of Doom and Gloom?
Negotiating the Role of Technology in Modern Life
Part II: Internet and Web History
Chapter 4: Origins of the Internet
Foundations
DARPA's Information Processing Techniques Office
Paul Baran and the Survivable Communications Network
Development of the ARPANET
Licklider to Taylor to Roberts at ARPA
Building the ARPANET
The Father of All Demos
Chapter 5: Origins of the Internet
Part 2 – From ARPANET to Internet
The Development of TCP/IP
The Emergence of the Personal Computer
Internet Growth in the 1980s
Chapter 6: The Web
The First Web of Information
Ted Nelson's Dream of Xanadu and Douglas Engelbart's oN-Line System
The Development of the Web
Mosaic, AOL, and the Growth of the Web
Web 2.0 and the Architecture of Participation
Facebook as a Case Study
Part III: Telecommunication and Media Convergence
Chapter 7: Telecommunication and the “Flat” World
“What hath God wrought”
The Atlantic Cable
Communication, Empire, and Harold Innis
Evolution of the Flat World
Chapter 8: Digital Media Convergence
Convergence
Analog to Digital
Xerox's PARC
Atoms and Bits – Benefits to Digitization
Five Digital Attributes
Part IV: Internet Control, Cyberculture, and Dystopian Views
Chapter 9: The Public and Private Internet
Internet Management and Governance
Privatization of the US Internet in the 1990s
The International Struggle Over Internet Governance
The Day that Jon Postel Seized Control of the Top-Level Domains
ICANN as the Middle Path
The Internet as a Medium of Democratic Communication
The Social Embeddedness of the Internet
Chapter 10: Censorship and Global Cyberculture
Censoring the Internet
Internet Censorship in Iran
The Great Firewall of China
Bypassing the Great Firewall
The US Government and WikiLeaks
Global Information and Communication Technology Use
Chapter 11: The Dark Side
Privacy and the Digital Universe
Privacy and Population
Changing Public Perceptions of Privacy
The Online Privacy Continuum
The Surveillance Society
The Invisible Databases
Global Threats to the Internet
Cyber Warfare
Part V: New Communication Technologies and the Future
Chapter 12: Wired and Wireless Technologies
Wired is Not Tired
The Diffusion of Broadband Internet Access
The Wireless Phone Revolution
Global Wireless Telephony
The Social Effects of Mobile Phone Use
Chapter 13: Virtual and Augmented Worlds
The Sensorama and Morton Heilig
The State of Digital Reality
Sketchpad and Computer Graphics
Virtual Reality
From ArchMac to Google Earth
Video Games as Virtual Worlds
Two Virtual Worlds: Second Life and World of Warcraft
Augmented Reality
Replicating the World in 4-D
Chapter 14: The Future of the Digital Universe
The Future of the Cloud
Augmented Human Intelligence
The Flash Crash and Other Dystopian Tales
Predictions of Superhuman Intelligence
Critical Perspectives
A Humanistic Perspective
Index
This edition first published 2012
© 2012 Peter B. Seel
Blackwell Publishing was acquired by John Wiley & Sons in February 2007. Blackwell's publishing program has been merged with Wiley's global Scientific, Technical, and Medical business to form Wiley-Blackwell.
Registered Office
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK
Editorial Offices
350 Main Street, Malden, MA 02148-5020, USA
9600 Garsington Road, Oxford, OX4 2DQ, UK
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK
For details of our global editorial offices, for customer services, and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com/wiley-blackwell.
The right of Peter B. Seel to be identified as the author of this work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
Library of Congress Cataloging-in-Publication data is available for this book.
ISBN 9781405153294 (hardback)
ISBN 9781405153300 (paperback)
This book is dedicated to my loving wife and life partner
Nanci Eileen Seel
who endured many hours of separation over several years while I worked on the manuscript. She has accomplished more than she is aware to make our academic and creative endeavors a success, while creating a warm home for our family. This book would not have been possible without her support.
*
The book is also dedicated to two scientists and humanists who intimately understood the digital universe from very different perspectives – and were both visionaries that I wish I could have worked with:
Joseph Carl Robnett Licklider
(1915–1990)
computer scientist and intellect augmentation visionary
and
Neil Postman
(1931–2003)
semanticist and astute critic of technology
Preface
This book is about computer-based information and communication technologies and their substantial effects on contemporary life. In more developed nations, digital displays are found everywhere, from small ones on mobile phones to enormous projected ones in movie theaters. The typical worker in the information age spends her or his day engrossed with digital technology and then goes home to yet another set of digital devices for communication, information-processing, and entertainment. These technologies have given society an unparalleled range of tools for communication and connectivity. Anyone in the world with a mobile telephone – presently five billion people of the earth's population of seven billion – can be reached with a few keystrokes. Increasing proportions of these subscribers will have full Internet access as they upgrade to 3G and 4G services, and their tiny mobile phones may be one key solution to bridging the digital divide between the information “haves” and “have nots” on the planet.
This is an unprecedented era in the evolution of humanity. During the lifetime of those born after 1940 there has been an astonishing augmentation of human intellect by online access to all of the world's collective stored information. The barriers to planetary communication presented by the babel of human languages have been diminished by online translation, whose accuracy will improve in this century. Access to this sea of information is not enough – we as a society must have the intellectual tools to make sense of it all and the individual and societal wisdom to use it wisely. Digital devices have improved our access to knowledge, but cannot yet make us wise.
In my own lifetime, I have witnessed the power of television to telecast events in real time as they occur anywhere on the planet. I started my career in educational technology and media production just as the first personal computers appeared on desktops in the workplace. We connected them to VCRs to deliver computer-based training programs linked to related video programs. While working on my doctorate in the early 1990s, I recall a friend dragging me into a computer lab to see something new online called the World Wide Web. At the time we had no clue that a day would come when anyone could create a personal website in less than 30 minutes using templates available at Weebly, Wix, or Google Sites. The notion that a website dedicated to building social relationships would have over 800 million worldwide subscribers would have been hilarious in 1995 – now I access my Facebook page daily to look for new posts from my friends. We live in an era with access to remarkable information and communication technologies that call to mind Arthur C. Clarke's observation that “any sufficiently advanced technology is indistinguishable from magic.”
This book is about the global use of these technologies and their effects on society. Some of these effects are beneficial in enhancing human communication and understanding. Others are less benign as they encourage increasingly sedentary lifestyles and technological dependence. The story of how information and communication technologies evolved into those we use daily is a fascinating one and forms a significant part of this book. The contemplation of the future of these technologies as we augment our personal and collective intelligence is a compelling topic that we will examine in these chapters. My hope is that the exploration of these themes will encourage you to think critically about the technologies that you use today and how they might enhance or detract from human life in the future.
Acknowledgments
This book would not have been completed without the ongoing support of Elizabeth Swayze, my editor at Wiley-Blackwell. She believed in the importance of the topic and provided ongoing encouragement despite several missed deadlines. Boston project editors Julia Kirk and Allison Kostka advised on image rights and permissions and kept the pressure on to acquire the needed clearances. The text was insightfully edited by Janet Moth in the United Kingdom – her suggested revisions were always an improvement.
Useful comments on the text were provided by my longtime friend and co-author on previous book projects, Dr. August E. “Augie” Grant of the University of South Carolina. Paul Saffo and Helayne Waldman provided early guidance for the project. Amy Reitz and Carol Anderson-Reinhardt assisted in editing chapters and their feedback is gratefully appreciated. Nicole Brush translated the correspondence in French with the Mundaneum staff in Mons, Belgium, concerning the photos of Paul Otlet. Professor Don Zimmerman of Colorado State University was helpful in providing support from CSU's Center for Research on Writing and Communication Technologies. Johannah Racz provided an excellent index. I would also like to thank the Public Communication and Technology graduate students enrolled in my Telecommunication seminar for their insightful comments on the text – especially Lisa Gumerman and Rachel Timmons.
Assistance in locating photographs for inclusion in the book was provided by Marianne Heilig for her father's photos, George Despres at MITRE, Lauren Skrabala at RAND, Angela Alvaro at Banco de España, Leonard Kleinrock at UCLA, Dina Basin at SRI, Christine Engelbart and Mary Coppernoll at the Doug Engelbart Institute, Jayne Burke at NYU, Jan Walker at DARPA, Eric Mankin and Claude Zachary at USC, Shana Darnell at CNN, Sophie Tesauri at CERN, and by photographers Patrick Troud-Chastenet, Irene Fertik, and Gary MacFadden. Peter J. Seel assisted with the simulated texting-while-driving photo. Many images were provided by photographers via the Creative Commons, and this has become a helpful resource for authors and educators worldwide.
Moral support in the long march to the completion of this book was provided by my family, my sister Deborah Ungerleider, and friends Kevin Nolan, Cindy Christen, and Ken Berry. There have been many helping hands in a project of this magnitude and duration – thank you all.
Some Key Terms
AI – artificial intelligence
AR – augmented reality
BBS – Bulletin Board System
CAD – computer-aided design
CBT – computer-based training
CPU – central processing unit
CRT – cathode ray tube
DBS – direct broadcast satellite
DDoS – distributed denial-of-service
DNS – Domain Name System
GUI – graphical user interface
HCI – human–computer interaction/interface
HMD – head-mounted display
IC – integrated circuit
ICT – information and communication technology
IMP – Interface Message Processor
IP – Internet Protocol
ISPs – Internet Service Providers
LAN – local area network
MRAM – magnetoresistive random-access memory
NCP – Network Control Protocol
OS – operating system
P2P – peer-to-peer
TCP – Transmission Control Protocol
TIPs – Terminal Interface Processors
UDC – Universal Decimal Classification
UGC – user-generated content
VoIP – Voice over Internet Protocol
VR – virtual reality
WAN – wide-area network
Part I
Introduction and Framing
Chapter 1
The Digital Universe: A “Quick-Start” Introduction
Consumer electronic products have become so complex and feature-rich that it is now commonplace to find a brief “quick-start” guide or poster that accompanies the 100+-page manual for a new digital television set, personal computer, or mobile phone. Manufacturers understand that impatient consumers (and that includes most of us) generally skip reading the manual first – that is, until a non-intuitive feature stumps the user. Then we are likely to call the helpline instead of referring to the manual, much to the exasperation of call center agents around the world. On a positive note, quick-start guides provide enough basic information so that we can successfully install the software or power up the device and quickly begin using it.
This brief introduction serves as the “quick-start” guide for this book, which is not a manual or a how-to text for functioning in our digital world. Rather, this book provides a tour of the digital universe, tracing the evolution of the age of information from its inception to the crucial period in which we live today.1 Digital universe is a term that describes a global human environment saturated with intelligent devices (increasingly, wireless ones) that enhance our ability to collect, process, and distribute information. A key purpose of the book is to stimulate readers to think critically about the pervasiveness of information and communication technologies (ICT) in contemporary societies and how they affect our daily lives. The digital universe that we inhabit is complex and becoming more so as technology evolves and becomes more ubiquitous. “Ubiquity” is a key term that will be used frequently throughout the book – it means to be present in every place, or “omnipresent.” It is often used as part of the commonly cited term “ubiquitous computing,” which describes an environment where computers and intelligent devices are omnipresent – a fair description of the future of the human environment in societies around the world.
We live in an interesting period in human evolution due to the diffusion of information and communication technologies. The future of machine-assisted communication and related developments in information-processing and artificial intelligence hold great promise for – as well as potential hazards to – human well-being. Information technologies play a central role in when, where, and how we communicate with each other, and their centrality will increase in the future. These technologies are now pervasive in our lives at work and at home, and have blurred the boundaries between these locations to the point where they are often indistinguishable. Digital citizens are connected and “linked in” 24 hours a day – seven days a week. Lewis Mumford made the observation that any widely adopted technology tends to become “invisible” – not in a literal way, but rather in a figurative sense.2 Television and computer displays have become so ubiquitous that we don't think twice about seeing them in classrooms, airports, taverns, and certainly in the workplace. At times on a university campus it appears that everyone has a mobile phone and is busy either texting a friend or talking with them. This would have been a remarkable sight in 1995, but today it is so commonplace that few notice. We are surrounded by telematic devices to a degree that would have been unimaginable in the 20th century, and they will become even more pervasive as they become more powerful and useful in the 21st.3
My hope is that in the process of reading this book you will become a more critical observer of the social use of ICTs, that you will assess the positive and negative consequences of using them, and that you will gain new perspectives in the process that will add richness and depth to your knowledge of human communication and intelligence.
Three Types of Digital Literacy
Stuart Selber provides a useful model for computer literacy that we might apply to our study of the digital universe. He defines three distinct types of literacy (see Table 1.1).4 First, people in the teleconnected world should have a functional literacy with computers and software as tools to be used in daily life. In the journalism department at the university where I teach computer-mediated communication, we devote extensive time (and expensive hardware, along with constantly updated software) to teaching prospective journalists and communicators how to use these digital tools. In fact, much of what we term computer education around the world is focused on teaching hardware and software usage. However, Selber makes the astute observation that this type of education provides only one aspect of the literacy that humans need to function in a world filled with digital technologies; digital citizens should also be critically and rhetorically literate.
Table 1.1 Three types of computer literacy.
Becoming Critically Literate
The second category in Selber's model is critical literacy. It assumes the social embeddedness of technology in all networked global societies and highlights the cultural, economic, and political implications of its use. Critically literate users are “questioners of technology” and its applications, and they examine both the positive and negative implications of technology adoption. This is a key theme in this book and an essential aspect of becoming an educated user of technology.
Positive affirmations of information and communication technologies are omnipresent. Hardware manufacturers, software producers, consumer electronics retailers, and the marketing infrastructure that promotes these products and services all ensure that we are aware of their positive attributes. When an innovative information or communication technology is introduced, the advantages are widely touted as part of the marketing campaign. The attributes are often focused on improving the speed of telecommunication, making an information-processing task more efficient, or a combination of these two factors. As consumers adopt these products, the negative consequences are often slow to emerge.
Selber's critical-cultural perspectives of ICT are focused on the examination of hegemonic power relations in society. These perspectives are significant, especially in terms of studying the ramifications of the digital divides that exist between those who have access to information and those who do not. Economic and political perspectives are also useful in studying technology standardization decisions, among other key policy issues. However, I encourage readers to expand their critical perspectives beyond the economic and political to examine fundamental issues of human communication and its automation. For example, how does the mediation of communication (putting a machine in the middle) affect human expression and discourse? Are humans losing a key aspect of the oral communication tradition valued by scholars such as Harold Innis – or has it been repurposed by the mobile phone and the video camcorder? How have communication technologies affected human storytelling traditions and the stories we tell? The critical component of digital literacy is thus focused on the social effects of the use of information and communication technology. It is a rich field of study that encompasses consumer behavior, human psychology, political science, language, philosophy, economics, and human–computer interaction. Some of the most interesting questions about the human use of technology are investigated by social and computer scientists in these fields. In this text we examine the perspectives of critical observers of technology including Harold Innis, Lewis Mumford, Jacques Ellul, Marshall McLuhan, and Bill McKibben.
One of the more perceptive critics of the social use of technology is the late Dr. Neil Postman, a New York University professor, semanticist, and widely read social critic. Postman is the author of Technopoly, an insightful critique of the role that technology plays in advanced information societies.5 His critical perspectives will be addressed in subsequent chapters, but a few key points are relevant here. For Postman and his critical colleagues, knowledge of the history of the development of technology is essential. One cannot predict the future development trajectory of any information or communication technology without understanding its evolution to the present. The history of computing technology is filled with fascinating stories of how “computers” evolved over time from what used to be a human profession to chips found in billions of intelligent devices. While this text is not a comprehensive history of the evolution of ICT from the telegraph to the present day, I have provided the necessary background to comprehend the social context of these technologies and their effects. Fittingly, studying the history of the evolution of information and communication technology is an inherently humanistic endeavor. Stories about the development of telegraphy, telephony, television, and the Internet are fundamentally about human creativity, altruism, greed, and ambition. This historical background is presented as needed, in a non-linear fashion much like the one you are familiar with from locating information online.
Rhetorical Literacy
The third type of digital literacy referenced by Selber is rhetorical literacy. In this context digital technologies are conduits for “hypertextual media” and individuals are viewed as “producers of technology.” This viewpoint describes the world of what is termed Web 2.0 today and Web 3.0 of the near future. We take the power of hypertext and hypermedia for granted in a world where they are found in all online environments. The ability to seamlessly and easily link related content online has transformed the human processing and distribution of information.
The concept of linking information and building webs of knowledge was espoused by Belgian bibliographer Paul Otlet and integrated into his Mundaneum project in Brussels in the early 20th century.6 Additional detail is provided about Otlet and his ideas in Chapter 6; however, an introduction is appropriate in the context of rhetorical literacy. Otlet's vision was to create a massive catalog of all human knowledge and creative work and then provide access to it using electrical communication. An inquiry from a user on any topic would be directed to the Mundaneum in Brussels by telegraph or telephone, where the staff would access millions of index cards (much like a library card catalog of that era) to locate the answer. The return response to the requester was communicated by telegraph or telephone. Otlet's dream in the 1930s was to use a then-new technology known as television to relay the information (with related visuals) back to the requester. His visionary scheme exists online today in the form of Wikipedia, Google, and the Web.
Vannevar Bush in 1945 expanded on Otlet's Mundaneum concept with an idea for an electromechanical system for linking information (both textual and visual) in his Memex.7 The Memex would have recorded and stored information on the then-new medium of microfilm, but the unique concept in Bush's device was a system of switches that would record information about the linkages made between various forms of related content. He termed these linkages “associative trails,” and the concept was a harbinger of what is known today as online hypertext. The flaw in Bush's concept was the lack of a universal cataloging system similar to Otlet's that would allow random access to the information sought. Bush's “As We May Think” article in Atlantic Monthly and Life magazines was very influential in shaping the information-access dreams of a generation of computer scientists in the mid-20th century.8
Among them was information scientist Ted Nelson, who coined the term “hypertext” in the 1960s as a means of describing “branching and responding” textual links between related information.9 As part of his Project Xanadu10 to make all human information accessible to all on Earth, he also described “hypermedia,” which is related content not constrained to be text, or what we know at present as “multimedia.” In the early 1990s Tim Berners-Lee used the fundamental concepts of hypertext and hypermedia to construct his “Mesh” system of linked documents that evolved into the World Wide Web.11
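The linking concepts traced above – from Bush's associative trails to Nelson's branching hypertext to Berners-Lee's Mesh – share a simple underlying structure: a directed graph of documents, where following a link is a graph traversal. The sketch below is purely illustrative (the document names are invented, not drawn from the text):

```python
# Hypertext modeled as a directed graph: each document maps to the
# documents it links to. Following links is a graph traversal.
links = {
    "memex.html": ["hypertext.html"],
    "hypertext.html": ["xanadu.html", "mesh.html"],
    "xanadu.html": ["hypertext.html"],  # links may form cycles
    "mesh.html": [],
}

def reachable(start: str) -> set[str]:
    """Return every document reachable from `start` by following links."""
    seen, stack = set(), [start]
    while stack:
        page = stack.pop()
        if page not in seen:
            seen.add(page)
            stack.extend(links.get(page, []))
    return seen

print(sorted(reachable("memex.html")))
# ['hypertext.html', 'memex.html', 'mesh.html', 'xanadu.html']
```

Note that the traversal must remember which documents it has already seen, because hypertext links (unlike a book's linear page order) routinely loop back on themselves.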
In the era of Web 2.0, citizens of the digital universe are not just passive downloaders of digital online media, but increasingly are active producers of new content. This video, text, music, art, and sound content may be digitized and uploaded to the Web as linked hypermedia. The creation and communication of user-generated content (UGC) online has transformed a digital universe dominated by computer scientists and highly specialized Web developers into a global society where anyone can publish anything – at least, anything that governments will allow.
Cybernetics
Another key aspect of digital literacy is deciphering the source of key terms related to information and communication technology. The archaic meaning of “communication” was to literally hand a message from person to person, as would a messenger in ancient Greece. One might think that “broadcasting” applies only to radio and television, when its etymology is derived from an agrarian term meaning “to sow.” Before the invention of mechanical planting machines, farmers would walk through their fields and “broadcast” the seeds for a new crop by scattering them by hand. Today electronic messages are “scattered” through society through the air by phone, radio, and television and via fiber-optic cables on land and under the sea.
A key term for the digitally literate is “cybernetic.” It is derived from the Greek term “kybernetes,” meaning a pilot, steersman, or governor.12 The modern derivation is that cybernetics involves feedback mechanisms providing command-and-control functions in closed systems. Cybernetic perspectives assist in understanding complex systems that include circular causal chains that make up feedback loops that regulate the functioning of a system. The study of cybernetics applies to many diverse disciplines, but the focus in this text is on its relevance to information and communication systems.
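The feedback-loop idea can be made concrete with a minimal sketch – a hypothetical thermostat, not an example from the text, with invented heating and heat-loss figures. The controller's output (heater on or off) feeds back into its next measurement, steering the system toward a goal, just as the Greek kybernetes steered a ship:

```python
# A thermostat as a minimal cybernetic system: a closed loop in which
# the controller's action circles back to influence its next input.

def heater_on(temperature: float, setpoint: float) -> bool:
    """Negative feedback: heat only when below the setpoint."""
    return temperature < setpoint

def simulate(hours: int, setpoint: float = 20.0, start: float = 15.0) -> list[float]:
    """Run the closed loop: heating raises the temperature, losses lower it."""
    temperature = start
    history = []
    for _ in range(hours):
        if heater_on(temperature, setpoint):
            temperature += 1.5  # heat added this hour (illustrative value)
        temperature -= 0.5      # heat lost to the outside (illustrative value)
        history.append(temperature)
    return history

readings = simulate(24)
# The loop settles near the 20-degree setpoint rather than rising without bound.
```

Remove the feedback (run the heater unconditionally) and the temperature climbs forever; it is the circular causal chain – measure, act, measure again – that regulates the system.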
The root “cyber” has been embedded into many commonly used terms involving ICTs, such as “cyberspace” (e.g., the digital universe), “cyberpunk” as a style of postmodern literature, and “cyborg” to describe a bionic blend of human and machine. Cybernetics should not be construed as applying only to machine-based systems. All humans rely on cybernetic feedback loops in our bodies to manage vital functions such as respiration and blood circulation – and especially for communication with others.
We learn how to acquire new digital knowledge and skills through elaborate feedback loops with friends and family and with formal instruction. You try your hand at taking digital photos and then sharing them online with friends. You receive useful feedback about your photographs and modify your image acquisition and processing skills accordingly. In a Web 2.0 universe the feedback loops may be immediate and personal (“I don't like my picture taken at the party last weekend – please delete it”) or distant and more impersonal (bidding on a digital camera on eBay). These interactive mechanisms are at the heart of related Web 2.0 technologies such as Wikipedia. With social networking and other Web 2.0 tools you can expand your feedback options and use them to acquire new knowledge and skills, especially those concerned with new telecommunication technologies. This text provides the background needed to understand the evolution of these technologies and then encourages you to think critically about how they affect human life today and in the future.
Navigating this Text
This book is divided into five main sections:
Part I. Introduction and Framing – Chapters 1, 2, 3
Part II. Internet and Web History – Chapters 4, 5, 6
Part III. Telecommunication and Media Convergence – Chapters 7 and 8
Part IV. Internet Control, Cyberculture, and Dystopian Views – Chapters 9, 10, 11
Part V. New Communication Technologies and the Future – Chapters 12, 13, 14
As noted above, this text is written for non-linear access so chapters can be read in random order if desired. However, it is probably best to read the Moore's law and critical perspectives chapters (2 and 3) first, since key concepts introduced there are elaborated upon in subsequent chapters. Also, the history chapters (4–6) will be more coherent if read in sequential order.
Chapter 2 defines Moore's law and explains its centrality to technologies in the digital universe. Its implications for telecommunication, ubiquitous computing, and intelligent devices are examined in the context of their effects on daily life. The chapter concludes with thoughts on the sustainability of Moore's law in this century. Chapter 3 provides the critical analysis of the digital universe that was alluded to in Selber's literacy model. The perspectives of critics of technology such as Jacques Ellul and Neil Postman are examined in regard to their application to information and communication technologies. The pro-social and pathological effects of living in the age of information are discussed – with an emphasis on the role that speed and efficiency play in the adoption of new communication technologies.
Part II is focused on the creation of the Internet and the World Wide Web. Chapter 4 reviews the origins of the Internet in the Cold War and the influential role that computer scientist J. C. R. Licklider played in its development. The central role of the US Department of Defense in creating the Advanced Research Projects Agency (ARPA) and its ARPANET highlights the controversy over the motivation for developing the first nationwide data network. Chapter 5 analyzes the evolution of the ARPANET into the Internet between 1980 and 1990. The contributions of key innovators such as Vinton Cerf and Robert Kahn (developing TCP/IP and other key network protocols), Ted Nelson (the concept of hypertext as a linking tool), and Doug Engelbart (creating interface technologies) are discussed in the context of the creation of the global Internet. Chapter 6 introduces Paul Otlet and his creation of the Mundaneum in Belgium between 1910 and 1934 – a precursor of the World Wide Web 60 years before its creation. The role of Tim Berners-Lee is examined in his conceptualization of the merger of hypertext, TCP/IP, and a domain name system into a universal document accession system he called “Mesh” (and the world now knows as the Web). The chapter concludes with an analysis of what we call Web 2.0 and how it might evolve in the coming decade into Web 3.0.
Part III begins with Chapter 7 and a review of the development of telegraphic communication systems in Europe and North America and their linkage via undersea cables. These quickly spanned the globe and led to the concept of a “wired world.” As copper wires have been replaced by fiber-optic cables over the past 20 years, these often overlooked connections have made the global Internet possible. The “flat world” described by Thomas Friedman is defined by these connections and by the ways they facilitate the role of telecommunication in outsourcing digital work and in the creation of global teams by public and private organizations. Chapter 8 focuses on digital convergence in the shift from analog to digital media. The benefits of media convergence are examined, along with its negative effects on existing media such as newspapers and radio and television broadcasting.
Part IV begins with Chapter 9, on the battles over public and private control of the Internet. The role of e-commerce is studied in the context of this struggle for control over the past 20 years. In Chapter 10 we examine global cyberculture and the role of digital telecommunication in fostering this new culture. The perspectives of media critic Marshall McLuhan are examined in light of what he called the electronically connected “global village.” Digital divide issues are studied in terms of disparities in access to these digital services in various parts of the world. The emergence of global social networks is an outgrowth of the bonds formed by early pioneers on the Internet that transcended space and time. However, there are attempts by some governments to limit free access to the Internet, and these are examined in the context of national priorities that promote censorship and the construction of intentional barriers to the free flow of information. Chapter 11 deals with the “dark side” of the Internet. It examines online privacy issues and the threats to personal privacy and data security posed by hackers, viruses, and Web-bots. It concludes with an outline of several simple steps that we can take to protect our privacy online and shield personal information from unwanted disclosure.
The final section – Part V – is focused on the evolution of new telecommunication and digital technologies that will affect global societies in coming decades. Chapter 12 examines the blended universe of wired and wireless communication technologies. Television has morphed from a wireless broadcast technology to a wired one via cable services, with online content now streamed over the Internet using IPTV. Mobile phones, in turn, have become portable television viewers, with content streamed live from the Internet or accessed wirelessly from local broadcasters, and they provide an always-on, always-accessible means of staying in contact with family and friends. The mobility of these services means that there will be no “away” from ICTs, and the chapter analyzes the social ramifications of being continuously connected. Chapter 13 explores the creation of virtual worlds that humans can inhabit through participation in online games. Computer games have come of age in the past two decades and have achieved a remarkable level of realism that makes active participation compelling. The chapter will also examine new applications of immersive “augmented realities” that superimpose computer-generated images over related scenes in the material world.
The book concludes with Chapter 14, which provides several perspectives on the future of the digital universe. The immediate future is bright as Moore's law drives down the cost of digital tools while greatly improving their power and our access to them. As the digital divide shrinks, more humans will have access to these tools to connect and work with others. Some future ICT scenarios are utopian – that humans will co-evolve with technology and adopt the best aspects of machine intelligence and memory. Others are dystopian – that machine intelligence will eventually surpass that of humans and our role in the future may be that of maintenance staff for the cybernetic world. The reality will likely be somewhere between these polar visions. Why spend time thinking about these futures? Each of you will spend your lifetime living there, so giving some critical thought to these scenarios may be instructive. I hope you enjoy this journey through the digital universe as a virtual road map for connected life in the decades ahead.
Notes
1. By “we,” I am referring to citizens of the planet Earth who use information and communication technologies. This would include most of the 90 percent of the world's population that will have mobile phone access (but not necessarily possess one) by 2020.
2. L. Mumford, Technics and Civilization (New York: Harcourt, 1934).
3. “Telematics” is another term used to describe information and communication technologies.
4. S. A. Selber, Multiliteracies for a Digital Age (Carbondale, IL: Southern Illinois University Press, 2004).
5. N. Postman, Technopoly: The Surrender of Culture to Technology (New York: Vintage, 1992). Neil Postman died in 2003 at the age of 72, a loss to his community at New York University and to all who value his perceptive contributions to education, the study of semantics, and critical views of technology.
6. P. Otlet, International Organisation and Dissemination of Knowledge: Selected Essays of Paul Otlet, ed. W. B. Rayward (London: Elsevier, 1990).
7. V. Bush, “As We May Think,” Atlantic Monthly (July 1945), 101–8. “Memex” is a portmanteau of memory and index.
8. Ibid. The article was republished with illustrations in the September 10, 1945 issue of Life magazine.
9. T. H. Nelson, Literary Machines: The Report on, and of, Project Xanadu Concerning Word Processing, Electronic Publishing, Hypertext, Thinkertoys, Tomorrow's Intellectual Revolution, and Certain Other Topics Including Knowledge, Education and Freedom (Sausalito, CA: Mindful Press, 1981).
10. Ibid.
11. T. Berners-Lee, Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web by its Inventor (New York: HarperOne, 1999).
12. The Greek word kybernan is also the source of the English word “govern.”
Chapter 2
Thinking About Moore's Law
Figure 2.1 Gordon Moore in front of a projected and greatly enlarged silicon wafer containing many integrated circuits. “Moore once computed that if the automobile industry followed a similar doubling pattern as ICs [integrated circuits], cars today would get 100,000 miles per gallon, travel at speeds of millions of miles per hour, and be so inexpensive that it would cost less to buy a Rolls-Royce than to park it downtown for a day. However, a friend pointed out that the car ‘would only be a half-inch long and a quarter-inch high,’ and not very useful at those dimensions.” (Michael Kanellos, 2005) Photo: Copyright © 2005 Intel Corporation.
Few phenomena in the digital universe have had such a profound effect on information and communication technology as Moore's law. It can be stated succinctly in two different ways:
“Transistor density on integrated circuits doubles about every two years.” – The Intel Corporation1

“The size of each transistor on an integrated circuit chip will be reduced by 50 percent every twenty-four months.” – Raymond Kurzweil2

The doubling of computer central processing unit (CPU) speed and storage capacity every two years since 1958 has dramatically affected every type of digital technology. This doubling represents an exponential growth rate in computing and storage capacity that is astonishing for its longevity over half a century (see Figure 2.2). Consider any device that you use daily that has a digital processor or storage chip in it – a mobile phone, portable music player, digital camera, tablet computer, television set, or any other device that can process or store digital information. The simultaneous miniaturization and exponential expansion of the processing power of these chips make it possible for a mobile phone to include a music player, Internet browser, video camera, and GPS location finder. Next time you use your mobile phone, consider its power as an information-processing device and think about the reaction of Alexander Graham Bell if he could see it demonstrated.
Figure 2.2 Moore's law holding true. Source: Moore's law diagram by Wgsimon.
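The compounding at work here is easy to underestimate, but simple to compute. As a rough illustration – the starting chip and its transistor count below are commonly cited figures chosen for the example, not drawn from this text – a short function can project the effect of one doubling every two years:

```python
def transistors(start_count, start_year, year, doubling_years=2):
    """Project a transistor count forward under Moore's-law-style doubling."""
    doublings = (year - start_year) / doubling_years
    return start_count * 2 ** doublings

# Illustrative starting point: Intel's 4004 microprocessor (1971) is commonly
# credited with roughly 2,300 transistors. Doubling every two years for four
# decades projects a chip with transistors in the billions:
print(f"{transistors(2_300, 1971, 2011):,.0f}")  # 2,411,724,800
```

Forty years is just 20 doublings, yet the projection grows by a factor of about a million – the shape of the curve in Figure 2.2.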
Dr. Yale Patt, a computer scientist at the University of Texas at Austin, addresses Moore's law in his lectures by asking the following question of the audience:
What is Moore's law about?
(a) Physics?
(b) Computer process technology?
(c) Computer micro-architecture?
(d) Psychology?
The correct answer, according to Dr. Patt, is (d) – Psychology.3 His thesis is that Moore's law has become a self-fulfilling prophecy. Designers of integrated circuits (and their managers) at Intel, Hitachi, AMD, and other chip manufacturers have psychologically adapted to the expectation that there will be a new generation of chips every 18–24 months with double the capacity of previous versions. If Intel does not deliver new chips with this improved capacity, its executives know that AMD or other competitors will.
As a pattern of growth, few natural systems can sustain exponential doubling for long; resource constraints, environmental degradation, or other natural limits inhibit growth. These limitations have long led critics to suggest that Moore's law was unsustainable, but they overlooked the fact that it describes a human-created technology based on the fundamental properties of silicon, not an organic phenomenon. The technological implications of Moore's law are not all sweetness and light. The computer you bought two years ago is now worth less than half of what you paid for it – assuming that you can find anyone in your community who would want to buy it. The only real options are to donate it or recycle the components. There is a planned obsolescence associated with Moore's law that is very good news for chip makers, computer manufacturers, and software producers – and not such great news for consumers. We will return to this aspect of Moore's law below, but it is worth contemplating in terms of the critical literacy discussed in the introductory chapter.
The Prediction
In 1965, then-Fairchild Semiconductor executive Gordon Moore published a short article in the April issue of Electronics magazine entitled “Cramming more components onto integrated circuits.”4 In the article, Moore predicted that within a decade (by 1975), evolving silicon chip technology would permit the fabrication of integrated circuits (ICs) with 65,000 components (transistors) on a single chip. Given the state of IC manufacturing in 1965, his then-startling prediction implied that the number of transistors on a chip would double each year in the decade between 1965 and 1975. Moore included a graph (see Figure 2.3) with a logarithmic scale demonstrating this doubling of components on a chip from 1962 to 1965, and then extended this plot into the future. I have reversed the X and Y scales in the version in Figure 2.3 (with time on the Y scale) for the sake of clarity. Note that this calculation was based on just four confirmed data points (1962 to 1965), and was quite a bold prognostication given the predicted doubling of components at yearly intervals. Yet Moore's prediction of this remarkable technological feat proved prescient, even if the doubling intervals turned out to be closer to 18 to 24 months.
Figure 2.3 Moore's law re-plotted. Source: Modified by the author after original in Electronics, 38/8 (April 19, 1965).
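The arithmetic behind the 1965 extrapolation is worth a quick check. Assuming, for illustration, a chip of roughly 64 components in 1965 (a round number consistent with the yearly doubling Moore plotted from 1962 to 1965, not a figure stated in the text), ten yearly doublings land almost exactly on his predicted value:

```python
# Ten yearly doublings from an assumed 1965 chip of roughly 64 components:
components_1975 = 64 * 2 ** (1975 - 1965)
print(components_1975)  # 65536 – close to the 65,000 Moore predicted
```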
Three years later, Moore left his position as director of the research and development laboratories at Fairchild to start a new company with partners Robert Noyce and Andrew Grove. Its name was short and memorable – the Intel Corporation. In 1975, Moore revised the time frame for chip evolution from one-year intervals to two years in a speech he gave to the Institute of Electrical and Electronics Engineers (IEEE).5 For several decades Moore modestly declined the honor of having the law named after him, attributing the name to California Institute of Technology computer scientist Carver Mead.6 The law became the standard point of reference for the exponential growth in the power of integrated circuits over the following 40 years. The doubling phenomenon also applies to memory chips such as those in flash drives,7 and has proved accurate for the microprocessors that are at the heart of all personal computers. Computer users understand and appreciate the improvements in processing speed in CPU chips, especially those developed in the last two decades. Other uses of IC technology are less obvious. Today's automobiles, for example, have a number of computer chips that govern critical functions such as fuel injection, safety features, and electronics that can sync to a mobile phone for hands-free use. Many models include a wireless keyless entry and ignition system that uses digital technology to allow access to the vehicle and the ability to start it. In high-theft urban areas this is an important feature to consumers, despite the added cost. If your vehicle is stolen, electronic devices hidden inside can enable police to track and recover your car. At the time of Moore's prediction, this technology was imaginable only in James Bond films.
Implications for Computing and the Digital Universe
Computer scientists commonly refer to “ubiquitous computing” to describe a world that is filled with “intelligent” devices. The increase in integrated circuit speed and power, combined with the dramatic drop in price per transistor, has made it possible to embed powerful chips in almost every device or tool that uses electricity. These embedded devices make it possible to add a remarkable variety of intelligent functions to what were previously “dumb” tools and appliances. The telephone is an ideal example. What was previously a very simple device that could be used intuitively by raising the handset to one's ear and then dialing the number with a rotary wheel or a keypad is now a much more complex instrument. My camera-equipped, quad-mode mobile phone, which is also a digital video player, came with a 79-page instruction book. In the future, mobile phone users may have to take a short course in phone feature programming to learn how to use all the functions built into their mobile phone/computer/camera, not to mention the thousands of downloadable apps available.
There was a time when a person could walk into someone's home that they had never visited before and easily make a phone call, turn on the television, or perform a simple task such as boiling a kettle of water. We are confronted today by appliances with astonishing capabilities and with equally complex operational learning curves. I would like to suggest a term that describes this trend in the evolutionary design of previously simple-to-operate appliances – “complexification.” The future will see greater applications of artificial intelligence (AI) in product design to ease the stress on users, but as the cliché states, “there is a great future for complexity.” The challenge for engineers and product designers in coming decades will be to create devices that have great functional power, but are also easy to operate.
The implications of Moore's law for citizens of nations that use advanced digital technology will be significant in the future. Since Internet access is available to 25 percent of the global population of over seven billion, this includes a significant portion of humanity.8 Chip performance will increase while device prices will continue to fall. Storage of digital content on chips is now so cheap that electronic devices can have enormous storage capacity, especially phones and cameras. Chips will be embedded in a wide range of products that will have remarkable levels of intelligence. The complexification of the telematic world will increase at a steady pace, with happy consumers if these devices are easy to use and maintain, and not so pleased if they are not.
In addition to complexification, concern over the diminishment of privacy in this digital universe will become a significant issue in many nations of the world. With cameras embedded in every mobile phone and surveillance systems observing almost every commercial transaction, there are already well-publicized concerns about the negative effect on personal privacy. Many health clubs in the United States have banned mobile phones after publicity about cases where less-than-scrupulous club members took photos in locker rooms and then distributed them online. We will examine these and related digital privacy issues in Chapter 11.
Technological Determinism
Technological determinism is the view that a society's technology determines its history, social structure, and cultural values. The term is often used pejoratively, to criticize as overly “reductionist” those who credit technology as a central governing force behind social and cultural change. Author Thomas Friedman, in his book The World is Flat (2005), freely admits to being a technological determinist, stating that “capabilities create intentions” in regard to the role that technology plays in shaping how we live.9 Examples he cites are the Internet facilitating global e-commerce, and work-flow technologies (and the Internet) making possible the off-shoring and outsourcing of disaggregated tasks around the world. Friedman states:
The history of economic development teaches this over and over: If you can do it, you must do it, otherwise your competitors will [and] . . . there is a whole new universe of things that companies, countries, and individuals can and must do to thrive in a flat world.10
It is rare to find an observer of modern life willing to go on record in this regard, and I commend Friedman's courage in doing so. His perspective is worth our critical consideration. While it is clear that a wide range of factors influence social change, including culture, economics, and politics, among many others, Friedman advances technology to a privileged position due to its ubiquity in contemporary life, and he is correct in his assessment that “capabilities create intentions.” The development of the MP3 compression format for music files makes a good case study. When recorded music was only available on vinyl records, there were few options available for copying songs. As technology evolved, one could make a cassette tape of a record, but the copy was of poor quality and one had to fast forward and rewind the tape to find a desired song. Once digital technology appeared with the advent of music on compact discs, users could “rip” individual songs onto a computer's hard drive as digital files.
Copyright holders such as record companies weren't immediately concerned, since users had to buy the CD to copy the music. However, with the rapid spread of the MP3 file format,11 users of this technology developed large libraries of songs. It wasn't long until a company, Napster, developed a unique technology that let users of its service copy music files to their own computers from another user who had the desired songs. Then another user could copy them in turn, and so on. By the time the recording industry sued to shut down Napster and similar services, the genie – and the music – were out of the bottle. Without the widespread adoption of the MP3 digital file format and the development of successful peer-to-peer (P2P) file-sharing technology, music piracy would not have been as simple and easy to accomplish.
The legal system and related government legislation are almost universally reactive to technological innovation. Digital technology industries develop innovations at speeds linked to Moore's law, and the legal system struggles unsuccessfully to keep pace. Despite court decisions that shut down Napster (until it adopted a fee-for-music model) and similar P2P services, US music industry sales peaked at $14.5 billion in 1999 and declined to $10 billion in 2008.12 Paid digital downloads have increased since 2005, but part of the overall reduction in revenue to the US music industry can be attributed to continued widespread file-sharing by music fans. Another trend that is negatively affecting music sales is the streaming of music on Internet sites such as Pandora, Spotify, and Imeem.13 Why buy music if you can listen to hundreds of diverse genres online for free?
Despite the technology-driven patterns in IC manufacturing and music piracy, there are problems with the perspective that technology itself determines adoption. The primary concern with adopting a worldview that a society's technology “determines its cultural values, social structure or history” is that it is inherently reductionistic. Some social scientists would argue that the determinism arrow should flow in the opposite direction – that cultural values, social structures, economics, and history determine which technologies are created and adopted. This view, while more comprehensive, fails to give sufficient weight to the unforeseen consequences of the diffusion of new technologies. These technologies are not created in a social vacuum – many are only introduced after years of research and development driven by detailed economic analysis of potential markets. The complication arises from the unintended consequences of the use of the new tool, product, or service. The irony is that, short of the unlikely near-term development of time travel, we cannot know what these unforeseen consequences might be. Nanotechnology, one of the key technologies that are facilitating the creation of ever more powerful CPUs on a chip, has raised questions about its safety when combined with dramatic advances in genetic engineering and biotechnology.14 We'll analyze these concerns in Chapter 14 on the future of the digital universe.
The Rise of Nanotechnology and the Future of Moore's Law
What is the future of Moore's law? How much longer can it be sustained in the face of the fundamental laws of physics? Many scientists have predicted the imminent death of Moore's law over the past 20 years, stating that there are fundamental physical limitations to how many small circuits can be compressed on a chip before current leakage (and related heat build-up) cause it to fail to function as designed. Gordon Moore acknowledged these limitations in 2005:
In terms of size [of transistor] you can see that we're approaching the size of atoms which is a fundamental barrier, but it'll be two or three generations before we get that far – but that's as far out as we've ever been able to see. We have another 10 to 20 years before we reach a fundamental limit. By then they'll be able to make bigger chips and have transistor budgets in the billions.15
For now, the development of nanotechnology has extended the life of Moore's law by developing methods for the creation of ever-smaller circuits. Nanotechnology is the design and production of devices (and systems) at a scale that strains human comprehension. Dimensions are measured in nanometers – eight to ten atoms equal one nanometer. At this scale, a human hair is about 70,000 to 80,000 nanometers in width. The National Nanotechnology Initiative in the US defines it as follows: “Nanotechnology is the understanding and control of matter at dimensions of roughly 1 to 100 nanometers, where unique phenomena enable novel applications.”16
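To get a feel for these dimensions, a rough back-of-the-envelope conversion (using midpoints of the ranges quoted above; these are illustrative round numbers, not measurements) shows how many atoms span the width of a human hair:

```python
# Midpoints of the ranges given in the text:
atoms_per_nm = 9        # "eight to ten atoms equal one nanometer"
hair_width_nm = 75_000  # a hair is ~70,000 to 80,000 nanometers wide
print(f"A human hair spans roughly {atoms_per_nm * hair_width_nm:,} atoms")
```

On that estimate, a hair is roughly two-thirds of a million atoms across, while a 100-nanometer circuit feature spans only about 900 atoms – the scale at which chip designers now work.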
Nanotechnology and creative electrical engineering have enabled the fabrication of ever-smaller electronic circuits. Early in 2007, chip manufacturer Intel announced that it had succeeded in developing an innovative type of integrated circuit that used new metallic alloys to create extremely small circuits on a chip.17
