For the last 50 years, the power of integrated circuits has continued to grow. Sooner or later, however, this performance will reach its physical limit. What new ways will then be available to develop ever more powerful systems? This book introduces the principles of quantum computing, the use of nanotubes in molecular transistors and DNA computing. It suggests new fabrication methods for the 21st century and introduces new architecture models, ranging from the most conventional to the most radical. Following a chronological thread, it explains our unavoidable entry into the world of nanodevices: from the transistor of 1948 to the microchip. It concludes by anticipating the changes in daily life: investments, the impact on coding activities, the implementation of nanocomputing systems and the mutation of IT jobs.
Page count: 520
Publication year: 2013
Foreword
Preface
Acknowledgements
Introduction
Chapter 1. Revolution or Continuity?
1.1. Ubiquity and pervasion
1.2. From the art of building small — perspectives of nanoproduction
Chapter 2. The Rise and Anticipated Decline of the Silicon Economy
2.1. 40 years of global growth
2.2. From sand to the chip, the epic of semi-conductors
2.3. The fatality of Moore’s Law: “the wall”
2.4. Beyond silicon — from microelectronics to nanotechnologies
Chapter 3. Rebuilding the World Atom by Atom
3.1. Manipulation on an atomic scale — the scanning tunneling microscope
3.2. From the manipulation of atoms to nanomachines — the concept of self-assembly
3.3. From the feasibility of molecular assemblers to the creation of self-replicating entities
3.4. Imitating nature — molecular biology and genetic engineering
3.5. From coal to nanotubes — the nanomaterials of the Diamond Age
3.6. Molecular electronics and nanoelectronics — first components and first applications
Chapter 4. The Computers of Tomorrow
4.1. From evolution to revolution
4.2. Silicon processors — the adventure continues
4.3. Conventional generation platforms
4.4. Advanced platforms — the exploration of new industries
Chapter 5. Elements of Technology for Information Systems of the New Century
5.1. Beyond processors
5.2. Memory and information storage systems
5.3. Batteries and other forms of power supply
5.4. New peripheral devices and interfaces between humans and machines
5.5. Telecommunications — a different kind of revolution
5.6. The triumph of microsystems
5.7. Is this the end of the silicon era?
Chapter 6. Business Mutation and Digital Opportunities in the 21st Century
6.1. Towards a new concept of information technology
6.2. Ubiquitous information technology and the concept of “diluted” information systems
6.3. Highly diffused information systems — RFID
6.4. New challenges for web applications in a global network of objects
6.5. The IT jobs mutation
Conclusion
Bibliography
Index
First published in France in 2007 by Hermes Science/Lavoisier as “Nano-informatique et intelligence ambiante”
First published in Great Britain and the United States in 2008 by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
6 Fitzroy Square
London W1T 5DX
UK
John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.iste.co.uk
www.wiley.com
© ISTE Ltd, 2008
© LAVOISIER, 2007
The rights of Jean-Baptiste Waldner to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Cataloging-in-Publication Data
Waldner, Jean-Baptiste.
[Nano-informatique et intelligence ambiante. English]
Nanocomputers and swarm intelligence / Jean-Baptiste Waldner.
p. cm.
Includes bibliographical references and index.
ISBN-13: 978-1-84821-009-7
1. Molecular computers. 2. Nanotechnology. 3. Quantum computers. 4. Intelligent agents (Computer software) I. Title.
QA76.887.W3513 2008
006.3--dc22
2007045065
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN: 978-1-84821-009-7
“We need men who can dream of things that never were.” It is this sentence, spoken by John Fitzgerald Kennedy in 1963, which has undoubtedly influenced my industrial career. Whether in consumer electronics, the components industry or the world of video and imagery, the technological innovations which have come into use in the last few years have greatly modified the competitive market and our ways of living. Some scientists speak of technological tsunamis, so high are the resulting economic and social stakes. For the industrialist the main challenge is size, and the idea is to innovate continually, creating smaller and smaller objects with better performance. Combining research with industry is no longer merely a necessity but a real race against the clock to remain a leading force in this domain. Development strategies, the conquest of market share, gains in productivity and ground-breaking ideas are now more than ever linked to the coordination of tasks which join research and industry.
The most staggering technological innovations of the last few years (those which have led to real changes in the job market and those which have changed our everyday lives) were not based on the technology available at the time but on anticipation, i.e. looking ahead into the future. Some striking examples come to mind: from the era of the video recorder, we have all adopted the DVD player in record time, discovering video on demand and downloads, which have opened the way towards the dematerialization of physical media. From the era of silver-halide photography, we are now all familiar with digital photography, using the computer as the central base for storing and exchanging our documents. Despite what anyone says, we have entered a virtual world where it is no longer a question of physical media but of speed and bandwidth. None of this would have been possible if certain scientists had not dared to think differently.
It could be believed that once an innovation is launched, during the time it takes for its economic democratization and its acceptance by consumers, researchers and industrialists would have time for a little break. This is not the case. As soon as a technological innovation appears on the market, we are already dreaming of a new one, one stage more advanced in terms of speed, ease of use, comfort and cost. We want to make all of this possible while also reducing the size of the object. We are looking for a ground-breaking opportunity which will open the doors of a new world to us. We are dreaming of a nanoworld.
The nanoworld is the world of the infinitesimal with exponential capacities and unexploited possibilities. Let us imagine a world where technology is no longer what it currently is. Processors have disappeared, replaced by tiny living molecules capable of managing large amounts of data; these molecules are also self-managing and have the ability to recreate themselves. A world where the alliance of biology, chemistry and physics would lead to advances in technology, which up until now have been unimaginable, like dreams or science fiction. This world is not far away. The beginnings are already taking place; the largest nations and the largest research laboratories are already regularly working on nanotechnology.
Imagine what the impact on industry would be. Will nanotechnology be the object of all our desires? Will it modify our economic models and our ways of life? Will it change the competitive market forever? Will industrialists be able to manage the change in jobs which this new technology will bring about? We are unable to predict whether or not, by 2020, the innovations arising from nanotechnology will restructure all sectors of industry. Will we be dressed in nanoclothes with unfamiliar properties? Will we travel around in nanocars built from extremely resistant materials, with tires to match, an ecological engine and renewable energy? Will production factories become molecular factories staffed by nanorobots? One thing is certain: nanotechnology is going to bring considerable improvements to the fields of computing, biotechnology and electronics. Thanks to more resistant and lighter materials, smaller components, new data storage systems and ultra-rapid processing systems, human performance is going to increase. We start off with simple isolated molecules and basic building blocks and end up with staggering technological advances which open up a new horizon, a new world — the nanoworld.
From this point on, the issue of the management of information combined with modern technology is going to expand. How are we all going to manage this way of life in our professional lives as well as in our everyday lives? How are we going to prepare for the management of new, more open information systems? It is at this level that the senior executive role becomes more and more important. The managers of today will become the managers of tomorrow by managing new ways of thinking, new working tools and new applications. In all sectors, computing, industry or even services, managers will have to accept the challenges that size represents. They will be the key players in the transition towards the nanoworld, they will be the founders of the revolution.
Didier TRUTT
Senior Executive Vice-President of the Thomson Group
“In the year 2000 we will travel in hypersonic airplanes, people will live on the moon and our vehicles will be fueled by atomic energy.” As young children in the 1960s, this is how we imagined this symbolic milestone.
We believed that the 21st century would open a bright future brought about by beneficial technology, technology which had already been introduced to us at the time.
Yet, the year 2000 has now passed. The 21st century is here but this vision, which seemed to be shared by many people, almost belongs to kitsch iconography. Things have not changed in numerous areas: we still drive vehicles equipped with internal combustion engines, whose technology is more than 110 years old, airplanes travel once again at more economical speeds, there are energy crises, and the fear which nuclear energy raises has restricted any initiative for new applications — although high-speed trains draw their energy from nuclear fission.
The end of the 30-year boom period after World War II marked the end of major innovative and exploratory programs: the conquest of space was limited to the nearby planets, while the Aérotrain, Concorde and other vertical take-off and landing airplanes were sacrificed for profitability. It became difficult to inspire scientific and technical vocations. Have technological perspectives become blocked? Have our research laboratories and departments run short of productive and profitable ideas, investing exclusively in a quantitative hyper-productivity that some people already interpret as compensation for a lack of innovation?
However, it would be unjust to ignore the technological progress of the last 40 years: the evolution of computing and of digital technology is conclusive evidence of the progress which has been made. From a few dozen machines of relative reliability, we have moved to a market of billions of units. The components of machines have been reduced in size by a factor of 10,000 and their cost has decreased in the same proportions (a gigabit of memory is 5,000 times less expensive today than it was in 1970). Computers are everywhere nowadays, to the point where we no longer notice their presence.
Progress (the most grudging industrialists will speak of simple industrial development) has continually been made by exploiting and perfecting the methods and processes of the most capital-intensive sector in history: the silicon industry. This is an industry whose evolution seems so predictable, so inevitable, that it has been characterized by a 40-year-old exponential law, Moore’s Law, which states that the number of transistors that make up microprocessors doubles every two years, because those transistors become twice as small over the same timescale.
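The doubling rule is easy to project numerically. A minimal sketch follows, in which the starting count and date (roughly the Intel 4004 of 1971) are illustrative assumptions rather than figures from this book:

```python
# Moore's Law as stated here: transistor counts double every two years.
# Base figures are illustrative round numbers (roughly the Intel 4004 of 1971).
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count under a two-year doubling law."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011):
    print(year, round(transistors(year)))
```

Ten doublings per twenty years multiply the count by 1,024, which is why even a modest starting point grows into billions of transistors within a few decades.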
However, this growth cannot go on forever: at this rate the limit of the performance of silicon components will be reached in 10-12 years at most. At that point, current manufacturing processes will have arrived at the extremes of the world of traditional physics, the world of ordinary objects possessing mass, size, shape and contact properties. We will have arrived in a domain ruled by the laws of quantum physics, laws so out of the ordinary and so different that they will call into question our industrial and technical know-how.
What will then be the perspectives for the development of our computers? What will the world of computing become in 15 years’ time?
The anecdote about the year 2000 reminds us how dangerous it is to make this type of prediction. Nevertheless, the issue is far from a simple puerile game. In numerous sectors of the economy and for many companies, 10, 12 or 15 years constitutes a significant time reference for the world of computing. Today, for example, the pharmaceutical industry considers information systems the key to success in the development of new medicines, on project plans of between 10 and 15 years. Economic success will thus rest on the ability to understand and master trends and innovations in technology, systems and organization. The aeronautical industry also works on project plans of 10 to 15 years and considers computing to play a vital role. We could add the car industry and many services such as banking and insurance, for which information systems are no longer back-office production activities but real working partners. This timescale of 10 to 12 years also has consequences for the job market. We have only too often taken part in mass campaigns directing young people towards new industries which, five to seven years later, proved to be too late in relation to the needs of the market.
As far as the next 10 years are concerned, by continuously pushing the limits of technology and the miniaturization of components further and further, it is the very concept of the information system which changes: from processing exclusively centered on the user, computing is becoming swarm intelligence. Since technology now makes it possible to produce tiny computers, it gives almost all objects of everyday life the capability of spontaneously exchanging information without any interaction with the user.
In approximately 10 years computing will have become fundamentally different: it will not change gradually, as it has for the last 15 years, but dramatically, for reasons which affect its deepest roots. Semi-conductors have driven the progress of computers over the last half-century. The simplest line of reasoning is as follows: once hardware technology has evolved to its physical limits, in 10-12 years, either our computers will have definitively reached their asymptotic power and future progress will be linked to innovative applications (this is the conservative or pessimistic view), or a replacement technology will emerge which will enable hardware performance and progress in applications to continue together.
If the conservative point of view is acceptable on an intellectual and philosophical level, it will be difficult to accept for one of the most powerful sectors of the economy: to admit a ceiling on the performance of components would mean entering a market of hyper-competition where the only outcome would be fatal price erosion. This is a situation similar to that of cathode-ray television screens and the textile industry, but with more serious consequences for the global economy, given the volume of activity.
Hope therefore rests with replacement technology and the nanotechnology industry, which will manufacture the molecules themselves. These molecules will be the base elements of future electronic circuits. For both economic and technical reasons, their development is considered inevitable.
The first molecular components (i.e. on the scale of the molecule, where the rules of quantum physics reign) have already been created in laboratories. Conscious of the vital character of this technology, the large operators of the silicon industry are also amongst the leading pioneers of this domain. They are exploring carbon nanotubes in particular.
However, what can be created on laboratory test beds at unit scale is still far from being applied to complex circuits, and even further from any possibility of industrial production. The main reason is the difficulty of integrating several hundred million of these devices within the same chip, as in current microprocessors (remember that by 2015 there will be 15 billion transistors integrated onto one single chip).
Nanotechnology, in the historical sense of the term, does not simply consist of creating objects with molecular dimensions. The basic idea, the principle of self-organization into complex systems, comes from the life sciences. It means that the basic building blocks, depending on the intended function of the device, are capable of assembling themselves into a more complex device from just one simple external macroscopic instruction. This is the main challenge, and the source of the polemic debates1 which scientists and technologists have to deal with today.
Given that nature can construct such machines and that molecular biology and genetic engineering have already investigated these principles, another path has opened for the computers of the future: that of organic molecular electronics which exploits a living material, such as a protein molecule, and reuses it in an artificial environment in order to ensure a processing or memory function.
The make-up of the computers of the next decade is in a transition period where everything is undergoing change. New processors stemming from the conventional generation of semi-conductors will progressively incorporate structures coming from organic technology or from the first molecular components. Memory will follow the same path of miniaturization on a molecular scale. Mass storage devices will store in three dimensions (in proteins or in crystals) information which up until now has been stored in two dimensions, on the surface of an optical or magnetic disk. These magnetic disks, just like silicon components, will nevertheless continue to be improved by exploiting quantum phenomena. Mobility has introduced another requirement: that of energy autonomy. Chemical batteries will be produced flat, like a sheet of paper, lighter and more powerful. The new generation of swarm, communicating devices will explore new approaches: supplying energy using ATP molecules2 (as living things do), micro internal combustion engines, or even nuclear micro-batteries far removed from conventional batteries. These swarm micro-objects, present throughout the diversity of the real world, are capable of communicating amongst themselves and with their users in the most natural way possible. This means that the interfaces between the user and the machine will no longer be limited to keyboards, mice and screens but will use our five senses. Certain interfaces will even bypass the five senses, since neural and telepathic control are already a reality. It is not the eye that sees, but the brain.
The vast network of tiny and heterogeneous objects which makes up the new generation of distributed systems enforces a new way of thinking about software. Jobs in the world of computing will have to adapt. The development and maintenance of applications are no longer restricted to a finite set of programs with long lifespans, but have to take into account a vast perimeter of microsystems interacting with one another in unbelievable diversity, all in a context where instability and evolution rule, just as in the real world.
This book is devoted to the ambitious question of imagining what computing will be like in 15 years. How will information systems evolve and how will jobs in the computing industry change throughout this transition period?
1 This debate amongst scientists hides another emerging debate, concerning the general public. Just like GMOs, nanotechnology can become a subject of controversy between researchers and the general public. The idea of designing tiny entities capable of reproducing themselves, and which could escape the control of their designers, has been instilling fear in people for some time now. The most futuristic practical applications illustrated in this book, which are part of current research trends, are still far from posing this type of risk. However, efforts to popularize these future applications must be undertaken if we do not want society to condemn an emerging science through sheer ignorance of the real stakes and risks.
2 A reversible molecular turbine.
IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA.
IBM Zurich Research Laboratory, Zurich, Switzerland.
Atomic Energy Commission, Quantronics Group, Saclay, France.
The author would like to thank the Reading and Validating Committee for their contribution, which ensured that all information is presented in a correct and objective way:
Daniel Estève, Research Director at the CNRS. Mr. Estève received the silver medal of the CNRS and the Blondel medal for his work on semi-conductors and microelectronics. He served as Director and Deputy Director of LAAS/CNRS between 1977 and 1996. Between 1982 and 1986 he held a director’s position at the French Ministry for Research. Mr. Estève obtained his PhD in 1966.
Claude Saunier, Senator and Vice-President of the French parliamentary office on scientific and technological choices.
Didier Trutt, General Director of the Thomson Group and Chief Operating Officer (COO). Mr. Trutt graduated from the Ecole Nationale Supérieure des Mines in Saint Etienne, France.
Patrick Toutain, Senior Technical Analyst at the Pinault Printemps Redoute Corporation, graduated from the Ecole nationale supérieure de physique in Strasbourg, France.
Louis Roversi, Deputy Director of EDF’s IT department (French electricity supplier), PMO Manager for EDF’s SIGRED Project. Mr. Roversi graduated from Ecole Polytechnique, ESE, IEP, Paris, France.
Etienne Bertin, CIO of FNAC Group, graduated from the ESIA, DESS in finance at Paris IX, Dauphine.
Michel Hubin, former researcher at the CNRS and former member of a scientific team working on sensors and instruments at the INSA, Rouen, France; PhD in science and physics.
Thierry Marchand, Director of ERP and Solutions at BULL and Managing Director of the HRBC Group, BULL, graduated from the EIGIP.
We are on the brink of a huge milestone in the history of electronics and have just entered into a new digital age, that of distributed computing and networks of intelligent objects. This book analyzes the evolution of computing over the next 15 years, and is divided into six parts.
Chapter 1 looks at the reasons why we expect the next 10 to 15 years to bring a break from conventional computing rather than a continuation of the evolutionary process. On the one hand, the chapter deals with the emergence of ubiquitous computing1 and pervasive systems, and on the other hand it deals with the problem of the miniaturization of silicon components and its limits.
Chapter 2 offers a clear chronology of the industrial technology of silicon and explains the reason for the inevitable entry of nanodevices into the market: from Bardeen, Brattain and Shockley’s transistor in 1948 to modern chips integrating tens of billions of transistors and the curse of Moore’s Law.
For the reader who is not familiar with how a transistor works, or who wants a brief reminder, an entire section presents the essentials of semi-conductor theory: elements of solid-state physics, the PN junction, the bipolar transistor and the CMOS transistor. The specialist can skip this section. There is also a section devoted to the current manufacturing processes of semi-conductor components; it is reserved for readers who wish to further their knowledge and is optional.
The main part of the second chapter resides in the factual demonstration of the fatality of Moore’s Law and the transition from microelectronics to nanotechnology (see sections 2.1, 2.3 and 2.4 for more information).
Chapter 3 is devoted to the introduction of nanotechnology and its application as the base components of computers.
First of all, it introduces the scanning tunneling microscope and the atomic force microscope, and the first experiments in the positional control of an atom with the help of such a device. It also explains the impossibility of constructing even a basic macroscopic machine by directly manipulating atoms: this would involve assembling billions of billions of atoms one by one and would take billions of years to complete.
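The order of magnitude behind this claim can be checked with a back-of-the-envelope calculation; both figures below (the atom count of a gram-scale object and the placement rate) are assumed round numbers for illustration, not values from the book:

```python
# Time to build a gram-scale object by placing atoms one at a time.
# Both inputs are illustrative round numbers, not measured values.
atoms = 1e23                      # atoms in a few grams of ordinary matter
rate = 1e6                        # assumed rate: a million atoms placed per second
seconds_per_year = 365 * 24 * 3600
years = atoms / rate / seconds_per_year
print(f"{years:.1e} years")       # on the order of billions of years
```

Even at a generous million atoms per second, the total comes to roughly three billion years, which is why direct manipulation cannot scale to macroscopic objects.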
The solution could reside in the self-assembly of nanomachines capable of self-replication. This is the approach of Eric Drexler’s molecular nanotechnology (MNT), the origin of the neologism nanotechnology. A section is devoted to the polemic concerning the feasibility of molecular assemblers and the creation of self-replicating entities (section 3.3). This section stresses the precautions which still surround the most spectacular and least explored avenues. Nanomachines attract real interest, especially in institutional circles (universities, research laboratories), and more and more leaders from the business world have taken an interest in them since they were first considered a strategic sector in the USA. Section 3.3 is supplementary: skipping it will not affect the reader’s understanding of MNT.
The construction of artificial self-replicating nanomachines is a topic which is still very much debated. However, an alternative already exists: the approach used by living organisms, already largely explored by molecular biology and genetic engineering. In fact, nature’s powerful cellular machinery assembles, with the perfection2 of atomic precision, all varieties of proteins: an organized system of mass production on a molecular scale. Section 3.4 explores the construction of the self-replicating nanomachines which nature uses so effectively. This section also develops another approach to nanotechnology, derived from genetic engineering.
Section 3.5 introduces carbon nanotubes and their electronic properties. These extraordinary materials, from which a car could be built weighing about 50 lbs, are the first industrial products stemming from nanotechnology.
At the end of the chapter, section 3.6 is devoted to some remarkable basic devices which function on a nanometric scale and which are likely to replace the semi-conductor components of current computers. The theoretical and experimental feasibility of computing based on molecular components is also presented. The CNFET (Carbon Nanotube Field-Effect Transistor), hybrid mono-molecular electronic circuits, an organic molecular electronic device using a living protein to store a large amount of information and, finally, spintronic semi-conductors are all dealt with in this section.
Finally, quantum dots and the phenomenon of quantum confinement will be discussed. This quantum confinement approach is currently more a matter for fundamental physics than for the development of industrial computers.
Chapters 4 and 5 are meant to be a summary of the major technologies which the computers and systems of the next decade will inherit. These two chapters can be read in full or according to the interest of the reader. As the sections are independent, they can be read in any order.
Chapter 4 is devoted to processors and their evolution. The chapter takes two analytical views. First of all, we address the standard outlook of one to five years (microprocessor structure: CISC, RISC, VLIW and EPIC, the progress of photolithography, distributed computing as an alternative to supercomputers, etc.), i.e. the perspective which conditions traditional industrial investments.
The chapter then introduces a more ambitious perspective which sees a new generation of computers with radically different structures. This vision, with its more hypothetical outlines, introduces systems which may (or may not) be created in the longer term, such as the quantum computer and DNA processing.
Chapter 5 widens the technological roadmap from the computer to the entire set of peripheral components which make up information systems. It is structured according to the hierarchy of the base components of a system. Besides processors, we are also interested in memory, mass storage devices, energy supply devices (an issue which dominates as soon as mobility and distributed computing come into play) and the man/machine interface. A section is specifically devoted to microsystems, whose contribution to ubiquitous computing is vital. The technologies mentioned above apply to the industrial present (notably new semi-conductor memories, Giant Magnetoresistance (GMR) and Tunnel Magnetoresistance (TMR) hard disks, voice recognition and new visual display devices) as well as to future possibilities (holographic memories, memories based on atomic force microscope (AFM) technology, molecular memories, telepathic interfaces, etc.).
Chapter 6 deals with the changes that these new resources will introduce in the business world and in our everyday life. It considers the issues at stake for business: a break in or continuity of investments, the impact on coding activities, the implementation of information systems in companies, economic opportunities and changes in businesses and jobs, etc.
Section 6.1 introduces the historical evolution of computing from the first specialized, independent systems through to ubiquitous systems.
Section 6.2 shows how we move from ubiquitous computing to an ultimate model of “diluted” computing (i.e. computing becoming invisible): tiny resources communicate with one another and together resolve the same type of problem as a large central mainframe. It also shows how the relationship between man and machine has evolved over three characteristic periods.
Section 6.3 introduces one of the first applications considered pervasive, a precursor of diluted systems: Radio Frequency Identification (RFID) systems and the Internet. We will also show how this global network could structure the supply chain of the future, as envisaged by the Auto-ID initiative. The section concludes with the sensitive issue of the potential threats to privacy that this type of application could pose.
Section 6.4 introduces the new challenges of a global network, i.e. how to make such complex and heterogeneous networks function on a very large scale. Interconnected networks with billions of nodes, associated with so many diverse applications, infrastructures and communication protocols, pose three fundamental problems: first security, then quality of service, and finally the sheer size, or number, of interconnected objects. Such a structure would also mean moving from a central system to what is known as swarm intelligence. To exploit these swarm networks in new applications, the rules of software coding will have to be reinvented.
Section 6.5 addresses the unavoidable change in businesses and jobs in the computing industry that the arrival of these new structures will bring about over the next decade. These new software structures will completely change the way in which we design, build and maintain complex applications. The section also covers modern concepts of agile development, new opportunities for computing professions, and the sensitive issue of off-shoring, since distributed structures make it possible to operate these activities in regions where employment and manufacturing costs are more attractive.
After analyzing the evolution of the computing business and of structures in the computing world, section 6.6 deals with the essential reforms in jobs related to information systems: what will the new professions introduced by this technological evolution look like, and which jobs will no longer be viable?
The book concludes by restating the reasons behind the joint emergence of the hardware revolution (post-silicon and post-nanometric devices) and the concept of swarm intelligence: a complete and simultaneous change, affecting not only materials technology but also the algorithms which will unite this collective intelligence and the perception that users will have of these new systems. This conclusion introduces the real theme of the book: understanding the next generation of computers, the impact these machines will have on human methods of work, and the manner in which humans will manage these machines. We draw on the understanding acquired from available technology to debate the necessary change in the CIO (chief information officer)'s role: a job which is no longer development- and production-oriented, but oriented towards anticipation, vision, mobilization and integration.
1 In biology, ubiquitous describes micro-organisms that are found everywhere. Applied to computing, the term refers to an intelligent environment in which extremely miniaturized computers and networks are integrated into the real environment. The user is surrounded by intelligent, distributed interfaces, relying on technologies embedded in familiar objects, through which they have access to a set of services.
2 It would be more appropriate to talk about almost-perfection. It is the absence of perfection in the reproduction of biological molecular structures which enables nature to innovate and to endure. Without these random imperfections, most of which prove to be unstable and non-viable, no evolution would have been possible, and man would probably never have appeared.
The 20th century was the century of the "electricity fairy". With its domestication, the boom in electrical engineering and in the telecommunications industry marked the beginning of a sustained period of economic growth.
With the internal combustion engine and the first phases of the automobile industry, electrical engineering was the main force behind the economic development of the Western world up until the end of World War II.
The second half of the 20th century saw growth shift towards electronics, especially with the advent of the transistor, which opened the race to the infinitesimal.
The invention of the transistor by Bardeen, Brattain and Shockley in 1948, together with the introduction of binary numeral principles in calculating machines at the end of the 1930s, paved the way for the silicon industry, which has been the main driver of global economic growth for more than 40 years.
Since 1960, the size of electronic components has shrunk by a factor of 10,000, and their price has collapsed in similar proportions: a gigabyte of memory, which cost around $101,000 in 1970, can be purchased today for approximately $27. No other industrial sector can claim such productivity: unit production costs reduced by a factor of one million in the space of 30 years. And the story does not stop there.
Digital microelectronics has become a pillar of virtually every industrial sector. The automobile industry was considered the "Industry of Industries" during the 30-year boom that followed World War II in France, yet today more than 90% of the new functionalities of modern cars come from microelectronic technologies, and electronics account for more than one-third of the price of a vehicle.
However, the most significant domain has, without a doubt, been the personal microcomputer. Digital electronics was born with large information systems: sizable structures built around central mainframes whose usage was very restricted, mastered by only a few specialists. It took the arrival of the PC for the general public to gain access to the world of information technology. This mass generalization of a laboratory technology for everyday use created many industrial opportunities; a model which until then had been driven solely by technical performance now had to deliver on both cost and performance. The technology entered an era of popularization, just like other inventions before it: electrical energy, the automobile, aviation. A new stage in this process was reached with the digital mobile phone. Today, no economic domain escapes this technology.
Pervasive technology is technology that spreads on a global scale, technology which "infiltrates", or becomes omnipresent in, all socio-economic applications and processes. Pervasive, a word with Latin roots, became a neologism towards the end of the 1990s. It softens the somewhat negative connotation of its synonym "omnipresent", which can instill fear of a hyper-controlled, Orwellian universe; "pervasive" is reassuring and positive, suggesting a beneficial environment.
A new chapter in the evolution of science and technology has been written following a now-familiar pattern: discovery, access to the technology for an elite only, popularization across all social classes, the emergence of products derived from the technology, and the growth of the products it in turn creates. Electricity, for example, was first used mainly to light towns; it then became a decentralized, universal source of energy (the battery, miniaturized motors) and finally became so widespread that it can be used for anything and accessed by everyone.
Electronics, omnipresent in our daily lives, has become omnipotent in the economy, with a growing share of global GNP. At the beginning of the 21st century, the silicon industry represented some €200 billion of global GNP. This activity generates more than €1,000 billion in turnover in the electronics industry and some €5,000 billion in associated services (out of a global GNP of some €28,000 billion).
The share of this industry will, in all likelihood, grow further with the phenomenon of pervasive systems, which integrate more and more microelectronic technology into everyday objects. The silicon industry therefore has a solid future ahead of it, with no reason for doubt over the next 12 to 15 years.
Throughout the 20th century, the pace of evolution accelerated. The "bright future" promised by modern society after World War II stimulated technological progress, the productivity race and the drive to reduce physical labor. The most complex machines are also the most powerful, and they are now available everywhere. Their designers have understood that the future of technology lies in miniaturization.
One of the most basic of these machines, the switch, has benefited greatly from this process and is produced in ever smaller sizes. In the 1940s, succeeding the electromagnetic relay, the triode made it possible to join large numbers of switches into complex circuits: the electronic calculator was born.
A very simplified description of the digital calculator is a processor which carries out arithmetic or logic operations, together with a memory which stores, temporarily or durably, the products and sub-products of these operations. The logic behind all these operations is based on the only two states that an electronic circuit can take: 0 or 1. The switch is a device which makes it possible to obtain an output signal of a comparable nature to the input signal. The switch is to electronics what the lever is to mechanics: a lever makes it possible to obtain a greater force at the output than at the input, or a greater (or smaller) displacement than that applied at the input. It is with the help of the basic switch that AND, OR and NOT gates are created. These logic gates, used in electronic circuits, form the basis of all the operations carried out by our computers.
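To make this composition concrete, here is a minimal sketch (in Python, purely illustrative and not from the book) showing how the three basic gates can be combined into a higher-level operation, XOR, and from there into a one-bit half-adder, the smallest arithmetic circuit:

```python
# The three basic gates, modeled on single bits (0 or 1).
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

# XOR composed purely from the three basic gates:
# (a OR b) AND NOT (a AND b)
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two bits; returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)
```

Composed this way, `half_adder(1, 1)` yields `(0, 1)`: a sum bit of 0 and a carry of 1, the same bookkeeping a processor's circuits perform billions of times per second.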
The 1960s were devoted to the advent of the transistor and semi-conductors. These new components progressively replaced vacuum tubes, which were cumbersome, fragile and energy-hungry. Computers started to function with the help of what can be considered chemically treated sand crystals: this was the arrival of silicon. Silicon is the eighth most abundant element in the universe and the second most abundant in the Earth's crust, after oxygen. It has contributed to the development of humanity since the beginning of time: from flint to 20th century computers, by way of the most diverse everyday ceramic objects.
All solids are made up of atoms held in a rigid structure following an ordered geometric arrangement. The atom consists of a positively charged central nucleus surrounded by a negatively charged cloud of electrons; these charges balance each other out so that the atom is electrically neutral. A crystal is a regular arrangement of atoms bonded together by covalence, i.e. the sharing of electrons. It is these bonds which give the solid its cohesiveness, and the electrons which take part in them are bonded electrons. The excess electrons, if any exist, are said to be "free electrons": they can travel freely around the interior of the solid and allow the easy circulation of electric current which characterizes conducting materials.
Insulators, on the other hand, have no (or very few) free electrons (i.e. electrons in the conduction band). To make an electron pass from the valence band (where, as a bonded electron, it assures the rigidity of the solid) to the conduction band, so much energy must be supplied that the insulator itself would be destroyed.
Figure 2.1. The valence band, conduction band, forbidden band and the Fermi level
In between insulators and conductors there exists a type of solid with an intermediate level of resistivity: the semi-conductors, such as germanium, selenium, copper oxide and, of course, silicon. In a semi-conductor, the conduction band is empty at 0 kelvin (absolute zero): at that temperature, the semi-conductor is an insulator. The difference is that a moderate supply of energy can make electrons migrate from the valence band to the conduction band without damaging the material and without an increase in temperature. When an electron is freed, it leaves behind an excess positive charge on the atom's nucleus, as well as a vacant place in the bond between two atoms.
This vacant place acts as a hole which a neighboring electron in the crystal lattice can occupy, freeing in turn another hole to be occupied by yet another electron, and so on. This is the phenomenon of the propagation of bonded electrons. Conduction is therefore twofold: free electrons travel in the conduction band, while holes (bonded electrons) propagate in the valence band.
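The mechanism can be pictured with a toy one-dimensional model (illustrative only; the function name and the 'h'/'e' encoding are ours, not the book's): at each step, the bonded electron next to the hole hops into the vacancy, so the hole appears to drift in the opposite direction.

```python
def propagate_hole(lattice, steps):
    """Toy 1-D valence band: 'e' = bonded electron, 'h' = hole.
    At each step, the electron immediately to the right of the hole
    fills the vacancy, so the hole drifts one site to the right."""
    sites = list(lattice)
    for _ in range(steps):
        i = sites.index('h')
        if i + 1 < len(sites) and sites[i + 1] == 'e':
            # the neighboring bonded electron occupies the hole,
            # leaving a new hole at its former position
            sites[i], sites[i + 1] = 'e', 'h'
    return ''.join(sites)
```

Starting from `'heee'`, three steps give `'eeeh'`: each electron moved one site to the left, while the hole, a net positive charge, traveled to the right. This is the propagation of bonded electrons described above.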
Figure 2.2. An intrinsic semi-conductor, an N-doped semi-conductor and a P-doped semi-conductor
The number of free electrons in a crystal can be increased by adding an impurity of foreign atoms (about one dopant atom per million atoms of the intrinsic semi-conductor).
Cohesion of the pure silicon crystal lattice is obtained by the sharing of the four valence electrons of each silicon atom. Now imagine that arsenic atoms, each with five valence electrons, are introduced into the silicon crystal: one electron from each arsenic dopant atom finds itself with no other electron to bond with.
These orphan electrons are more easily detached from their atoms than the electrons forming the bonds between atoms; once free, they favor electric conduction through the crystal. Conversely, if foreign atoms with only three valence electrons are added (indium atoms, for example), then an electron is missing from one of the bonds, creating a hole in the crystal's structure; this hole, too, allows an electric current to flow. In the first case, we talk about an N-doped semi-conductor, because there is an excess of valence electrons. In the second case, we talk about a P-doped semi-conductor, because there is a deficit of valence electrons, i.e. an excess of holes.
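The electron bookkeeping behind N- and P-doping can be summarized in a few lines (an illustrative sketch; the `VALENCE` table lists only the elements mentioned in the text):

```python
# Valence electron counts for the elements discussed in the text.
VALENCE = {'Si': 4, 'As': 5, 'In': 3}

def doping_type(host, dopant):
    """Classify doping by comparing valence electron counts:
    a surplus donates free electrons (N-type), a deficit
    creates holes (P-type)."""
    diff = VALENCE[dopant] - VALENCE[host]
    if diff > 0:
        return 'N'          # donor: excess valence electrons
    if diff < 0:
        return 'P'          # acceptor: missing electrons, i.e. holes
    return 'intrinsic'
```

Here `doping_type('Si', 'As')` gives `'N'` and `doping_type('Si', 'In')` gives `'P'`, matching the two cases above.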
Figure 2.3. A PN junction
The PN (or NP) junction is at the heart of today's electronic components. The idea consists of creating an N-doped zone and a P-doped zone within the same semi-conductor crystal. Straddling the junction between the two regions, a thin transition zone forms: holes diffuse from the P region into the N region and trap electrons, while electrons diffuse from the N region into the P region and are trapped by holes.
This very narrow region is therefore practically devoid of free carriers (electrons or holes) but still contains the fixed ions left by the dopant atoms (negative ions on the P side, positive ions on the N side). A weak electric field is thus established across the junction, balancing the diffusion of carriers.
