This book is a venture that, as far as we know, has never been tried before. It is an overview, spanning more than a decade, of the evolution, status and future of Information and Communication Technologies (ICT), reaching beyond technology into economics and sociology, into ICT's way of changing our lives, and into developments which might affect our future personally and as a society.
The individual papers were delivered as invited keynote lectures at the annual IDIMT Conferences (see www.IDIMT.org) from 2000 to 2017. The lectures were designed to satisfy the interested non-technical audience as well as the knowledgeable ICT audience, bridging this gap successfully without compromising on scientific depth.
The book offers an opportunity to analyze the evolution, status, present challenges and expectations over this dramatic period. Additionally, the multidisciplinary approach offers an unbiased view of the successes and failures in technological, economic and other developments, as well as documentation of the astonishingly high quality of technological forecasts.
Seldom has a single technology been the driving force of such dramatic developments: intertwined developments such as the computer becoming a network and the network becoming a social network, or information technology even changing the way the world changes.
Economically, the fact that the three top-value companies in the world are ICT companies speaks for itself.
Many deep-impact innovations made in these years are reviewed, with information technology enabling advances from decoding the genome to the Internet, AI, deep computing and robotics, to mention a few.
The impact literally reaches from the bottom of the sea, where advances in fibre optics have improved communications, up to satellites, and has turned the world into a global village.
Discussing the scenario of the last 25 years, we have the privilege of the presence of eyewitnesses and even of contributors to these developments, personalities who enabled these lectures.
Special appreciation for their engagement and many valuable discussions goes "in parts pro toto" to Prof. G. Chroust and Prof. P. Doucek and their teams.
Christian Werner Loesch, September 2017
The innovations, observations and analyses reported at conferences like IDIMT are based on the rapid growth of Information and Communication Technologies (ICT). This growth is driven by the dramatic and often unbelievable increase in speed and capacity of the underlying computer hardware (transistors, chips, high-speed cables etc.). Despite the dramatic reduction in the price of mass-produced circuits and storage units, the costs of total production (factories etc.) keep exploding, which reduces the production market to a few big players. The economic parameters define which development roads are to be taken and at what speed. This broader context is essential in order to understand some of the directions into which technological advances will take our economy, our technical activities and our society.
Christian Loesch in 2002
Since the year 2000, a special highlight of the yearly IDIMT Conferences has been Christian Loesch's overviews of global technical, economic and business developments. In now 18 presentations, Christian Loesch has provided the participants with broad and insightful lectures showing the greater interconnections, driving forces and hindrances shaping the future of ICT. Thanks to Christian's profound knowledge and his deep understanding of the international situation, he has been able to embed our discussions within the broader context of technological innovation and economic infrastructure.
On the occasion of the 25-year celebration of IDIMT, we have collected his 18 presentations and republished them as a separate book. We want to thank Christian for his efforts in collecting the material and presenting it to the participants of the IDIMT conferences, opening concise but important perspectives beyond their immediate fields of knowledge. It has allowed us to take a look behind the scenes of the computer industry.
Gerhard Chroust and Petr Doucek
Co-chairmen of the IDIMT Conferences
2017: ICT Beyond the Red Brick Wall
2016: Digitalization, Hardware, Software Society
2015: We and ICT, Interaction and Interdependence
2014: The State of ICT, Some Eco-Technological Aspects and Trends
2013: ICT Today and Tomorrow, Some Eco-Technological Aspects and Trends
2012: 20 Years IDIMT, ICT Trends and Scenarios reflected in IDIMT Conferences
2011: ICT Trends, Scenarios in Microelectronics and their Impact
2010: Some Eco-Technological Aspects of the Future of Information Technology
2009: Technological Outlook: The Future of Information Technology
2008: Technological Forecasts in Perspective
2007: 15 Years moving in Fascinating Scenarios
2006: Do we need a new Information Technology?
2005: Future Trends and Scenarios of Information Technology
2004: Information Technology: From Trends to Horizons
2003: Trends in Information Technology
2002: Safety, Security and Privacy in IS/IT
2001: Trends in Business, Technology, and R&D
2000: Ethics, Enforcement and Information Technology
It would be difficult to overstate the impact of Moore's Law. It is all around us, in the myriad gadgets, computers, and networks that power modern life. However, the winning streak cannot last forever. For all phenomena of exponential growth, the question is not whether, but only when and why they will end; more specifically, will it be physics or economics that raises the barrier to further scaling?
Physics, which has been our friend for decades ("Dennard scaling"), has now become the foe of further downscaling. In spite of all the doomsday prophecies since the '80s, and thanks to the ingenuity of physicists and engineers, we will reach the famous red wall only within the next five to ten years; a second wall might be an economic one. A third wall may be a "power" wall, not just because of the well-known power problems but also for other reasons, such as the proportionality between system failure rates and power consumption.
Some sort of "Moore's law" can also be found in the software area, for example operating systems doubling in size with every new generation, a law of diminishing returns, and the increasing reluctance to accept new hardware and software generations.
At the request of many, we look at the emerging long-term options rather than at the immediate and mid-term scenario.
We will review how the ICT industry is performing, how it is trying to meet these challenges and how it is preparing adequate strategies. The variety of responses is stunning, ranging from the memristor, quantum computing, cognitive computing systems and big data to graphene, and to an abundance of fascinating emerging applications which will impact our lives.
Processor productivity, bandwidth, the number of transistors per chip and upgrades in architecture continue to increase, placing growing demands on processor communications, bandwidth and transmission speed on the chip and in worldwide networks.
We will review how the industry fared. Since most 2016 results will be published too late for the printing deadline, we will cover them in our session based on the latest available results.
Source: Marketing BrandZ
Revenue and income can only give a very vague picture, but they provide an overall impression of how some key players of the industry performed in the 2016/2015 timeframe.
Moore's "Law" has been a synonym for faster, cheaper processing power. Its key contributions have been, and still are, delivered in a two-to-three-year rhythm (see the sketch after this list):
Performance: +30% operating frequency at constant energy
Power: -50% energy per switching process
Area: -50% size reduction
Cost: -25% per wafer and up to -40% per scaled die
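A minimal sketch of what this rhythm compounds to: the snippet below multiplies the per-generation figures above over five generations. The five-generation horizon and the calculation itself are illustrative assumptions, not part of the lecture.

```python
# Compound the per-generation gains listed above over several generations.
# Five generations at ~2-3 years each, i.e. roughly a decade, is an assumption.
GENS = 5
FREQ_GAIN, ENERGY_GAIN, AREA_GAIN, COST_GAIN = 1.30, 0.50, 0.50, 0.75

freq = energy = area = cost = 1.0
for gen in range(1, GENS + 1):
    freq *= FREQ_GAIN      # +30% operating frequency at constant energy
    energy *= ENERGY_GAIN  # -50% energy per switching process
    area *= AREA_GAIN      # -50% area per function
    cost *= COST_GAIN      # -25% cost per wafer
    print(f"gen {gen}: frequency x{freq:.2f}, energy x{energy:.3f}, "
          f"area x{area:.3f}, wafer cost x{cost:.3f}")
```

After five such generations the figures imply roughly 3.7 times the frequency at about 3% of the switching energy per operation, which is why the end of this rhythm is felt so strongly.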
However, since the end of Dennard scaling more than a decade ago, doubling transistor densities has not led to correspondingly higher performance. By 2020, feature size will be down to just a few nanometers, leading to the transition to the economically more attractive vertical scaling.
ITRS names three applications areas driving innovation:
High performance computing
Mobile computing
Autonomous sensing & computing (e.g. IoT)
The quest for an alternative technology to replace CMOS has come up with no serious contenders for the near future. "Little room left at the bottom."
STT technology is today's research focus. Its advantages range from a smaller footprint and a writing current reduced by one or two orders of magnitude to full scalability. STT MRAM may be the low-hanging fruit we are waiting for, with spin-orbit torque technology on the horizon.
2.1 Nanotechnology, atomic and molecular
Nanotechnology breakthroughs pave the way for the ultra-small.
Recently published research papers highlight the following:
Single-molecule switching could lead to molecular computers. The discovery that two hydrogen atoms inside a naphthalocyanine molecule can do switching means storing enormous amounts of information, and the idea of a computer comprised of just a few molecules may no longer be science fiction but exploratory science. Such devices might be used as future computer chips, storage devices, and sensors for applications nobody has imagined yet. They may prove to be a step toward building computing elements at the molecular scale that are vastly smaller and faster and use less energy than today's computer chips and memory devices. The single-molecule switch can operate without disrupting the molecule's outer frame. In addition to switching within a single molecule, the researchers also demonstrated that atoms inside one molecule can be used to switch atoms in an adjacent molecule, representing a rudimentary logic element. [Meyer G., IBM Zurich Research Lab]
2.2 Graphene
Graphene has become one of the shining stars of materials science and a popular candidate for IoT and flexible electronics.
At present, information processing is split across three functions implemented with different types of material:
Information processing: based on Si transistors
Communications: based on compound semiconductors such as InAs, GaAs and InP, using photons
Information storage: based on ferromagnetic metals
Such a division is not very efficient. Graphene triangular quantum dots (GTQD) offer a potential alternative: a special class of nanoscale graphene, triangular with zigzag edges, meets all three functions.
One-atom-thin integrated graphene circuits pose many problems still to be resolved, such as controlling size, shape and edges with atomic precision. Moreover, graphene FETs suffer from the lack of a large band gap; generating a band gap without sacrificing mobility therefore remains the greatest challenge for graphene. [Technology Innovation, Kinam Kim and U-In Chung, Samsung Advanced Institute of Technology, Giheung, S. Korea], [A. Güclü, P. Potasz and P. Hawrylak, NRC of Ottawa, Canada]
These atom-thick 2D materials could lead to a new industrial revolution for the post-Si era. Atomically thin tunnel transistors offer transparency with comparable performance, and 2D materials provide wider bandwidth and cheaper integration with Si for data communication, but they will take five to ten years to reach the marketplace due to problems of material quality and integration. [Sungwoo Hwang et al., Graphene and Atom-thick 2D Materials, Samsung Advanced Institute of Technology, Suwon, S. Korea]
In view of the fact that 38% of the energy consumption in data centers (2009) went into copper interconnects between and on chips, substituting Cu with optical interconnects, with 1000-times lower attenuation, could be another promising technology. [D. Stange, Jülich; R. Geiger, PSI Villigen et al.; Univ. Grenoble; Z. Ikonic, Univ. Leeds UK]
2.3 Racetrack
IBM claims that its racetrack storage technology, which stores data in magnetic domain walls, will reach market maturity within the next few years. The expected performance is impressive:
Data stored in magnetic domain walls
100 times more storage than on disk or flash
Fast r/w in a nanosecond
2.4 The Memristor
According to the original 1971 definition, the memristor is the fourth fundamental circuit element, forming a non-linear relationship between electric charge and magnetic flux linkage. In 2011, Chua argued for a broader definition that includes all two-terminal non-volatile memory devices based on resistance switching, but this broader definition could be a scientific land grab that favors HP's memristor patents. The first description of the memristor dates back to the '60s; today, many implementations are under development.
Memristors change their resistance depending on the direction and amount of voltage applied, and they remember this resistance when the voltage is removed. Most memory types store data as charge, but memristors would enable resistive RAM, a non-volatile memory that stores data as resistance instead of charge.
Memristors promise a new type of dense, cheap, and low-power memory.
What are the potential advantages of the Memristor?
One memristor has the equivalent logic function of several connected transistors, which means higher density and much lower power consumption.
In 2010, HP Labs announced that they had practical memristors working at 1 ns (~1 GHz) switching times and 3 nm by 3 nm sizes. At these densities, the memristor could easily rival sub-25 nm flash memory technology.
A major problem is how to make large numbers of them reliable enough for commercial electronic devices. Researchers continue to puzzle over the best materials and ways of manufacturing them.
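To make the resistance switching described above concrete, here is a minimal numerical sketch of the linear ion-drift memristor model; all parameter values are illustrative assumptions for this sketch, not published device data.

```python
import math

# Linear ion-drift memristor model: a doped region of width w drifts with the
# charge that flows through the device, so it "remembers" its resistance.
# Parameter values below are illustrative assumptions, not measured data.
R_ON, R_OFF = 100.0, 16e3   # ohms: fully doped / fully undoped resistance
D = 10e-9                   # m: device thickness
MU_V = 1e-14                # m^2/(V*s): assumed dopant mobility
dt, steps = 1e-3, 2000      # 2 s simulated in 1 ms steps

w = 0.1 * D                 # initial width of the doped region
for n in range(steps):
    v = math.sin(2 * math.pi * 1.0 * n * dt)    # 1 Hz sinusoidal drive, 1 V peak
    m = R_ON * (w / D) + R_OFF * (1 - w / D)    # current memristance
    i = v / m
    w += MU_V * (R_ON / D) * i * dt             # state drifts with charge flow
    w = min(max(w, 0.0), D)                     # clamp to physical bounds

print(f"final memristance: {m:.0f} ohms")       # the state persists once v -> 0
```

Plotting i against v for this loop would trace the pinched hysteresis curve that is the memristor's signature.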
Memory fabric (HPE Labs)
3.1 “The Machine”
HPE is developing "The Machine", the largest R&D program in the company's history, in three stages, of which it is unveiling the first. In the second and third phases, over the next few years, the company plans to move beyond DRAM to test phase-change random access memory (PRAM) and memristors. HPE has assigned 75% of its human R&D resources to this project. The Machine has not arrived completely yet; HPE is providing a peek at the progress so far.
A prototype has been on display at The Atlantic's "Return to Deep Space" conference in Washington, D.C., featuring 1,280 high-performance microprocessor cores, each of which reads and executes program instructions in unison with the others, with access to 160 terabytes (TB) of memory. Optical fibers pass information among the different components.
The Machine is defined by its memory-centric, memory-driven computing architecture, i.e. a single, huge pool of addressable memory. A computer assigns an address to the location of each byte of data stored in its memory. The Machine's processors can access and communicate with those addresses much the way high-performance computer nodes do.
HPE's X1 photonics interconnect module replaces traditional copper wires with laser-based optical data transfer between electronic devices. [Hewlett Packard Enterprise's Silicon Design Labs, Fort Collins, Colo.]
3.2 Cognitive Computing, Neurocomputing and AI
IBM is taking a somewhat different track in its efforts to develop next-generation computing, focusing on neuromorphic systems that mimic the human brain's structure, as well as on quantum computing; another approach can be found in Microsoft's Cortana Intelligence Suite.
Potential applications range from face detection, machine learning and reasoning, natural language processing, predictive maintenance and risk detection to diagnostics and forecasting future sales (up to 90% correct).
The impossibility of maintaining current knowledge could be addressed by IBM's Watson.
Knowledge degrades so fast that high-tech employers such as Google, SpaceX etc. are focusing less on formal qualifications and more on logical thinking, problem solving and creative thinking.
AI is not programming computers but training them.
What is a cognitive chip? The SyNAPSE chip, introduced in 2014, operates at very low power levels. IBM built a new chip with a brain-inspired computer architecture powered by 1 million neurons and 256 million synapses. It is the largest chip IBM has ever built, at 5.4 billion transistors, and has an on-chip network of 4,096 neurosynaptic cores. It consumes 70 mW during real-time operation, orders of magnitude less energy than traditional chips.
The TrueNorth chip and the SpiNNaker chip of the Univ. of Manchester are comparable endeavors.
Below are some characteristics that cognitive systems aim to fulfil:
Adaptive
Interactive
Iterative and helpful.
Contextual
They may understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulations, user’s profile, process, task and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gesture, auditory, or sensor-provided).
Neurocomputing, often referred to as artificial neural networks (ANN), can be defined as information processing systems (computing devices) designed with inspiration taken from the nervous system, more specifically the brain, and with particular emphasis on problem solving.
“An artificial neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use.” [Haykin S., Neural Networks and Learning Machines, 1999.]
The first neural networks were presented as early as 1964, attempting to mimic the logic processes of the brain. Brains are good at performing functions like pattern recognition, perception, flexible inference, intuition, and guessing, but they are also slow and imprecise, make erroneous generalizations, are prejudiced, and are sometimes incapable of explaining their own actions. Cognitive computing is progressing impressively. Deep learning, pattern recognition, photo matching (97.5%) and language translation may be found everywhere in five years.
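As a minimal illustration of the ANN definition above, the sketch below trains a single artificial neuron (a perceptron) to compute the logical AND function; the learning rate and epoch count are illustrative choices, not taken from the lecture.

```python
# One artificial neuron (perceptron) learning logical AND from examples:
# "storing experiential knowledge" in two synaptic weights and a bias.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1 = w2 = b = 0.0   # synaptic weights and bias
lr = 0.1            # learning rate (illustrative)

for epoch in range(20):
    for (x1, x2), target in samples:
        out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0  # threshold activation
        err = target - out
        w1 += lr * err * x1   # perceptron rule: nudge weights
        w2 += lr * err * x2   # toward reducing the error
        b += lr * err

for (x1, x2), target in samples:
    out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
    print(f"{x1} AND {x2} -> {out} (expected {target})")
```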
Four areas are expected to benefit especially:
Nanotechnology (Biotechnology)
AI
Genetics
Robotics
3.3 A short Introduction to the Quantum World
Quantum physics is with us in our everyday life. No transistor would work without it.
Erwin Schrödinger, who developed quantum theory's defining equation, once warned a lecture audience that what he was about to say might be considered insane.
The famous double-slit experiment should serve as an introductory first step into this world. We will discuss two phenomena to give a first clue to the world of quantum physics:
Superposition
Entanglement
From a physical point of view, entangled particles form only one entity (one single waveform instead of two), and the locality of a particle is an illusion. These particles have a probability of presence that stretches out infinitely, with local positions of very high probability of presence as "particles". Entangling means merging different waveforms into a single one which has several local positions of very high probability instead of one: like having a single particle (one single waveform) but with several centres of mass instead of one.
"Observing" one of the high-probability locations of entangled particles modifies this single probability cloud, which also determines the "state" of the second high-probability location of the other entangled particles).
Entanglement and superposition cause qubits to behave very differently from bits. While a two-bit circuit in a conventional computer can be in only one of four possible states (0 and 0, 0 and 1, 1 and 0, or 1 and 1), a pair of qubits can be in a combination of all four. As the number of qubits in the circuit increases, the number of possible states, and thus the amount of information contained in the system, increases exponentially.
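A tiny state-vector simulation makes this concrete. The toy sketch below (an illustration, not production quantum software) prepares a two-qubit Bell state with a Hadamard and a CNOT gate and prints the four complex amplitudes that such a pair of qubits carries.

```python
import numpy as np

# Toy simulation: prepare the Bell state (|00> + |11>)/sqrt(2).
# N qubits need 2**N complex amplitudes; here N = 2, so four.
ket00 = np.array([1, 0, 0, 0], dtype=complex)   # start in |00>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard: creates superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # flips qubit 2 if qubit 1 is 1

state = CNOT @ np.kron(H, I) @ ket00            # entangle the pair

for bits, amp in zip(["00", "01", "10", "11"], state):
    print(f"|{bits}>: amplitude {amp.real:+.3f}, probability {abs(amp) ** 2:.2f}")
# Only |00> and |11> appear, each with probability 0.5: measuring one qubit
# immediately fixes the other, which is the entanglement described above.
```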
Many different approaches are currently under development. Researchers currently favor a qubit design based on superconductors: microchip-scale circuits made of materials that lose all electrical resistance at very low temperatures. Thanks to the Josephson effect, electric currents flowing around tiny loops in such circuits can circle both clockwise and counterclockwise at once, so they are well suited to representing a qubit. Within a few years, R&D efforts have increased qubit lifetimes by a factor of 10,000, that is, qubits now maintain their state for around 50-100 μs, while the error rate has been reduced. [Martinis]
3.4 The Quantum Computer (QC)
The idea of QC is to store the 2^N complex amplitudes describing the wavefunction of N two-level systems (qubits) and to process this information by applying unitary transformations (quantum gates) that change these amplitudes in a precise and controlled manner.
Building the first real QC is estimated to be a 10 B$ project. What could be the "killer applications" justifying this effort?
Scientists have already spent several years looking for an answer: an application for quantum computing that would justify the development costs. The two classic examples, code-cracking and searching databases, seem not to be sufficient. QCs may search databases faster, but they are still limited by the time it takes to feed the data into the circuit, which would not change.
A much more promising application for the near future could be modelling of electrons in materials and molecules, something too difficult even for today's supercomputers. With around 400 encoded qubits, it might be possible to analyse ways to improve industrial nitrogen fixation, the energy-intensive process that turns unreactive molecules in the air into fertilizer. This is now carried out on an industrial scale using the 120-year-old Haber process, which uses up to about 5% of the natural gas produced worldwide. A quantum computer could help to design a much more energy-efficient catalyst. Another "killer application" might be searching for new high-temperature superconductors, or improving the catalysts used to capture carbon from the air or from industrial exhaust streams. "Progress there could easily substantiate the 10 billion." [Troyer]
What are the potential QC application areas?
Design of drugs
Supply chain logistics
Material science (properties such as melting point, design of new metals)
Financial services
Cryptanalysis
However, veterans of the field caution that quantum computing is still in its early stages. The QC will appear as a coprocessor rather than as a stand-alone computer. The development is in a phase comparable to Zuse in 1938. In five years, special applications superior to today's computers, with TP access, may appear. [R. Blatt]
3.5 IoT
Market potential estimates range widely: Cisco forecasts 20-50 billion devices, IBM 20 billion.
Optimists have reason to be encouraged. More than 120 new devices connect to the Internet every second. The McKinsey Global Institute estimates IoT could have an annual economic impact of $3.9 trillion to $11.1 trillion by 2025.
However, several short-term obstacles need to be fixed:
Missing Standards
Speed requirements, to be resolved by the transition from 4G to 5G (license auction 2017/18)
Address space (transition from IPv4 to IPv6 under way)
The growth of the IoT, combined with the exponential development of sensors and connectivity, will make it more challenging to provide power to untethered devices and sensing nodes. Even with long-life battery technology, many of these devices can only function for a few months without a recharge.
The quest for the electric car has additionally emphasized the problem, but the 800 km range may not come before 2020.
Energy harvesting, the increasing performance of energy transducers and the decreasing power requirements of ICs may bridge the gap. [A. Romani et al., Nanopower Integrated Electronics for Energy Harvesting, Conversion and Management, Univ. of Bologna, Italy]
Both consumers and the media are fascinated by IoT innovations that have already hit the market. Within a short time, some IoT devices have become standard, including thermostats that automatically adjust the temperature and production-line sensors that inform workshop supervisors of machine conditions. Now innovators are targeting more sophisticated IoT technologies such as self-driving cars, drone-delivery services, and other applications such as:
Large structures (bridges, buildings, roads)
Advanced personal sensors (breath analysis)
Logistics
Crop monitoring
Pollution
Tracking from kids to dogs and shoes etc.
Up to now, the adoption of IoT has proceeded more slowly than expected, but semiconductor companies will try to accelerate growth through new technologies and business models.
3.6 Fiber
Replacing copper with optical connections within and outside the computer, increasing connectivity and the exponential growth of information will put further emphasis on the development of data transmission.
The longer the light travels, the more photons scatter off atoms and leak into the surrounding layers of cladding and protective coating. After 50 km, about 90% of the light is lost. To keep the signal going beyond the first 50 km, repeaters were traditionally used to convert light pulses into electronic signals, clean and amplify them, and then retransmit them.
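The 90%-per-50-km figure translates directly into the standard attenuation number for such fiber, and the channel arithmetic reproduces the aggregate capacity quoted later in this section; the snippet below is a worked calculation added as illustration.

```python
import math

# Losing 90% of the light over 50 km: how many dB per km is that?
span_km = 50
fraction_remaining = 0.10
loss_db = -10 * math.log10(fraction_remaining)        # 10 dB over the span
print(f"attenuation: {loss_db / span_km:.2f} dB/km")  # ~0.20 dB/km at 1.55 um

# Aggregate fiber capacity with wavelength-division multiplexing:
channels = 100            # roughly 100 wavelength channels per fiber
per_channel_gbps = 100    # one coherent channel carries 100 Gb/s
print(f"total: {channels * per_channel_gbps / 1000:.0f} Tb/s per fiber")
```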
The British physicist D. Payne opened a new avenue: by adding erbium atoms and exciting them with a laser, he could amplify incoming light with a wavelength of 1.55 μm, where optical fibers are most transparent. The erbium-fiber amplifier enabled another way to boost data rates: multiple-wavelength communication. Erbium atoms amplify light across a range of wavelengths, a band wide enough for multiple signals in the same fiber, each with its own much narrower band of wavelengths.
The classical way to pack more bits per second is to shorten the length of pulses. Unfortunately, the shorter the pulses, the more vulnerable they become to dispersion: they stretch out traveling through a fiber and interfere with one another. Techniques developed previously, dubbed wavelength-division multiplexing, along with further improvements in the switching frequency of fast laser signals, led to an explosion in capacity.
Together, quadrature coding and coherent detection, with the ability to transmit in two different polarizations of light, have brought optical fibers to the point where a single optical channel can carry 100 Gb/s over long distances, in fibers designed to carry only 10 Gb/s. Since a typical fiber can accommodate roughly 100 channels, the total capacity of the fiber can approach 10 Tb/s.
The plethora of developments rising on the horizon stretches from:
Photovoltaics with 500% of today's capacity at 1/100 of the thickness
Water-purification
Medicine:
Some facts and figures:
14,000 illnesses and 5,000 publications per week.
Improved scanning can achieve an 80%-90% hit rate in mammography.
Surgical "intelligent knife" (distinguishing malignant from non-malignant areas)
Improved diagnostics: “The first time right” envisions future medicine to be predictive, personalized and precise [Zerhouni E., US NIH, 2006]
Intelligent prosthetics
3D printing of drugs or simple organs
Lab on a chip (evaluating plasma and biomarkers from biofluids)
Anti-bacterial nanoparticles
We mentioned the information avalanche. Watson, as covered previously, is a possible answer and an avenue to be followed: a learning machine, without Internet connection, trained to understand the meaning of words, a "Super Google" that "may answer questions before you realize you have them".
We have been perusing the scenario in view of the approaching red brick wall, and beyond it we found a plethora of developments and emerging technologies and applications.
In spite of longstanding doomsday prophecies since the '80s, thanks to the ingenuity of physicists and engineers, we will reach the red wall only within the next five to ten years. A second, economic wall, due to unjustifiable investments, may be reached earlier. Many believe a third wall may be a "power" wall, not just because of the well-known power problems but because of the proportionality between system failure and power consumption.
Future technologies range from STT to graphene, and future computing from "The Machine" to quantum computers, cognitive computing, neurocomputing, AI and Watson.
The effects of "getting physical": computers connecting increasingly and directly to the physical world around us, integrating all types of devices from keys to sensors, tags etc.
The evolution of human knowledge has been accelerated by storing and sharing information. The amount of data will reach 44 ZB in 2020, and the number of Internet-connected devices 50 billion in 2020 (doubling every two years). Handling these far exceeds human capabilities, so we have to entrust them to automated systems, raising questions about future ethical, security and privacy concerns.
The social impact of Industry 4.0 is estimated at two million jobs created, although it might destroy seven million jobs (in Germany), and by 2030 it will affect up to 50% of jobs as well as job and skill requirements worldwide.
The trend of shifting emphasis to applications and designing for people (not enterprises)
The impact of these developments and their lateral ramifications on the business environment, on future products and on investment priorities cannot be overestimated. Politics is evaluating ideas such as a basic income or a machine tax to avoid losing elections to angry voters. These developments will shape not only the future scientific scenario but even more the future economic development, education requirements and social evolution, and thus our lives.
The outlook is exciting; the rate of progress will continue to provide better tools that will enable the development of exponentially better tools.
Banine V.Y., EUV Lithography Today and Tomorrow, ASML Inc.
Blatt R., Univ. of Innsbruck, Austria.
Deutsch D. (2014). Deeper Than Quantum Mechanics, Univ. of Oxford.
Haykin S. (1999). Neural Networks and Learning Machines.
HP Silicon Design Labs, Fort Collins, Colo.
HPE (2017). The Atlantic's Return to Deep Space conference, Washington, D.C.
HP Labs, April 2010.
IBM (2014). SyNAPSE chip.
ITRS (2016). Final ITRS report.
Kim Kinam and U-In Chung, Technology Innovation, Samsung Advanced Institute of Technology, Giheung, S. Korea; Güclü A., Potasz P. and Hawrylak P., NRC of Ottawa, Canada.
Kurzweil R. (2014). The Singularity Is Near.
Ledentsov N. et al., VI Systems; Burger S. and Schmidt F., Zuse Institute Berlin, New Generation of Vertical Cavity Surface Emitting Lasers for Optical Interconnects.
Loesch C. (2015). ICT Trends and Scenarios, IDIMT 2015, Trauner Verlag Universität, Linz; Doucek P., Chroust G., Oskrdal V. (Editors).
Martinis J. (2014). Design of a Superconducting Quantum Computer, Google TechTalks.
Meyer G., IBM Zurich Research Lab.
Nokia (Bell Labs) and TU Munich.
Payne D., Imperial College London, U.K.
Romani A. et al., Nanopower Integrated Electronics for Energy Harvesting, Conversion and Management, Univ. of Bologna, Italy.
Stange D., Jülich; Geiger R. et al., PSI Villigen; Univ. Grenoble; Ikonic Z., Univ. of Leeds, UK.
Sungwoo Hwang et al., Graphene and Atom-thick 2D Materials, Samsung Advanced Institute of Technology, Suwon, S. Korea.
Troyer M., Institute for Theoretical Physics, ETH Zurich.
Wikipedia (2017), English version.
Zerhouni E., The First Time Right, US NIH.
Passing through a period of paradigm change, it is advisable to take stock, see where we stand, and peruse the state and outlook of ICT and the options available regarding hardware, software and the interdependent societal developments. We shall examine how some of these developments are affecting the technological, economic and societal scenario, and look at the reactions and preventive actions taken by the key players to meet the upcoming scenario.
1.1 Key players in 2015
Revenue and Net Income 2014/15
Annual Reports
The prevailing phenomenon seems to be a paradigm change from the previous concentration on the core business to diversification and buying in all missing expertise, accompanied by actions to meet future profit and dividend exposures.
We will discuss how the key players are meeting these challenges and their results.
1.2 Is the success story of mobile repeating itself?
Smartphones and tablets are getting faster and more capable
Smartphones and tablets have taken over many of the PC tasks
The symbiosis of smartphones and AI is showing the future direction
The demand on communications is exploding
2.1 The state of the industry
PwC Global 100 Study
2.2 Operating systems market share (PC)
NetMarketShare, Dec 2015
Windows 10 usage has doubled in six months, reflecting Microsoft's marketing efforts.
The lifespan of PCs is getting longer as SSDs replace hard drives.
New releases of Windows are not driving sales anymore
The comeback of powerful mainframes/servers
Applications will increasingly need to be concurrent to exploit exponential CPU gains.
Windows and Android dominating (expected splitting)
Games: 32%
Entertainment: 8%
Facebook: 18%
News: 2%
Together: 60%
Source: FLURRY analytics, Comscore, NetMarketshare
The fact that nearly 60% of time is spent on games, Facebook, entertainment and news may raise some eyebrows.
4.1 Semiconductor and semiconductor equipment revenues
It is indicative to evaluate not only the semiconductor industry but also, especially, the semiconductor equipment industry: watching ICT developments with an intrinsic knowledge of technological trends, it is an excellent indicator.
4.2 Technology
Many technological developments, as shown below, have enabled and still enable the extension of Moore's law. Increased complexity must be offset by improved density, making the heat problem once again a central issue, as Moore asked decades ago: "will it be possible to remove the heat?"
To enable more function, power reduction is critical; it will shift the R&D focus from speed to power and, further on, to the quest for cost reduction.
Approaching the final phase of Moore's law, we encounter a shift: future developments will depend more and more on financial rather than technological aspects. The question is whether we can afford to continue. According to Intel, the maximal extension of the law, in which transistor densities continue doubling every 18-24 months, will be reached in 2020 or 2022, at around 7 nm or 5 nm.
Technologies in the pipeline continue to improve; a foreseen 30-fold advance in the coming years is still significant. Nevertheless, the old way of a perpetually improving technology is gone. Nobody thinks graphene, III-V semiconductors, or carbon nanotubes are going to bring it back. Further gains will be incremental, with performance edging up perhaps thirtyfold in the coming years. DARPA has investigated more than 30 alternatives to CMOS, but only 2 or 3 of them show long-term potential. The decline of Moore's law strengthens the emphasis on high-performance computing and developments like cloud computing and AI. (Google bought DeepMind not just for AlphaGo.)
The scenario for storage is different (Source: ISSCC 2016)
NAND flash memory continues to advance to higher density and lower power while still scaling down 2D technology. Recently, 3 b/cell NAND with up to 768 Gb has been reported, extending the number of layers from 32 to 48.
Let us look at some further potentially influential developments.
4.3 Supercomputing
We can observe astounding progress and strategic competition in the field of supercomputing. China's latest supercomputer, a monolithic system with 10.65 million cores, is built entirely with Chinese microprocessors. No U.S.-made system comes close to the performance of China's new system, the Sunway TaihuLight, with a theoretical peak performance of 124.5 petaflops.
4.4 KiloCore
Researchers at the University of California have created a new processor with 1,000 CPU cores. The "KiloCore" processor's cores can be independently clocked to a maximum of 1.78 GHz and shut down independently when not in use. The 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 watts, making the KiloCore 100 times more power-efficient than a laptop despite being built on IBM's old 32 nm CMOS process technology. By contrast, current Intel chips are much higher-clocked and built using a 14 nm process, yet deliver far fewer instructions per second per watt.
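A quick back-of-the-envelope check of that efficiency claim follows; the laptop-side numbers are rough assumptions for illustration, not figures from the text.

```python
# Back-of-the-envelope check of the KiloCore efficiency claim.
kilocore_ips = 115e9     # instructions per second (from the text)
kilocore_watts = 0.7     # dissipation (from the text)
kilocore_eff = kilocore_ips / kilocore_watts
print(f"KiloCore: {kilocore_eff / 1e9:.0f} billion instructions/s per watt")

laptop_ips = 50e9        # assumed: a laptop CPU executing ~50e9 instructions/s
laptop_watts = 30.0      # assumed: ~30 W package power under load
laptop_eff = laptop_ips / laptop_watts
print(f"laptop (assumed): {laptop_eff / 1e9:.1f} billion instructions/s per watt")
print(f"ratio: ~{kilocore_eff / laptop_eff:.0f}x")  # on the order of the claimed 100x
```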
What would be the use of a chip with 1,000 cores? The same as any other modern multi-core chip: video processing, encryption and decryption, and scientific tasks.
4.5 IoT
NZZ estimates that the number of connected units will increase by 200%-4500% from 2015 to 2020.
However, it is not an easy and clear win, as the width of the forecasts shows. A survey by Gartner shows that 39% of companies do not intend to implement IoT in the near future and 9% have no intention of implementing IoT at all. There are significant variations between industries. Leading are asset-intensive industries such as gas, oil and utilities as well as manufacturing, while less asset-intensive industries and service industries such as insurance or media show less interest. Gartner estimates that by the end of next year 56% of the first group will have projects implemented, whereas the light-industries group will be in the range of only a third.
Crucial for the implementation of IoT is the solution of the energy supply. There are many ideas around, but a newly emerging one is harvesting energy from TV transmitters; according to Kurzweil, they could deliver tens to hundreds of μW.
The widening of the thrust from vertical to lateral developments will bring additional sensors and new features: vibration, drift and pressure sensing, ultrasonic transducers, highly precise temperature measurement using the spectral information of radiation (the change of a MEMS resonance frequency with temperature), and displays such as LCD panels and touch screens with thinner layers and higher sensitivity at lower cost.
4.6 Cognitive Computing (combining digital ‘neurons’ and ‘synapses’)
IBM Research presented a new generation of experimental computer chips designed to emulate the brain's abilities for perception, action and cognition. This moves beyond the von Neumann architecture that has ruled computing for more than half a century. Neurosynaptic computing chips recreate the phenomena between spiking neurons and synapses in biological systems through advanced algorithms and silicon circuitry. The technology should yield orders of magnitude less power consumption and space than today's computers. The first two prototype chips are currently undergoing testing.
Cognitive computers will not be programmed the way traditional computers are today; they are expected to learn through experience, find correlations, create hypotheses, and remember and learn from the outcomes, mimicking the brain's structural and synaptic plasticity.
4.6.1 Neurosynaptic chips
The ambitious long-term goal is a chip system with ten billion neurons and a hundred trillion synapses that consumes merely one kW of power and occupies less than two liters of volume. While they contain no biological elements, these cognitive computing prototype chips use digital silicon circuits to make up a "neurosynaptic core" with integrated memory (replicated synapses), computation (replicated neurons) and communication (replicated axons).
IBM has two working prototype designs. Both cores were fabricated in 45 nm SOI CMOS and contain 256 neurons. One core contains 262,144 programmable synapses and the other contains 65,536 learning synapses. The IBM team has successfully demonstrated simple applications like navigation, machine vision, pattern recognition, associative memory and classification.
4.7 Hewlett-Packard's futuristic 'Machine'
HP claims that a prototype of the futuristic "Machine" computer should be ready for partners to develop software on by next year, though the finished product is still half a decade away. HP is placing a huge bet on a new type of computer that stores all data in vast pools of non-volatile memory. HP says the Machine will be superior to any computer today and claims that a system the size of a refrigerator will be able to do the work of a whole data center. The single-rack prototype will have 2,500 CPU cores and an impressive 320 TB of main memory, more than 20 times that of any server on the market today.
4.8 AI (artificial intelligence or cognitive computing)
4.8.1 AI and the Future of Business
The symbiosis of microelectronics, sensorics and AI is enabling a leap in the development of robotics.
After many false dawns, AI has made extraordinary progress in the past few years, thanks to the technique of "deep learning". Given enough data, large (or "deep") neural networks, modeled on the brain's architecture, can be trained to do a range of things, from search engines to automatic photo tagging, voice assistants, shopping recommendations and Tesla's self-driving cars. However, this progress has also led to concerns about safety and job losses. Many wonder whether AI could get out of control, precipitating a conflict between people and machines. Some worry that AI will cause widespread unemployment by automating cognitive tasks previously done by people. After 200 years, the machinery question is back and needs to be answered. John Stuart Mill wrote in the 1840s that "there cannot be a more legitimate object of the legislator's care" than looking after those whose livelihoods are disrupted by technology. That was true in the era of the steam engine, and it remains true in the era of artificial intelligence.
Google expects the concept of the 'device' to fade away: the computer, whatever its form factor, will become an intelligent assistant helping you through the day. We will move from a mobile-first to an AI-first world, where your phone proactively brings up the right documents and schedules and maps your meetings. Google, through its investments in AI, is preparing itself for such a world, aiming to be there, offering "assistance" to its users so they do not have to type anything into a device.
Moore's Law gets all the attention, but it is the combination of fast electronics and fast fiber-optic communications that has created "the magic of the network" of today.
Since 1980, the number of bits per second sent through an optical fiber has increased 10-million-fold. That is remarkable even by the standards of 20th-century electronics; it is more than the jump in the number of transistors on chips during the same period, as described by Moore's law. Electronics faces enormous challenges in keeping Moore's Law alive, and fiber optics is also struggling to sustain its momentum. Over the past few decades, a series of developments and breakthroughs have allowed communications engineers to push more and more bits down fiber-optic networks. However, the easy gains are history. After decades of exponential growth, fiber-optic capacity may be facing a plateau.
5.1 The Fiber Optic Exponential
Data source: Keck INTEL
Fiber-optic capacity has made exponential gains over the years. The data in this chart show the improvement in fiber capacity by the introduction of wavelength-division multiplexing.
The heart of today's fiber-optic connections is the core: a 9-micrometer-wide strand of glass that is almost perfectly transparent to 1.55-μm infrared light, surrounded by more than 50 μm of cladding glass with a lower refractive index. Laser signals are trapped inside by the cladding and guided along by internal reflection at a rate of about 200,000 km/s. The fiber is almost perfectly clear, but every now and then a photon will bounce off an atom inside the core. The longer the light travels, the more photons scatter off atoms and leak into the surrounding layers. After 50 km, about 90% of the light will be lost, mostly due to this scattering. To keep the signal going, repeaters were used to convert light pulses into electronic signals, clean and amplify them, and then retransmit them down the next length of fiber.
The physicist D. Payne opened a new avenue. By adding erbium atoms and exciting them with a laser, he could amplify incoming light with a wavelength of 1.55 μm, where optical fibers are most transparent. Today, chains of erbium-fiber amplifiers extend fiber connections across continents and oceans.
The erbium-fiber amplifier enabled another way to boost data rates: multiple-wavelength communication. Erbium atoms amplify light across a range of wavelengths, a band wide enough for multiple signals in the same fiber, each with its own much narrower band of wavelengths.
This multi-wavelength approach, dubbed wavelength-division multiplexing, along with further improvements in the switching frequency of fast laser signals, led to an explosion in capacity. The classical way to pack more bits per second is to shorten the length of each pulse (or absence of a pulse). Unfortunately, the shorter the pulses, the more vulnerable they become to dispersion: they stretch out traveling through a fiber and interfere with one another. Fortunately, scientists had two techniques previously used to squeeze more wireless and radio signals into narrow slices of the radio spectrum.
Together, quadrature coding and coherent detection, with the ability to transmit in two different polarizations of light, have brought optical fibers to the point where a single optical channel can carry 100 Gb/s over long distances, in fibers designed to carry only 10 Gb/s. Since a typical fiber can accommodate roughly 100 channels, the total capacity of the fiber can approach 10 Tb/s.
Global Internet traffic increased fivefold from 2010 to 2015. The trend is likely to continue with the growth of streaming video and the Internet of Things.
During past IDIMTs we have discussed the coming potential of 3D printing. Now let us look at results that have surpassed expectations. This does not include the potential of extending its use to organic or other new fields of application.
Morris Technologies had been experimenting with metal sintering and super alloys for several years. In 2011, the firm zeroed in on the fuel nozzle as the part most appropriate for a makeover.
The result is a monolithic piece which replicates the complex interior passageways and chambers of the nozzle down to every twist and turn. Direct metal laser melting, in which alloy powder is sprayed onto a platform in a printer and then heated by a laser, repeated 3,000 times until the part is formed, converts a many-step engineering and manufacturing process into just one.
Image: GE
Before 3D printing and modeling, this fuel nozzle had 20 different pieces. Now, as just one part, the nozzle is 25% lighter and five times more durable, all of which translates to savings of around US $3 million per aircraft per year for any airline flying a plane equipped with GE's LEAP engine. Finally, it used to take three to five months to produce; now it takes about a week.
There are some things that machines are simply better at doing than humans are, but humans still have plenty going for them.
Machine learning, AI, task automation and robotics are already widely used. These technologies are about to multiply, and companies are studying how they can best take advantage of them.
Google's CEO Pichai believes that devices will completely vanish, to be replaced by omnipresent AI: "Looking to the future, the next big step will be for the concept of the 'device' to fade away."
Most of us know and use the positive effects of ICT, ranging from social networking to participating in a wider, even worldwide society, increasing opportunities for education, real-time information sharing and the free promulgation of ideas, a development enjoying unparalleled acceptance.
7.1 Personal Impact of ICT
In spite of the plethora of positive effects of the use of ICT and networks, there is also another side of these developments. As Marcus Aurelius wrote: “The brain takes in the long run the color of the thoughts”. We should not only enjoy the benefits but also monitor the negative effects of social networking, ranging from neuro-physical effects to loss of privacy.
Social networks (SNW) pretend to the individual that he or she has thousands of "friends." However, these supposed "friends" are no more than strangers. SNW have become the marketplace ("Bassena") of today, and watching the mobile phone a substitute for searching for rewards. Research has also proven a deteriorating influence on:
Storage capabilities in the working memory
Capability of multitasking
Judgments of order of magnitude (Columbia disaster, financial products, mm/inch)
Differentiation between important and unimportant information
This is not just speculation; it can be measured and is related to the volume of the amygdala and the size of the prefrontal cortex as it relates to the size of the social group. Is digital dementia on the horizon?
Healthy humans have a sound warning mechanism; you may call it a feeling of saturation. Man as "Informavoris rex", the successor of the carnivores, shows a kind of digital Darwinism based on the belief that the best informed survives and that SNW bring advantages: many think that the exchange of information brings additional value and facilitates their participation and social acceptance. Being afraid to miss something, the compulsion to consume and swallow every piece of information, without any inhibitors or a saturation point, leads to the loss of the capability to distinguish between important and unimportant information (Paris Hilton, Boris Becker) and thus of independent thinking. We do not even apply the rationale animals apply: animals do not use more energy than the prey will bring (lions do not hunt mice, but buffaloes), yet we hunt information without evaluation, and we do not know what is hiding behind information. We seem to follow an "all you can eat" to "all you can read" trend.
Network structure is not accidental; it follows "laws." Search engines rank a page highly if it is read and consumed by many people and creates much traffic (comparable, perhaps, to the idea that a species is important if it eats many different things and is eaten by many different species). The number of links, not the content, confers importance: not quality but quantity. Google's page ranking has implications known as the Matthew effect (Mt. 25:29).
In addition, the selection of content is shifting from established journalists, newspapers, and TV and radio stations to uncontrolled, secret search algorithms and private companies.
With ICT, many new legal issues arise, ranging from copyright to personal privacy. Major technological evolutions have triggered adequate legal frameworks: the industrial revolution led to labor law, motorization to traffic law, and the Digital Revolution to...? (see a special session).
7.2 Privacy
ICT should be used to create social mobility and productivity and to improve the lives of citizens. However, it has also added new dimensions of surveillance.
In the wake of the Snowden revelations, the question was repeatedly asked: why would governments wiretap their populations? One of the answers may be: it is very cheap. Many people have compared today's mass spying to the surveillance of East Germany's Stasi. An important difference is dimension. The Stasi employed one snitch for every 50 or 60 people it watched. Today, a million-strong workforce could keep six or seven billion people under surveillance, a ratio approaching 1:10,000. Thus, ICT has been responsible for a two-to-three-order-of-magnitude "productivity gain" in surveillance efficiency.
Many companies try to profit from diminishing privacy and lure people into giving away their privacy for short-term financial benefits, such as price differences in medical costs or insurance, or discounts for the disclosure of personal health data or living and driving habits.
7.3 ICT and Society