Since its commercialization in 1971, the microprocessor, the modern and integrated form of the central processing unit, has continuously broken records in integration, computing power, low cost and energy efficiency. Today, it is present in almost all electronic devices. Sound knowledge of its internal mechanisms and programming is essential for electronics and computer engineers seeking to understand and master computer operations and advanced programming concepts. This five-volume series focuses more particularly on the first two generations of microprocessors, those that handle 4- and 8-bit integers. Microprocessor 1, the first of the five volumes, presents the computation function, recalls the memory function and clarifies the concepts of computational models and architecture. A comprehensive approach is used, with examples drawn from current and past technologies that illustrate the theoretical concepts, making them accessible.
Page count: 259
Year of publication: 2020
Cover
Title page
Copyright
Quotation
Preface
Introduction
1 The Function of Computation
1.1. Beginnings
1.2. Classes of computers
1.3. Analog approach
1.4. Hardware-software relationship
1.5. Integration and its limits
1.6. Conclusion
2 The Function of Memory
2.1. Definition
2.2. Related concepts
2.3. Modeling
2.4. Classification
2.5. Conclusion
3 Computation Model and Architecture: Illustration with the von Neumann Approach
3.1. Basic concepts
3.2. The original von Neumann machine
3.3. Modern von Neumann machines
3.4. Variations on a theme
3.5. Instruction set architecture
3.6. Basic definitions for this book
3.7. Conclusion
Conclusion of Volume 1
Exercises
Acronyms
References
Index
End User License Agreement
Chapter 1
Figure 1.1. Ishango’s incised bones (source: unknown). For a color version of th...
Figure 1.2. A quipu (source: unknown). For a color version of this figure, see w...
Figure 1.3. Roman abacus (a) between the 2nd and 5th Centuries (© Inria/AMISA/Ph...
Figure 1.4. An example of a Pascaline at the Musée des Arts et Métiers (source: ...
Figure 1.5. Replica of the first Babbage difference machine3. For a color versio...
Figure 1.6. Babbage’s analytical machine (© Science Museum/Science & Society Pic...
Figure 1.7. One of the plans for Babbage’s analytical machine (© Science Museum/...
Figure 1.8. Falcon’s loom. For a color version of this figure, see www.iste.co.u...
Figure 1.9. Statistical machine (Hollerith 1887)
Figure 1.10. Evolution of concepts and technologies in the development of calcul...
Figure 1.11. A modern electromechanical relay, its equivalent electrical diagram...
Figure 1.12. An RCA 5965 type electronic tube and an IBM 701 electronic board (s...
Figure 1.13. A transistor and an electronic transistor board with seven inverter...
Figure 1.14. One of the 15 DIP integrated circuit CPU boards from a DEC PDP-11/2...
Figure 1.15. Evolution of computing performance over time (from (Bell 2008b))
Figure 1.16. PC motherboard (5150) from IBM (1981). For a color version of this ...
Figure 1.17. Axes of evolution over time of the price of classes (from (Bell 200...
Figure 1.18. The iconic Cray-1 supercomputer referred to as the “World’s most ex...
Figure 1.19. Evolution over time of supercomputer performance (according to Succ...
Figure 1.20. IBM System/360 mainframe computer
Figure 1.21. The IBM Application System (AS/400) family of minicomputers
Figure 1.22. Evolution over time of the prices of minicomputers (in thousands of...
Figure 1.23. An Octane graphics workstation from Silicon Graphics, Inc. (SGI). F...
Figure 1.24. The first microcomputers: the Micral N from R2E and the ALTAIR 8800...
Figure 1.25. Increasingly blurry boundaries. For a color version of this figure,...
Figure 1.26. Categories of computers (according to Bell (2008a))
Figure 1.27. The client–server model. For a color version of this figure, see ww...
Figure 1.28. A blade server. For a color version of this figure, see www.iste.co...
Figure 1.29. Example of a Beowulf server architecture. For a color version of th...
Figure 1.30. The Antikythera mechanism (left) and a reconstruction (right), by M...
Figure 1.31. The PACE 231R-V analog computer system from EAI (EAI 1964)
Figure 1.32. Layered view of software infrastructure
Figure 1.33. Historical timeline of the evolution of concepts for the families o...
Figure 1.34. Need for computing for multimedia applications (based on 2003 ITRS ...
Figure 1.35. Evolution of computer roles (from Nelson and Bell (1986))
Figure 1.36. Evolution over time of the number of transistors of an integrated c...
Figure 1.37. The fineness of etching of integrated circuits over the years (tech...
Figure 1.38. The energy wall (from Xanthopoulos 2009 on data from ISSCC). For a ...
Figure 1.39. Chip area achievable with progress in integration (according to (Ma...
Chapter 2
Figure 2.1. Vocabulary for binary formats
Figure 2.2. Memory access policies
Figure 2.3. Memory organization and addressing
Figure 2.4. Memory area
Figure 2.5. Memory hierarchy
Figure 2.6. Types of storage technologies in modern computers
Figure 2.7. Simplified classification of random access semiconductor memory
Figure 2.8. Detailed classification of permanent semiconductor memory
Chapter 3
Figure 3.1. Description of the computation of a factorial via dataflow graph
Figure 3.2. Positioning of the computation model in relation to the architecture
Figure 3.3. Multi-level architectural concepts
Figure 3.4. Computer design layers based on Blaauw and Brooks (1996)
Figure 3.5. Y-diagram (Gajski and Kuhn 1983)
Figure 3.6. Hierarchical structure of a computer
Figure 3.7. Different levels of abstraction of computer architecture based on Si...
Figure 3.8. Abstract and concrete hierarchical aspects of an architecture
Figure 3.9. The concept of computer architecture according to Sima et al. (1997)
Figure 3.10. Layered design of a computer
Figure 3.11. Positioning of architecture for four historic architectures (Corpor...
Figure 3.12. Architecture according to von Neumann (1945)
Figure 3.13. Von Neumann machine with its five functional units
Figure 3.14. Simplified functional organization of the IAS machine
Figure 3.15. Functional organization of a von Neumann machine
Figure 3.16. Functional organization of the IBM 701 (based on Frizzell (1953), m...
Figure 3.17. Infinite two-phase execution cycle
Figure 3.18. Modern view of a von Neumann computer
Figure 3.19. The three communications buses
Figure 3.20. The three functional units of a microprocessor
Figure 3.21. Internal circulation of information inside a microprocessor
Figure 3.22. Microarchitecture of bus-based microprocessors
Figure 3.23. Decoding of an instruction by a hardwired sequencer
Figure 3.24. Basic steps of the basic execution cycle
Figure 3.25. Execution cycle flowchart
Figure 3.26. Functional steps to execute an instruction
Figure 3.27. Execution cycle described with different forms of access
Figure 3.28. Information flow in a processor
Figure 3.29. Internal organization of a bus (control signals not shown)
Figure 3.30. Functional internal organization of Intel 8080A microprocessors wit...
Figure 3.31. Two variations of a double internal bus organization (CU and contro...
Figure 3.32. Internal functional organization of the Motorola MC6800 microproces...
Figure 3.33. Functional internal organization of the PACE microprocessor from NS...
Figure 3.34. Internal three-bus organization (CU and control signals not shown)
Figure 3.35. Pure Harvard architecture
Figure 3.36. Example of a modified Harvard architecture (x86 family)
Figure 3.37. Simplified architecture of a SPARC® family processor
Figure 3.38. The four basic approaches to ILP
Figure 3.39. Simplified classification of TLP architectures
Figure 3.40. Variation of characteristics over time (based on (Leavitt 2012))
Figure 3.41. Microphotograph of an Intel Sandy Bridge quad-core i7 (source: Inte...
Figure 3.42. Classes of instruction set architectures with examples
Figure 3.43. Zero-address stack architecture (from Nurmi (2007), modified)
Figure 3.44. One-address architecture, with accumulator (from (Nurmi 2007), modi...
Figure 3.45. Architecture with two (a) and three (b) register references (from N...
Figure 3.46. Memory-register architectures (from Nurmi (2007), modified)
Figure 3.47. Three-address architecture (from Nurmi (2007), modified)
Chapter 1
Table 1.1. Generations of calculating machines and computers based on component ...
Table 1.2. Reference computers for generations 1 and 2
Table 1.3. The main computers from this generation
Table 1.4. Primary computers in this generation
Table 1.5a. Classification of generations of integrated circuits according to va...
Table 1.5b. Classification of generations of integrated circuits according to va...
Table 1.5c. Classification of generations of integrated circuits according to va...
Table 1.6. Classification of generations of integrated circuits adopted
Table 1.7. Comparison of characteristics between computing resources (from Suri ...
Table 1.8a. Generations of computers and main features
Table 1.8b. Generations of computers and main features (continued)
Chapter 2
Table 2.1. Vocabulary describing a packet of bits (Darche 2012)
Table 2.2. New prefixes of measurement units for memory
Chapter 3
Table 3.1. Runtime models and computer categories (Treleaven and Lima 84, van de...
Table 3.2. Characteristics of the main models of computation (according to Sima ...
Table 3.3. Characteristics of the primary computation models (according to Sima ...
Table 3.4. Characteristics of the main computation models – continued (based on ...
Table 3.5. Characteristics of architecture classes (according to Hennessy and Pa...
Series Editor
Jean-Charles Pomerol
Philippe Darche
First published 2020 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd, 27-37 St George’s Road, London SW19 4EU, UK
www.iste.co.uk
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
www.wiley.com
© ISTE Ltd 2020
The rights of Philippe Darche to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2020938715
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-78630-563-3
Every advantage has its disadvantages and vice versa.
Shadokian philosophy1
1 The Shadoks are the main characters of an experimental cartoon produced by the Research Office of the Office de Radiodiffusion-Télévision Française (ORTF). The two-minute-long episodes of this daily cult series were broadcast on ORTF’s first channel (the only one at the time!) beginning in 1968. The birds were drawn simply and quickly using an experimental device called an animograph. The Shadoks are ridiculous, stupid and mean. Their intellectual capacities are completely unusual. For example, they are known for bouncing up and down, but it is not clear why! Their vocabulary consists of four words: GA, BU, ZO and MEU, which are also the four digits of their number system (base 4) and the notes of their four-tone musical scale. Their philosophy is made up of famous mottos such as the one cited in this book.
Computer systems (hardware and software) are becoming increasingly complex, embedded and transparent. It is therefore becoming difficult to delve into basic concepts in order to fully understand how they work. One approach is to take an interest in the history of the domain. A second is to soak up the technology by reading datasheets for electronic components and patents. Last but not least is reading research articles. I have tried to follow all three paths throughout the writing of this series of books, with the aim of explaining the hardware and software operation of the microprocessor, the modern and integrated form of the central unit.
This first work in a five-volume series deals with the general operating principles of the microprocessor. It focuses in particular on the first two generations of this programmable component, that is, those that handle integers in 4- and 8-bit formats. This deliberate choice of a historical angle of study allows us to return to its basic operation without the conceptual overload of current models. More advanced concepts, such as the mechanisms of virtual memory and cache memory or the different forms of parallelism, will be detailed in the following volumes with the presentation of subsequent generations, that is, 16-, 32- and 64-bit systems.
The first volume addresses the field’s introductory concepts. As in music theory, we cannot understand the advent of the microprocessor without talking about the history of computers and technologies, which is presented in the first chapter. The second chapter deals with storage, the second function of the computer present in the microprocessor. The concepts of computational models and computer architecture will be the subject of the final chapter.
The second volume is devoted to aspects of communication in digital systems from the point of view of buses. Their main characteristics are presented, as well as their communication, access arbitration, and transaction protocols, their interfaces and their electrical characteristics. A classification is proposed and the main buses are described.
The third volume deals with the hardware aspects of the microprocessor. It first details the component’s external interface and then its internal organization. It then presents the various commercial generations and certain specific families such as the Digital Signal Processor (DSP) and the microcontroller. The volume ends with a presentation of the datasheet.
The fourth volume deals with the software aspects of this component. The main characteristics of the Instruction Set Architecture (ISA) of a generic component are detailed. We then study the two ways to alter the execution flow with both classic and interrupt function call mechanisms.
The final volume presents the hardware and software aspects of the development chain for a digital system as well as the architectures of the first microcomputers in the historical perspective.
This book gradually transitions from conceptual to physical implementation. Pedagogy was my main concern, without neglecting formal aspects. Reading can take place on several levels. Each reader will be presented with introductory information before being asked to understand more difficult topics. Knowledge, with a few exceptions, has been presented linearly and as comprehensively as possible. Concrete examples drawn from former and current technologies illustrate the theoretical concepts.
When necessary, exercises complete the learning process by examining certain mechanisms in more depth. Each volume ends with bibliographic references including research articles, works and patents at the origin of the concepts and more recent ones reflecting the state of the art. These references allow the reader to find additional and more theoretical information. There is also a list of acronyms used and an index covering the entire work.
This series of books on computer architecture is the fruit of over 30 years of travels in the electronic, microelectronic and computer worlds. I hope that it will provide you with sufficient knowledge, both practical and theoretical, to then specialize in one of these fields. I wish you a pleasant stroll through these different worlds.
IMPORTANT NOTES.– As this book presents an introduction to the field of microprocessors, references to components from all periods are cited, as well as references to computers from generations before this component appeared.
Original company names have been used, although some have merged. This will allow readers to find specification sheets and original documentation for the mentioned integrated circuits on the Internet and to study them in relation to this work.
The concepts presented are based on the concepts studied in selected earlier works (Darche 2000, 2002, 2003, 2004, 2012), which I recommend reading beforehand.
Philippe DARCHE
June 2020
In this book, we will focus on the microprocessor, the integrated form of the central unit, introducing basic concepts from the perspective of sequential execution. This first volume, which presents the field’s introductory concepts, is organized into three chapters. The first two present the calculation and memory functions which, along with communication, are the computer’s three primary functions. The last chapter defines concepts concerning computational models and computer architectures.
As in music theory, we cannot discuss the microprocessor without positioning it in the context of the history of the computer, since this component is the integrated version of the central unit. Its internal mechanisms are the same as those of supercomputers, mainframe computers and minicomputers. Thanks to advances in microelectronics, additional functionality has been integrated with each generation in order to speed up internal operations. A computer1 is a hardware and software system responsible for the automatic processing of information, managed by a stored program. To accomplish this task, the computer’s essential function is the transformation of data using computation, but two other functions are also essential. Namely, these are storing and transferring information (i.e. communication). In some industrial fields, control is a fourth function. This chapter focuses on the requirements that led to the invention of tools and calculating machines to arrive at the modern version of the computer that we know today. The technological aspect is then addressed. Some chronological references are given. Then several classification criteria are proposed. The analog computer, which is then described, was an alternative to the digital version. Finally, the relationship between hardware and software and the evolution of integration and its limits are addressed.
NOTE.– This chapter does not attempt to replace a historical study. It gives only a few key dates and technical benchmarks to understand the technological evolution of the field.
Humans have needed to count since our earliest days (Ifrah 1994; Goldstein 1999). Fingers were undoubtedly used as the first natural counting tool, which later led to the use of the decimal number base. During archeological excavations, we have also found notched counting sticks, bones and pieces of wood. The incised bones of Ishango, dated between 23,000 and 25,000 years BC, provide an example (Figure 1.1).
Figure 1.1. Ishango’s incised bones (source: unknown). For a color version of this figure, see www.iste.co.uk/darche/microprocessor1.zip
Counting sticks were used during antiquity, as were pebbles, hence the word calculus, from the Latin calculus, which means “small pebble”. Knotted ropes were also used for counting, an example being the Incan quipu (Figure 1.2). This Incan technique (dating from ≈ 1200–1570) used a positional numbering system (cf. § 1.2 of Darche (2000)) in base 10 (Ascher 1983).
Figure 1.2. A quipu (source: unknown). For a color version of this figure, see www.iste.co.uk/darche/microprocessor1.zip
The need for fast and precise computation necessitated the use of computing instruments. Two exemplars are the abacus and the slide rule. The abacus is a planar calculating instrument, with examples including the Roman (Figure 1.3(a)) and the Chinese (Figure 1.3(b)) abacus. The latter makes it possible to calculate the four basic arithmetic operations by the movements of beads (or balls) strung on rods, which represent numbers.
Figure 1.3. Roman abacus (a) between the 2nd and 5th Centuries (© Inria/AMISA/Photo J.-M. Ramès); Chinese abacus (b). For a color version of this figure, see www.iste.co.uk/darche/microprocessor1.zip
The 17th Century saw the introduction of mechanical computing machines, and the beginning of the history of computers is generally dated from their appearance. They met the need to calculate tables of numbers systematically, reliably and quickly. These machines naturally used the decimal base. The most famous is undoubtedly the adding machine called the Pascaline (1642), named after its inventor, the philosopher and scientist Blaise Pascal (1623–1662). Numbers were entered using numbered wheels (Figure 1.4). The result was visible through the upper slits. Complementation made it possible to carry out subtraction (cf. exercise E1.1). However, the first description of a four-operation machine was that of Wilhelm Schickard (1592–1635), which appeared in a letter from the inventor to Johannes Kepler in 1623 (Aspray 1990). The end of the 17th Century and the following one were fruitful in terms of adding machines. Consider, for example, the machines of Morland (1666), Perrault (1675), Grillet (1678), Poleni (1709), de Lépine (1725), Leupold (1727), Pereire (1750), Hahn (1770), Mahon (1777) and Müller (1784). A logical continuation of this trend was the multiplying machine of Gottfried Wilhelm Leibniz (1646–1716), designed in 1673 but whose implementation was delayed by the lack of mechanical manufacturing precision in the 17th Century. For more information on this technology, see the richly illustrated book by Marguin (1994) introducing the first mechanical calculating machines.
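The complement-based subtraction used on the Pascaline can be sketched in a few lines. This is an illustrative reconstruction of the arithmetic idea, not of the machine’s actual mechanism: since the wheels can only add, a subtraction a − b (with a > b ≥ 0) is performed by adding the nine’s complement of b and folding the overflow carry back in.

```python
def nines_complement(n, digits):
    """Nine's complement of n on a fixed number of decimal wheels."""
    return int("".join(str(9 - int(d)) for d in str(n).zfill(digits)))

def pascaline_subtract(a, b, digits=6):
    """Compute a - b (a > b >= 0) using additions only.

    Adding the nine's complement of b produces an overflow carry,
    which is dropped and added back as +1 (end-around carry).
    """
    total = a + nines_complement(b, digits)
    if total >= 10 ** digits:          # overflow carry appears
        total = total - 10 ** digits + 1
    return total
```

For example, with six wheels, 123 − 45 becomes 123 + 999954 = 1000077; dropping the carry and adding 1 gives 78.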
Figure 1.4. An example of a Pascaline at the Musée des Arts et Métiers (source: David Monniaux/Wikipedia2). For a color version of this figure, see www.iste.co.uk/darche/microprocessor1.zip
The mathematician Charles Babbage (1791–1871) marked the 19th Century a posteriori with two machines: the Difference Engine and the Analytical Engine.
The first machine was intended for the automatic computation of polynomial functions with printed results in order to build trigonometric and logarithm tables for the preparation of astronomical tables useful for navigation. At the time, logarithm tables were expensive, cumbersome and often out of print (Campbell-Kelly 1987, 1988, Swade 2001). They were calculated by hand, a tedious method that was the source of many errors. We can cite as an example those of De Prony (1825) for assessment, which was studied among others by Grattan-Guinness (1990), of which Babbage was aware. This machine reportedly allowed the successive values of a polynomial function to be calculated by Newton’s finite difference method (see, for example, Bromley (1987) and Swade (1993)). Figure 1.5 presents a prototype, with all the details of this construction given in Swade (2005). It was never produced during his lifetime because of the enormous cost of manufacturing the mechanics. It was not until May 1991 that the second model, called the “difference machine no. 2”, was implemented at the London Science Museum where it was also exhibited (Swade 1993).
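Newton’s finite difference method, which the Difference Engine mechanized, reduces the tabulation of a polynomial to repeated additions. The following sketch is an illustration of the arithmetic, not a model of the engine itself: a degree-n polynomial is tabulated from its n+1 initial differences using additions only.

```python
def tabulate(initial_diffs, count):
    """Tabulate a polynomial from its initial finite differences.

    initial_diffs: [f(0), first difference, second difference, ...]
    Returns the first `count` values f(0), f(1), ... using only
    additions, as the Difference Engine did.
    """
    diffs = list(initial_diffs)
    values = []
    for _ in range(count):
        values.append(diffs[0])
        # each column absorbs the next higher-order difference
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return values
```

For f(x) = x², the initial differences are 0, 1 and 2 (the second difference of a quadratic is constant), and tabulate([0, 1, 2], 5) yields the squares 0, 1, 4, 9, 16.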
Figure 1.5. Replica of the first Babbage difference machine3. For a color version of this figure, see www.iste.co.uk/darche/microprocessor1.zip
The second machine (Figure 1.6) could compute the four basic arithmetic operations.
Figure 1.6. Babbage’s analytical machine (© Science Museum/Science & Society Picture Library). For a color version of this figure, see www.iste.co.uk/darche/microprocessor1.zip
It introduces the basic architecture of a computer and its programming (Hartree 1948). Indeed, as illustrated in Figure 1.7, it was composed of a mill, which played the role of the modern Central Processing Unit (CPU), and a store, which played the role of main storage. It also implemented the notion of registers (major axes) and data buses (transfer paths). Integers were represented internally in base 10 using Sign-Magnitude (or Sign and Magnitude) Representation (SMR, cf. § 5.2 in Darche (2000)). Extensive details of its operation are given in Bromley (1982). For the same technological and financial reasons previously mentioned, its construction was never completed.
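Sign-magnitude representation, as in the Analytical Engine’s decimal store, keeps the sign and the absolute value separately. A minimal sketch, in which the width and function names are illustrative assumptions:

```python
def smr_encode(n, digits=4):
    """Encode an integer as (sign, list of decimal magnitude digits).

    sign: 0 for positive, 1 for negative (sign-magnitude convention).
    """
    sign = 0 if n >= 0 else 1
    magnitude = str(abs(n)).zfill(digits)
    return sign, [int(d) for d in magnitude]

def smr_decode(sign, magnitude_digits):
    """Rebuild the signed integer from sign and magnitude digits."""
    value = int("".join(map(str, magnitude_digits)))
    return -value if sign else value
```

Note that this representation admits two encodings of zero (+0 and −0), a classic drawback of sign-magnitude schemes.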
Figure 1.7. One of the plans for Babbage’s analytical machine (© Science Museum/Science & Society Picture Library). For a color version of this figure, see www.iste.co.uk/darche/microprocessor1.zip
To program the machine, Babbage proposed the punched card. The latter had been invented by Basile Bouchon in 1725 for the weaving industry, in the form of a strip of perforated paper. Jean-Baptiste Falcon improved on it by transforming this strip into a string of punched cards linked together by cords. These cards made it possible to store a weaving pattern (Figure 1.8). This principle was further improved and made truly usable by Joseph Marie Jacquard with his famous loom (cf. Cass (2005) for a notice by J. M. Jacquard from 1809). Essinger (2004) tells the history of this machine. It was not the only programmable machine of the time: the music box with pegged cylinder was another form. In Babbage’s machine, program instructions and data were entered separately using two decks of cards. Babbage had a collaborator, Ada Lovelace, who is considered the first programmer in history, having written a Bernoulli number algorithm for this machine (reproduced in Kim and Toole (1999)). However, the influence of his ideas on the design of modern computers should not lead us to conclude that Charles Babbage is the source of the modern computer (Metropolis and Worlton 1980).
Figure 1.8. Falcon’s loom. For a color version of this figure, see www.iste.co.uk/darche/microprocessor1.zip
The history of the modern computer can also be traced back to the 1880s with the invention of mechanography for the United States Census Bureau (Ceruzzi 2013). Herman Hollerith took up the idea of the punched card and mechanized data processing to calculate statistics (Hollerith 1884a, 1884b, 1887). Figure 1.9 shows his statistical machine, composed of a hole punch called a press, a tabulator that read and counted using electromechanical counters, and a sorter called a sorting box.
Figure 1.9. Statistical machine (Hollerith 1887)
As previously described and illustrated in Figure 1.10, the computer in its current form is the result of technological progress and advances in the mathematical fields, particularly in logic and arithmetic. Boole’s algebra offered a theoretical framework for the study of logic circuits (cf. § 1.3 of Darche (2002)). For example, the American researcher Claude Elwood Shannon illustrated the relationship between Boolean logic and switch and relay circuits in his famous article (Shannon 1938). Thus, a link was established between mathematical theory and manufacturing technology. A study by Shannon (1953) described the operation of 16 Boolean functions in two variables using 18 contacts, and was able to show that this number of contacts was minimal. The mathematical aspect of switching has been studied in particular by Hohn (1955). Technology played a major role because it had a direct impact on the feasibility of the implementation, the speed of computation, and the cost of the machine.
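Shannon’s count of 16 Boolean functions of two variables follows directly from the truth table: two inputs give four rows, and each row can independently output 0 or 1, so there are 2⁴ = 16 possible functions. A short enumeration sketch (the function name and row ordering are our own conventions):

```python
def two_variable_functions():
    """Enumerate all 16 Boolean functions of two variables.

    Each function is returned as its truth-table column
    [f(0,0), f(0,1), f(1,0), f(1,1)]; the 4-bit index `table`
    supplies one output bit per row.
    """
    functions = []
    for table in range(16):
        functions.append([(table >> (2 * a + b)) & 1
                          for a in (0, 1) for b in (0, 1)])
    return functions
```

In this encoding, AND appears as the column [0, 0, 0, 1] and OR as [0, 1, 1, 1].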
Figure 1.10. Evolution of concepts and technologies in the development of calculating machines (from Marguin (1994))
There are several possible ways to classify computers. One is primarily related to the hardware technology available at the time, as presented in Tanenbaum (2005). For this reason, we will speak of technological generations. The transition from one generation to the next is achieved by a change in technology or by a major advance. Table 1.1 presents these generations in a simplified manner.4
Table 1.1. Generations of calculating machines and computers based on component technologies
Technological generations | Dates
0 – mechanical | 1642–1936
1 – electromechanical | 1937–1945
2 – tube | 1946–1955
3 – transistor | 1956–1965
4 – integrated circuits SSI – MSI – LSI | 1966–1980
5 – integrated circuit VLSI | 1981–1999
6 – integrated circuit GSI – SoC – MEMS | 2000 to present
Generation 0 (1642–1936) consisted of mechanical computers, as presented in the previous section. Mechanography appeared at the end of the 19th Century to respond in particular to the need for automatic processing of statistical data, initially for the census of the American population. Its technology naturally evolved towards electromechanics. A historical examination of mechanography in relation to “modern” computing was conducted by Rochain (2016).
Generation 1 was that of the electromechanical computer (1937–1945). The basic component was the electromechanical relay (Figure 1.11(a)), comprised of a coil that moves one or more electrical contacts on command (i.e. when it is electrically powered). Figure 1.11(b) presents its equivalent electrical diagram. Keller (1962) describes the technology of the time. The implementation of a logical operator in this technology was described in § 2.1.2 of Darche (2004). In 1937, George Stibitz, a mathematician at Bell Labs, built the first binary circuit, an adding machine called the Model K (K for Kitchen), in electromechanical technology (Figure 1.11(c)). One of the pioneers of this generation in Europe was the German Konrad Zuse. His first machine, the Z1, begun in 1936 and completed two years later, was a mechanical computer powered by an electric motor. The first electromechanical relay computer, the Z2, was completed in 1939. It was built using surplus telephone relays. The Z3 (1,800 relays for storage5, 600 for the computing unit and 200 for the control unit, according to Ceruzzi (2003)), whose construction began in 1938 and ended in 1941, used base-2 floating-point number representation. The Z4, started in 1942, was completed in 1945. Rojas (1997) describes the architecture of the Z1 and the Z3, and Speiser (1980) that of the Z4. In the United States, Harvard’s Mark I, also called the Automatic Sequence Controlled Calculator (ASCC) by IBM, was built by Howard Aiken between 1939 and 1944. Bell Laboratories built six models of computers using this technology between 1939 and 1950 for military and scientific use (Andrews 1982). Andrews and Bode (1950) describe the use of Bell Laboratories’ Model V. The calculation speed of these computers is estimated at 10 operations/s.
Figure 1.11. A modern electromechanical relay, its equivalent electrical diagram, and the Model K adder. For a color version of this figure, see www.iste.co.uk/darche/microprocessor1.zip
The subsequent generations used electronic components, beginning in the 1946–1955 period with the electronic tube, also known as the vacuum tube (thermionic valve). This component has rectification, amplification, and switching functions. It was the latter that was exploited in this case. As shown in Figure 1.12(a)
