Ultra Low Power Electronics and Adiabatic Solutions

Hervé Fanet
Description

The improvement of energy efficiency in electronics and computing systems is currently central to information and communication technology design; low-cost cooling, autonomous portable systems and functioning on recovered energy all need to be continuously improved to allow modern technology to compute more while consuming less. This book presents the basic principles of the origins and limits of heat dissipation in electronic systems.

Mechanisms of energy dissipation, the physical foundations for understanding CMOS components and sophisticated optimization techniques are explored in the first half of the book, before an introduction to reversible and quantum computing. Adiabatic computing and nano-relay technology are then explored as new solutions to achieving improvements in heat creation and energy consumption, particularly in renewed consideration of circuit architecture and component technology.

Concepts inspired by recent research into energy efficiency are brought together in this book, providing an introduction to new approaches and technologies which are required to keep pace with the rapid evolution of electronics.

Number of pages: 340

Year of publication: 2016




Table of Contents

Cover

Title

Copyright

Introduction

1. Dissipation Sources in Electronic Circuits

1.1. Brief description of logic types

1.2. Origins of heat dissipation in circuits

2. Thermodynamics and Information Theory

2.1. Recalling the basics: entropy and information

2.2. Presenting Landauer’s principle

2.3. Adiabaticity and reversibility

3. Transistor Models in CMOS Technology

3.1. Reminder on semiconductor properties

3.2. Long- and short-channel static models

3.3. Dynamic transistor models

4. Practical and Theoretical Limits of CMOS Technology

4.1. Speed–dissipation trade-off and limits of CMOS technology

4.2. Sub-threshold regimes

4.3. Practical and theoretical limits in CMOS technology

5. Very Low Consumption at System Level

5.1. The evolution of power management technologies

5.2. Sub-threshold integrated circuits

5.3. Near-threshold circuits

5.4. Chip interconnect and networks

6. Reversible Computing and Quantum Computing

6.1. The basis for reversible computing

6.2. A few elements for synthesizing a function

6.3. Reversible computing and quantum computing

7. Quasi-adiabatic CMOS Circuits

7.1. Adiabatic logic gates in CMOS

7.2. Calculation of dissipation in an adiabatic circuit

7.3. Energy-recovery supplies and their contribution to dissipation

7.4. Adiabatic arithmetic architecture

8. Micro-relay Based Technology

8.1. The physics of micro-relays

8.2. Calculation of dissipation in a micro-relay based circuit

Bibliography

Index

End User License Agreement

List of Illustrations

1. Dissipation Sources in Electronic Circuits

Figure 1.1.

Boolean functions with one variable

Figure 1.2.

Boolean functions with two variables

Figure 1.3.

Example of a three-variable function

Figure 1.4.

Boolean material architecture

Figure 1.5.

Counter, a non-combinational function

Figure 1.6.

Basic systems in sequential logic

Figure 1.7.

The functioning of the traffic lights model

Figure 1.8.

Logical diagram of traffic lights

Figure 1.9.

General diagram customized for the traffic lights model

Figure 1.10.

Sequential synchronous circuit

Figure 1.11.

Latches and flip-flops

Figure 1.12.

Pipelined and non-pipelined architecture types

Figure 1.13.

Sequential pipelined system

Figure 1.14.

Using switches to perform AND and OR functions

Figure 1.15.

The complete AND function

Figure 1.16.

Complementary logic

Figure 1.17.

NMOS and PMOS transistors

Figure 1.18.

NAND using CMOS technology

Figure 1.19.

CMOS circuit and output capacitor

Figure 1.20.

Simplified electric diagram of a CMOS gate

Figure 1.21.

AND gate in pass-transistor technology

Figure 1.22.

Differential pass-transistor logic

Figure 1.23.

Transmission gate

Figure 1.24.

Transmission gate functioning

Figure 1.25.

Exclusive OR in Pass-Gate logic

Figure 1.26.

NAND function in dynamic logic

Figure 1.27.

Dynamic logic gate

Figure 1.28.

A diagram that is not functional

Figure 1.29.

DOMINO logic

Figure 1.30.

Dissipation in a two-port device

Figure 1.31.

RC circuit and heat dissipation

2. Thermodynamics and Information Theory

Figure 2.1.

Microscopic states

Figure 2.2.

System interaction with a thermostat and Boltzmann’s distribution

Figure 2.3.

Irreversible gate

Figure 2.4.

Two-state system

Figure 2.5.

Binary register based on unique atoms or molecules

Figure 2.6.

A factor two compression

Figure 2.7.

The paradox of Maxwell’s demon

Figure 2.8.

Verification of Landauer’s principle

Figure 2.9.

Dissipation in a logically reversible transformation

Figure 2.10.

Interconnect capacitance and “scaling”

Figure 2.11.

Capacitor charge

Figure 2.12.

Optimal and quasi-optimal solutions in a constant capacitance charge

Figure 2.13.

Adiabatic charge of a capacitor when leakage is present

Figure 2.14.

A logic gate with an adiabatic command

Figure 2.15.

The Bennett clocking principle

Figure 2.16.

Incomplete pipeline

Figure 2.17.

Operational adiabatic pipeline

Figure 2.18.

Quasi-adiabatic gate

Figure 2.19.

The reversible pipeline

3. Transistor Models in CMOS Technology

Figure 3.1.

Silicon-bands diagram

Figure 3.2.

Filling in the bands and the Fermi level

Figure 3.3.

Bands and the notion of holes

Figure 3.4.

Doped semiconductor

Figure 3.5.

Doped semiconductors

Figure 3.6.

Metal-oxide semiconductor structure

Figure 3.7.

Calculating the inversion charge

Figure 3.8.

The Lilienfeld patents

Figure 3.9.

NMOS transistor functioning

Figure 3.10.

Transistor in CMOS technology

Figure 3.11.

Calculating the concentrations in a transistor

Figure 3.12.

Transistor saturation

Figure 3.13.

Characteristic curves of a channel n transistor

Figure 3.14.

Quasi-static transistor model

Figure 3.15.

Small signals transistor model

4. Practical and Theoretical Limits of CMOS Technology

Figure 4.1.

Integrated circuits

Figure 4.2.

Inverter model and layout

Figure 4.3.

Sectional view of the inverter

Figure 4.4.

Interconnect and scaling

Figure 4.5.

Characteristic function of dissipation in CMOS technology

Figure 4.6.

Current in weak inversion

Figure 4.7.

Example of a logic gate

Figure 4.8.

Example of logic gates in a global architecture

Figure 4.9.

Lambert function

Figure 4.10.

Sub-threshold inverter

Figure 4.11.

Estimating the variability of the threshold voltage as a function of the technological node

Figure 4.12.

Planar transistor on an SOI substrate and a FinFET transistor

Figure 4.13.

Theoretical model of a transistor

5. Very Low Consumption at System Level

Figure 5.1.

Parallelism and active power

Figure 5.2.

Parallelization in a data path

Figure 5.3.

Predicting and reducing consumption

Figure 5.4.

Transistor chain and sub-threshold current

Figure 5.5.

MTCMOS architecture

Figure 5.6.

Classic SRAM architecture

Figure 5.7.

Eight-transistor SRAM cell

Figure 5.8.

Constrained optimum

Figure 5.9.

Examples of relative sensitivity depending on the energy (from MAR [MAR 10])

Figure 5.10.

Connections in an integrated circuit

Figure 5.11.

Links between gates

Figure 5.12.

Interconnect with repeaters

Figure 5.13.

Dissipated power in an adapted or unadapted link

6. Reversible Computing and Quantum Computing

Figure 6.1.

Reversible and irreversible gates

Figure 6.2.

Constructing a reversible gate with a width of 2

Figure 6.3.

Control gate

Figure 6.4.

Cascading two control gates

Figure 6.5.

The conventions of reversible logic for a control gate

Figure 6.6.

Control inverter

Figure 6.7.

Toffoli gate

Figure 6.8.

Feynman gate

Figure 6.9.

Fredkin gate

Figure 6.10.

Duplicating a signal and a fan-out

Figure 6.11.

Sylow cascade

Figure 6.12.

The “twin circuit”

Figure 6.13.

Synthesis of a reversible function

Figure 6.14.

The synthesis steps

Figure 6.15.

The reversible copy

Figure 6.16.

Controlled inverter embedding an irreversible function

Figure 6.17.

Example of a majority gate

Figure 6.18.

Truth table of a reversible adder

Figure 6.19.

Synthesis of a reversible binary adder

Figure 6.20.

A 4-bit reversible adder

Figure 6.21.

Inverter controlled by a single control bit

Figure 6.22.

Inverter controlled by two inputs

Figure 6.23.

The signals in reversible adiabatic logic

Figure 6.24.

Adiabatic command of a reversible circuit

Figure 6.25.

Quantum adder

7. Quasi-adiabatic CMOS Circuits

Figure 7.1.

Dissipation in a logic gate

Figure 7.2.

Logic pipeline

Figure 7.3.

NAND CMOS gate

Figure 7.4.

Non-adiabatic case

Figure 7.5.

“Bennett clocking”-type architecture

Figure 7.6.

The adiabatic pipeline (example of an AND gate at the input)

Figure 7.7.

CMOS architecture’s incompatibility with the adiabatic principle

Figure 7.8.

ECRL buffer/inverter

Figure 7.9.

Generic ECRL gate

Figure 7.10.

PFAL inverter

Figure 7.11.

General PFAL

Figure 7.12.

The 2N-2N2P (left) inverter and the DCPAL (right) inverter

Figure 7.13.

Comparison of different logic families [BHA 11]

Figure 7.14.

Phase 2 in PFAL

Figure 7.15.

Phase 3 in PFAL

Figure 7.16.

Phase 4 in PFAL

Figure 7.17.

Phase 1 in PFAL for the following event

Figure 7.18.

Energy optimum in adiabatic logic

Figure 7.19.

Sub-threshold adiabatic gate

Figure 7.20.

Role of supplies in energy recovery

Figure 7.21.

Capacitor-based energy recovery supply

Figure 7.22.

Output voltage formation

Figure 7.23.

Optimal number of steps in a capacitor-based generator

Figure 7.24.

Different solutions for energy recovery supplies

Figure 7.25.

Inductive energy recovery supply

Figure 7.26.

2N2P-type generator

Figure 7.27.

Classic logic and adiabatic logic

Figure 7.28.

Four-bit adiabatic adder

Figure 7.29.

Complex exclusive OR gate with N inputs

8. Micro-relay Based Technology

Figure 8.1.

Micro-relay with a suspended membrane (according to [KAM 11])

Figure 8.2.

Characteristic curve of a micro-relay

Figure 8.3.

Dynamic model of a nano-relay

Figure 8.4.

Movement of the mobile structure as a function of time [LEU 08]

Figure 8.5.

A device in the plane

Figure 8.6.

NEMIAC project’s particular design

Figure 8.7.

Model for optimizing nano-relays

Figure 8.8.

Micro-relay based adiabatic gate

Figure 8.9.

Circuit without non-adiabatic dissipation

Figure 8.10.

Circuit with non-adiabatic dissipation

Figure 8.11.

OR gate with bistable micro-relays

Figure 8.12.

“Dual-rail” adiabatic gate

Figure 8.13.

Comparison of field-effect transistor-based adiabatic solutions with micro-relay based adiabatic solutions

List of Tables

1. Dissipation Sources in Electronic Circuits

Table 1.1.

Table of transition between states

Table 1.2.

Pipeline functioning

Table 1.3.

Activity factor for the common gates

2. Thermodynamics and Information Theory

Table 2.1.

Energy efficiency of the optimal solution

3. Transistor Models in CMOS Technology

Table 3.1.

Contact potentials for common metals

Table 3.2.

Transistor model parameters

4. Practical and Theoretical Limits of CMOS Technology

Table 4.1.

Static energy and dynamic energy

5. Very Low Consumption at System Level

Table 5.1.

Static current and inputs

6. Reversible Computing and Quantum Computing

Table 6.1.

The truth table

Table 6.2.

The truth table

8. Micro-relay Based Technology

Table 8.1.

The main characteristics of the devices


Series Editor

Robert Baptist

Ultra Low Power Electronics and Adiabatic Solutions

Hervé Fanet

First published 2016 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK

www.iste.co.uk

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA

www.wiley.com

© ISTE Ltd 2016

The rights of Hervé Fanet to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2016941915

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-84821-738-6

Introduction

Energy efficiency is currently at the center of the evolution of electronics and computing. In fact, the objective in all three layers of information and communication technologies (i.e. high-performance servers and computers, mobile systems and connected objects) is to improve energy efficiency, meaning to compute more while consuming less. The cost of cooling computing centers needs to be restricted, the autonomy of portable systems needs to be increased and autonomous objects capable of functioning only on the energy that they recover need to be invented.

In these three cases, the power levels are very different: kilowatts for servers, watts for mobile systems and microwatts for connected objects. However, the mechanism that creates heat is the same in all three cases: the Joule effect. Two sources of dissipation can be identified: the first is the energy dissipated when charging and discharging the capacitances of active electronic circuits, and the second is the energy dissipated by the currents that flow permanently from the supply source to the ground through transistors in the sub-threshold regime. It is therefore necessary to fully understand these two phenomena in order to identify the causes of heat creation and the possible paths for improvement. This is the objective of the first two chapters, which analyze the logic families and bring out the links between heat creation and the loss of information in logical operations. Chapter 3 provides the physical foundations necessary for understanding how the CMOS components in current use work.

Electronics has been confronted with this crucial problem since the 2000s because, contrary to initial predictions, it is no longer possible to pair the decrease in transistor size with a decrease in supply voltage. The density of power dissipated in an integrated circuit therefore keeps growing. Chapters 4 and 5 describe increasingly sophisticated optimization techniques, which restrict heat creation and energy consumption to some extent, but no such solution seems capable of providing the long-awaited benefits. The analysis carried out in this book shows that, for current circuit architectures, the limit is intrinsic to semiconductor-based technologies, and that significant improvements can only be made by calling the circuit architecture and the component technology into question. To achieve this, new solutions (adiabatic computing and nano-relay technology) are proposed and described in Chapters 7 and 8. Chapter 6 is dedicated to reversible computing, considered by some to be the only route to extremely low dissipation levels. It is also an introduction to quantum computing, which can be considered an extension of reversible computing.

In summary, this book is an introduction to possible new directions in the evolution of electronic and computing systems: directions that will allow these systems to move beyond concepts dictated mainly by the pursuit of speed (which drove the evolution of electronics from the 1950s to the 2000s) towards concepts inspired by the pursuit of excellent energy efficiency.

1. Dissipation Sources in Electronic Circuits

This chapter explains the origins of heat creation in electronic circuits and details its two fundamental components: dynamic power and static power. Dynamic power is the heat produced by the charging and discharging of circuit capacitances when logical states change, whereas static power is the heat dissipated by the Joule effect through leakage currents and sub-threshold currents circulating within the circuit’s components. To fully understand these mechanisms, we need to analyze the different types of logic circuit structures, and a whole section is dedicated to this subject. Logic based on complementary metal-oxide-semiconductor (CMOS) technology, used in more than 90% of current integrated circuits, is explained in detail. The general principles put forward in this chapter give the reader a fairly simple global view of the different aspects of heat production in circuits, and allow them to understand the most important developments in semiconductor-based technology for reducing consumption. The more theoretical aspects are discussed in Chapter 2, and the components of CMOS technology are discussed in more detail in Chapter 3.
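The dynamic and static components just described are commonly quantified with the standard switching-power expressions; a minimal sketch (the ½·α·C·V²·f form is the usual textbook relation, anticipated here, with an activity factor α of the kind discussed later in the chapter):

```python
def dynamic_power(alpha, c_load, v_dd, f_clock):
    """Average dynamic power: alpha is the activity factor,
    c_load the switched capacitance (F), v_dd the supply voltage (V),
    f_clock the clock frequency (Hz)."""
    return 0.5 * alpha * c_load * v_dd ** 2 * f_clock

def static_power(i_leak, v_dd):
    """Static power dissipated by a leakage current i_leak (A)
    flowing permanently under the supply voltage v_dd (V)."""
    return i_leak * v_dd

# Example: 1 pF switched at 1 GHz under 1 V, activity factor 0.1.
p_dyn = dynamic_power(0.1, 1e-12, 1.0, 1e9)   # 50 microwatts
```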

1.1. Brief description of logic types

1.1.1. Boolean logic

In computing, audiovisual and control-command systems, data is binary-coded. This is true not only for numbers, but also for letters and, by extension, sounds and images. Information processing systems perform operations from the simplest (addition) to the most complex (Fourier transformation), all by manipulating two symbols traditionally called “0” and “1”. In control-command systems, decisions are taken according to the value of logical functions, for example the value of “AND” when two simultaneous events occur. The mathematical model used in each case is Boolean algebra, invented by the English mathematician George Boole.

The simplest function is that of a single variable f(A). Four different functions can be defined according to the possible values of a variable A, as shown in Figure 1.1.

Figure 1.1. Boolean functions with one variable

The third function is a copy of the variable, and the fourth is the inverter function, written as $\overline{A}$.

For two input variables, the number of possible functions is much larger: there are 2⁴ = 16 possible functions, as shown in Figure 1.2.

Figure 1.2. Boolean functions with two variables

The functions f2, f7 and f8 are very well known in electronics. They are, respectively, the AND, the exclusive OR and the OR functions. They are written as:

– AND function:

A.B

– Exclusive OR function:

A ⊕ B

– OR function:

A+B
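The sixteen two-variable functions can be enumerated by identifying each function with its four-entry truth table; a sketch:

```python
from itertools import product

inputs = list(product([0, 1], repeat=2))   # the four (A, B) input pairs

# A two-variable Boolean function is fully described by its 4-entry
# truth table, hence 2**4 = 16 distinct functions exist.
all_tables = set(product([0, 1], repeat=4))
assert len(all_tables) == 16

def table_of(f):
    """Truth table of f as a tuple, in the order of `inputs`."""
    return tuple(f(a, b) for a, b in inputs)

# AND, exclusive OR and OR are three of the sixteen.
assert table_of(lambda a, b: a & b) in all_tables
assert table_of(lambda a, b: a ^ b) in all_tables
assert table_of(lambda a, b: a | b) in all_tables
```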

The symbols suggest a certain analogy with decimal calculation. For example, 1 + 0 = 1, as in decimal arithmetic. However, in Boolean algebra, 1 + 1 = 1.

The point, which is the Boolean AND symbol, is often omitted to simplify notation.

All of the following are easily executed using Boolean functions: binary arithmetic functions (used in current processors) and classical operations (addition, unsigned or floating-point multiplication). For example, for the addition of bit i, the sum $S_i$ and the carry $C_i$ are written in Boolean algebra as:

[1.1] $S_i = A_i \oplus B_i \oplus C_{i-1}$
[1.2] $C_i = A_i.B_i + C_{i-1}.(A_i \oplus B_i)$
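Equations [1.1] and [1.2] are those of the classic one-bit full adder; a sketch in Python, chaining full adders into a ripple-carry adder:

```python
def full_adder(a_i, b_i, c_in):
    """One-bit full adder: returns (sum bit S_i, carry-out C_i)."""
    s = a_i ^ b_i ^ c_in
    c_out = (a_i & b_i) | (c_in & (a_i ^ b_i))
    return s, c_out

def add_binary(a_bits, b_bits):
    """Ripple-carry addition of two little-endian bit lists."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 3 (bits 1,1) + 1 (bits 1,0), little-endian, gives 4 (bits 0,0,1).
assert add_binary([1, 1], [1, 0]) == [0, 0, 1]
```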

We can now transpose these functions to the hardware level. The two states “0” and “1” are represented by physical quantities: two electrical voltages, for example. When the voltage can take the two values 0 and VDD, the same symbols “0” and “1” are assigned interchangeably to the logical and physical values. We also speak of the two states as the “on” state and the “off” state. Note that the logical states can be materialized by physical quantities other than electrical voltage: for instance, magnetic moment or polarization. When a logical function is materialized, it is called a logic gate.

Boolean algebra comprises a large number of rules that can be verified using the truth tables of the functions in question. These rules allow us to simplify logical expressions. It is no longer indispensable to memorize these techniques, as they are now integrated into synthesis tools. Let us make an exception for De Morgan’s rules, which are often useful for understanding how logic gates work:

[1.3] $\overline{A.B} = \overline{A} + \overline{B}$
[1.4] $\overline{A+B} = \overline{A}.\overline{B}$

The elementary demonstration is based on the Truth Tables.
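Since the demonstration rests on truth tables, De Morgan’s rules can be checked exhaustively; a minimal sketch:

```python
from itertools import product

# De Morgan's rules verified over all input combinations:
# not(A and B) == (not A) or (not B)
# not(A or B)  == (not A) and (not B)
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))
```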

The two Boolean function decompositions are called “Minterm” and “Maxterm”, and are directly deduced from the truth tables. The simplest way to understand them is through the example of Figure 1.3, which generalizes readily.

Figure 1.3. Example of a three-variable function

The Minterm decomposition is obtained by identifying the input values corresponding to a value of “1”:

The Maxterm decomposition is obtained by reversing this to identify the input values corresponding to the value of “0” as an output:
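As the truth table of Figure 1.3 is not reproduced here, the sketch below reads a Minterm decomposition off an arbitrary three-variable truth table of our own choosing:

```python
from itertools import product

# Example truth table for f(A, B, C): ours, chosen for illustration only.
table = {bits: int(bits in {(0, 0, 1), (1, 1, 0), (1, 1, 1)})
         for bits in product([0, 1], repeat=3)}

# Minterm form: OR of the product terms where f = 1. Each minterm takes
# the variable when its bit is 1, and its complement when the bit is 0.
def f_minterm(a, b, c):
    return int(any(all(v if lit else 1 - v
                       for v, lit in zip((a, b, c), bits))
                   for bits, out in table.items() if out == 1))

# The minterm reconstruction reproduces the original table exactly.
for bits, out in table.items():
    assert f_minterm(*bits) == out
```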

The Reed–Muller decomposition is another decomposition, somewhat analogous to a Taylor series expansion. It is based on the two identities given below:

$\overline{A} = 1 \oplus A$
$A + B = A \oplus B \oplus A.B$

Starting with the Minterm decomposition, it is possible to obtain an expression that only contains exclusive OR functions. Taking the example of the function given earlier, we obtain:

In this case, after simplifying, we obtain:

Generally, the Reed–Muller decomposition presents the function as an exclusive-OR sum of products of the input variables:

[1.5] $f = a_0 \oplus a_1 x_1 \oplus a_2 x_2 \oplus \dots \oplus a_{12} x_1 x_2 \oplus \dots$

The coefficients $a_i$ are equal to 0 or 1.
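The coefficients of the Reed–Muller form can be computed from the truth table by a repeated XOR (“butterfly”) transform; a sketch, assuming the positive-polarity form in which all variables appear uncomplemented:

```python
def reed_muller_coeffs(truth):
    """Positive-polarity Reed-Muller coefficients from a truth table
    given in binary-counting order (length must be a power of two)."""
    coeffs = list(truth)
    n = len(coeffs)
    step = 1
    while step < n:
        for i in range(n):
            if i & step:                       # pair (i - step, i)
                coeffs[i] ^= coeffs[i - step]  # in-place XOR transform
        step <<= 1
    return coeffs

# AND of two variables, table 0,0,0,1 -> only the x1.x2 coefficient is 1.
assert reed_muller_coeffs([0, 0, 0, 1]) == [0, 0, 0, 1]
# Exclusive OR, table 0,1,1,0 -> f = x1 xor x2 (a1 = a2 = 1).
assert reed_muller_coeffs([0, 1, 1, 0]) == [0, 1, 1, 0]
```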

To finish this introduction to Boolean algebra, let us introduce the notion of the partial derivative of a Boolean function:

[1.6] $\dfrac{\partial f}{\partial x_i} = f(x_i = 1) \oplus f(x_i = 0)$

This last notion, however, is not often used in the study of logical functions.
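The Boolean difference defined in [1.6] can be evaluated by cofactoring; a sketch:

```python
def boolean_derivative(f, i):
    """Boolean difference of f with respect to its i-th variable:
    df/dx_i = f(..., x_i = 1, ...) XOR f(..., x_i = 0, ...).
    Returns the derivative as a function of the remaining variables."""
    def df(*rest):
        rest = list(rest)
        hi = f(*(rest[:i] + [1] + rest[i:]))
        lo = f(*(rest[:i] + [0] + rest[i:]))
        return hi ^ lo
    return df

# d(A.B)/dA = B: the output is sensitive to A exactly when B = 1.
d_and = boolean_derivative(lambda a, b: a & b, 0)
assert d_and(0) == 0 and d_and(1) == 1
```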

The decomposition of Boolean functions allows logical operations to materialize. Let us go back to the example given previously:

The basic “AND”, “OR” and inverter functions are assumed to be carried out by hardware blocks, which can be combined in any way. In practice, however, this property is not always guaranteed: most often, an output can only drive a limited number of inputs. This is what is meant by “fan-out”. The design of this simple function (Figure 1.4) shows the relative complexity of the interconnect. This observation will be discussed in detail in the following section.

Knowing how many types of gates are necessary to carry out a particular function is a legitimate concern. The example given shows that the inverter, AND and OR functions are sufficient. In fact, we can dispense with either the AND or the OR function by using De Morgan’s laws. The inverter and the AND gate form a complete basis, from which all possible functions can be generated. The same holds for the inverter and the OR gate. Gates with more than two inputs can easily be built from two-input gates, but it is more efficient to implement them directly if the technology permits.

Figure 1.4. Boolean material architecture

To finish this brief introduction, we note that the NAND gate, that is to say the inverted AND, is sufficient on its own to generate all possible functions: if one of its inputs is permanently held at level “1”, it acts as an inverter.
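This universality of the NAND gate is easy to exhibit; a sketch building the inverter, AND and OR from NAND alone:

```python
def nand(a, b):
    return 1 - (a & b)

# Tying one input to "1" turns NAND into an inverter...
def inv(a):
    return nand(a, 1)

# ...from which AND and OR follow (OR via De Morgan's laws).
def and_(a, b):
    return inv(nand(a, b))

def or_(a, b):
    return nand(inv(a), inv(b))

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
    assert inv(a) == 1 - a
```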

Logical function synthesis is a technique that will not be explained in detail in this book. The aim of synthesis is to create a circuit using as few gates as possible while minimizing the delay between inputs and outputs.

1.1.2. Combinational and sequential logic

Combinational logic gates create a function that depends only on input logical variables. The outputs change only when the inputs change. However, a large number of functions cannot work using only combinational logic gates. They are called sequential functions.

For example, a counter is not a combinational Boolean function. If the input is, for example, a series of impulses, as shown in Figure 1.5, it is possible to associate the two electrical input levels with the two values of a Boolean variable. A simple Boolean function of this variable cannot give the number of impulses received during a given period. The function needs a memory of past events, so that it can update this count at each variation of the logical input variable.

Figure 1.5. Counter, a non-combinational function

It is quite easy to show that the majority of sequential systems can be conceived as a group of logic blocks, whose structure is shown in Figure 1.6.

In this somewhat abstract diagram, the inputs are Boolean variables. The outputs are also Boolean, but of two different types: the first are used as inputs of other sequential systems, while the second are fed back as data inputs to the memory elements contained in the system itself. This data allows us to create the memory function necessary for sequential functioning. In the case of a counter, it memorizes the number of impulses already counted at a given time. Readers familiar with the concept of finite state machines will easily adjust to sequential logic.

Figure 1.6. Basic systems in sequential logic

A more complex case is given as an example to illustrate the concept of sequential logic: the control of traffic lights. A main road intersects with a side street. A traffic light system is put in place with the following principles: the light is red on the side street, but when a vehicle is detected (event D), which is a rare event, the light on the main road turns orange for a brief period and then red, before going back to green after a longer period. The lights of the side street are activated in a complementary way. Figure 1.7 illustrates the different possible cases that are compatible with reliable and fluid traffic:

– State 1: green light for the main road and red light for the side street

– State 2: orange light for the main road and red light for the side street

– State 3: red light for the main road and orange light for the side street

– State 4: red light for the main road and green light for the side street

These four states are the only possible ones and are coded using two bits. They allow us to control the traffic lights.

The arrows indicate that the lights are conditionally changing from one state to another. When the state does not change, the arrow leaves the state and comes back to it. The transitional conditions are achieved using Boolean functions.

These transitions are shown in Figure 1.7, but hereafter we will only describe in detail the two states at the top right of the figure, as this is the most probable situation. When no vehicle is detected on the side street or when the long time period has not elapsed, the lights stay in the same state. The words “and” and “or” are to be understood in the logical sense. When the long time period has elapsed and a vehicle is detected on the side street, the light on the main road turns orange. In the same way, while the short time period has not elapsed, the light on the main road stays orange and the light on the side street stays red. When the short time period has elapsed, the light on the main road turns red and the light on the side street turns green. The other changes in state can likewise be explained using basic logic.

This diagram can be transformed into a more mathematical graph by replacing the phrasing of the conditions with rigorous logical conditions: vehicle detected (D), long period of time lapsed (LT) and short period of time lapsed (ST). Thus, we obtain the logical diagram of Figure 1.8. A logical variable is in the “on” state when the associated assumption is true. For example, if the assumption that “a vehicle has been detected on the side street” is true, D is “on”.

Figure 1.7. The functioning of the traffic lights model

Figure 1.8. Logical diagram of traffic lights
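The state diagram can be captured directly as a transition function; in the sketch below, the state numbering follows the list above, while the exact condition attached to each arrow is our reading of the figure, not a transcription of it:

```python
# Transition rules: (state, D, ST, LT) -> next state.
# D = vehicle detected, ST = short period elapsed, LT = long period elapsed.
def next_state(state, d, st, lt):
    if state == 1:                    # main green, side red
        return 2 if (d and lt) else 1
    if state == 2:                    # main orange, side red
        return 4 if st else 2         # short period ends the orange phase
    if state == 4:                    # main red, side green
        return 3 if lt else 4
    if state == 3:                    # main red, side orange
        return 1 if st else 3
    raise ValueError(state)

# No vehicle on the side street: the main road stays green.
assert next_state(1, d=0, st=0, lt=1) == 1
# Vehicle detected and long period elapsed: the main road turns orange.
assert next_state(1, d=1, st=0, lt=1) == 2
```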

The way in which the lights move from one state to another when the logical conditions are met remains to be determined. A solution to this problem consists of using SET–RESET-type data memory. This function can easily be carried out using classic logic gates. It comprises two inputs and two outputs. When the “SET” input is in the “on” state, the Q output is set to the “on” state, or stays there if it is already “on”. When the “RESET” input moves to the “on” state, the Q output moves to the “off” state. The complementary output takes the complementary value of Q.
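The SET–RESET behavior just described can be modeled as a small stateful class; a sketch (rejecting the S = R = 1 case is a modeling choice of ours):

```python
class SRLatch:
    """SET-RESET memory: SET drives Q on, RESET drives Q off,
    otherwise Q holds its previous value."""
    def __init__(self):
        self.q = 0

    def step(self, s, r):
        if s and r:
            raise ValueError("S = R = 1 is forbidden for an SR latch")
        if s:
            self.q = 1
        elif r:
            self.q = 0
        return self.q, 1 - self.q   # Q and its complement

latch = SRLatch()
assert latch.step(1, 0) == (1, 0)   # SET
assert latch.step(0, 0) == (1, 0)   # hold
assert latch.step(0, 1) == (0, 1)   # RESET
```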

Figure 1.9. General diagram customized for the traffic lights model

The two SET–RESET circuits allow us to define the four states identified in the traffic lights model. Their outputs are, therefore, the logical outputs of the sequential system. Combinational logic functions drive the “SET” and “RESET” inputs, which in this example correspond to the memory data inputs identified in the general diagram of Figure 1.6. Figure 1.9 shows the instantiation of the diagram of Figure 1.6 for the problem of the traffic lights.

The method for defining the combinational logic necessary for the system consists of filling in a table that indicates, for each transition from one state to another, the logical conditions as well as the SET–RESET values. The X symbol indicates that the value of the logical variable is indifferent (“don’t care”). Table 1.1 expresses the same information as the diagram of Figure 1.8.

Table 1.1. Table of transitions between states

State (Q1 Q2) | Inputs (D ST LT) | Next state (Q1 Q2) | Latch inputs (S1 R1 S2 R2)
0 0 | 0 X 0 | 0 0 | 0 0 0 0
0 0 | 0 X 1 | 0 0 | 0 0 0 0
0 0 | 1 X 0 | 0 0 | 0 0 0 0
0 0 | 1 X 1 | 0 1 | 0 0 1 0
0 1 | X 0 X | 0 1 | 0 0 0 0
0 1 | X 1 X | 1 1 | 1 0 0 0
1 1 | 0 X 0 | 1 0 | 0 0 0 1
1 1 | 0 X 1 | 1 0 | 0 0 0 1
1 1 | 1 X 0 | 1 1 | 0 0 0 0
1 1 | 1 X 1 | 1 0 | 0 0 0 1
1 0 | X 0 X | 1 0 | 0 0 0 0
1 0 | X 1 X | 0 0 | 0 1 0 0

The logical expressions for the functions S1, S2, R1 and R2 are deduced from Table 1.1 as follows:

S1 = Q̄1 · Q2 · ST
R1 = Q1 · Q̄2 · ST
S2 = Q̄1 · Q̄2 · D · LT
R2 = Q1 · Q2 · (D̄ + LT)

These functions can clearly be realized by combining AND and OR gates operating on the available signals.
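To check that such command functions reproduce the intended sequence, the whole sequencer can be simulated in a few lines of Python. The expressions used below are one possible reading of Table 1.1, and the `step` function name is ours:

```python
def step(q1, q2, d, st, lt):
    """One synchronous step of the traffic-light sequencer."""
    # command functions read off from Table 1.1
    s1 = (not q1) and q2 and st
    r1 = q1 and (not q2) and st
    s2 = (not q1) and (not q2) and d and lt
    r2 = q1 and q2 and ((not d) or lt)
    # SET-RESET behavior of the two data memories
    q1 = 1 if s1 else (0 if r1 else q1)
    q2 = 1 if s2 else (0 if r2 else q2)
    return q1, q2

# one full cycle through the four states of the model
state = (0, 0)
state = step(*state, d=1, st=0, lt=1)  # (0, 0) -> (0, 1)
state = step(*state, d=1, st=1, lt=0)  # (0, 1) -> (1, 1)
state = step(*state, d=0, st=0, lt=0)  # (1, 1) -> (1, 0)
state = step(*state, d=0, st=1, lt=0)  # (1, 0) -> (0, 0)
```

Each call corresponds to one row of the transition table, the don't-care inputs being set arbitrarily.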

This system is basically asynchronous, which means that the signals are not synchronized by a clock signal; in many cases, however, a clock signal is necessary. The memory circuits are then synchronized by this signal and the general diagram becomes that shown in Figure 1.10.

Figure 1.10.Sequential synchronous circuit

To explain the synchronous circuits in more detail, it is first necessary to define memory circuits more precisely. A large number of circuits of this type have been created, but we can broadly divide them into two large families: the “latches” and the “flip-flops”.

Figure 1.11.Latches and flip-flops

The time-based diagram in Figure 1.11 illustrates how these two circuits function differently. Each circuit has two inputs (clock and data) and one output. The output of the “latch” circuit is a simple copy of the data while the clock is in the “on” state; it holds its last value when the clock is in the “off” state. The latch is, therefore, sensitive to the clock level and to data transitions. The “flip-flop” circuit is sensitive to the clock transitions (for example, the rising edges), and at each such transition the output becomes a copy of the data. A slight delay is noticeable between the detection of the input signals and the consecutive changes of the output, due to the signal propagation time within the circuits themselves.
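The difference between level sensitivity and edge sensitivity can be sketched in Python; the function names and the single-output simplification are ours:

```python
def latch(clk, d, q):
    """Level-sensitive memory: the output copies D while the clock
    is in the 'on' state, and holds its value otherwise."""
    return d if clk else q

def flip_flop(clk_prev, clk, d, q):
    """Edge-triggered memory: the output samples D only on a rising
    clock edge (clock moving from 'off' to 'on')."""
    return d if (not clk_prev) and clk else q

# D changes while the clock stays high: the latch follows, the flip-flop holds
q_latch, q_ff = 0, 0
q_ff = flip_flop(0, 1, 1, q_ff)   # rising edge: samples D = 1
q_latch = latch(1, 1, q_latch)    # clock high: copies D = 1
q_latch = latch(1, 0, q_latch)    # D drops, clock still high: latch follows to 0
q_ff = flip_flop(1, 1, 0, q_ff)   # no new edge: flip-flop still holds 1
```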

These two circuits are frequently used in logic and constitute the “data path” of digital circuits. Before giving a general description of this type of structure, we recall the guiding principle of the “pipeline”-type structure. Let us take the simple example of computing the function f(a, b) = sin((a + b)²):

Numbers a and b are binary-coded. The two types of computing architecture are shown in Figure 1.12.

Figure 1.12.Pipelined and non-pipelined architecture types

Let us now apply consecutive data pairs, namely (a1, b1), (a2, b2) and (a3, b3), to this circuit, and assume that each operator is capable of completing its computation within one clock period. In order to simplify the problem, let us assume that the delays introduced by the flip-flops are negligible compared to the operators' computational times, which are all assumed to be equal. Table 1.2 indicates when and where the operation results are obtained.

Table 1.2.Pipeline functioning

Clock signal period | Adder (ADD) | Square (SQ) | Sine (SIN)
1                   | a1 + b1     |             |
2                   | a2 + b2     | (a1 + b1)²  |
3                   | a3 + b3     | (a2 + b2)²  | sin((a1 + b1)²)
4                   | a4 + b4     | (a3 + b3)²  | sin((a2 + b2)²)
5                   | a5 + b5     | (a4 + b4)²  | sin((a3 + b3)²)

The values are input at the clock rhythm. After five clock periods, the outputs corresponding to three consecutive data pairs have been provided. In the non-pipelined version, no new data can be input until the operators have completed the whole calculation, that is to say, for three clock periods. The pipeline gain is, therefore, three.
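The clock-by-clock behavior of Table 1.2 can be reproduced with a short Python simulation; `run_pipeline` is an illustrative name, and the function computed is the f(a, b) = sin((a + b)²) of this example:

```python
import math

def run_pipeline(pairs):
    """Clock-by-clock simulation of the three-stage pipeline
    ADD -> SQUARE -> SIN of Figure 1.12 and Table 1.2."""
    add_reg = None  # flip-flop between the adder and the squarer
    sq_reg = None   # flip-flop between the squarer and the sine operator
    results = []
    # feed the data pairs, then two empty periods to flush the pipeline
    for item in list(pairs) + [None, None]:
        # evaluate the stages back to front, so that each stage uses
        # the value latched by the previous stage on the previous clock
        if sq_reg is not None:
            results.append(math.sin(sq_reg))                       # third stage
        sq_reg = add_reg ** 2 if add_reg is not None else None     # second stage
        add_reg = item[0] + item[1] if item is not None else None  # first stage
    return results

print(run_pipeline([(1, 2), (3, 4), (5, 6)]))
```

Feeding three pairs yields three results after five simulated clock periods, as in the table.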

Sequential pipelined systems are, therefore, used very frequently in electronics. Figure 1.13 shows an example of their general architecture. This will often be used as a structural reference throughout the rest of this work.

Figure 1.13.Sequential pipelined system

We note that the combinational logic outputs are not synchronized: the delays between the inputs and the outputs depend on how many logic gates each signal has gone through. The diagram shows that certain logical output signals, once stored, can be reused by a combinational block. Other logical signals can be input externally; the traffic lights model allows us to understand the origins of these signals. The main function of the memory elements is to resynchronize the signals provided by the combinational blocks.

1.1.3.NMOS and PMOS transistors

The aim of this section is to give a very simple electrical description of the components that are used today in digital circuits. These components are miniature field-effect transistors whose dimensions are smaller than a micron. There are two types (NMOS and PMOS), which work together in a complementary fashion.

As using an electrical voltage to code a logical state is the most natural way to proceed, asserting a function comes down to establishing an electrical connection between the logic gate's output and a voltage source. This voltage source will be set to VDD for the “on” state and to zero for the “off” state. Other values could be chosen, positive and negative for example; however, for the sake of simplicity, we will use the two values VDD and zero in the majority of cases. The diagram of Figure 1.14 illustrates how to establish such an electrical connection using voltage-controlled switches.

Figure 1.14.Using switches to perform AND and OR functions

Let us assume that a positive voltage, when applied to a control electrode, turns the switch on and establishes the connection, while a voltage of zero holds the switch open. Historically, the first logic gates were implemented using electromechanical switches in the 1930s. The need for miniaturization led to replacing this technology with vacuum tubes, which were in turn replaced by semiconductor-based technology from the 1950s onwards. It is only very recently that researchers have again begun to experiment with technology based on electromechanical relays, but this time in a miniature version. This option will be studied in more detail in Chapter 8.

The diagram of Figure 1.14 shows how the switch network provides a “1” state as an output and works perfectly, but it cannot impose a “0” state. To see this, let us start from a configuration in which the output is at the voltage VDD. When the input voltages change so that the function is no longer asserted, the output is left floating: it either remains in the “on” state or evolves unpredictably, depending on the electric charge stored on the output node. It is, therefore, necessary to provide a means of electrically resetting the output to zero. This leads to the diagram shown in Figure 1.15: a second switch network connects the output to zero potential when the function is not asserted, which is to say when the complementary function is.

Figure 1.15.The complete AND function

Note that it is necessary to have the logical complements of the inputs at the ready, in order to realize the complement of the function; for A · B, De Morgan's law gives this complement as Ā + B̄. This logic is called complementary logic and the general diagram explaining it is shown in Figure 1.16.

Figure 1.16.Complementary logic

When using complementary metal oxide semiconductor (CMOS) technology, performing logic gates in complementary logic becomes much simpler. This is because the complementary function connecting the output to the zero potential, generally the circuit ground, can be obtained directly from the input signals rather than from their complements. Moreover, this logic excludes any permanent conducting path between the supply voltage and the ground, which, in principle, should reduce static electrical consumption to zero. The remainder of this book will show that, due to dynamic consumption and leakage currents, this is not quite the case.
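As a sketch of this principle, here is a switch-level Python model of a two-input CMOS NAND gate (the gate choice and all names are ours): the pull-up and pull-down networks are driven by the same input signals, and exactly one of them conducts at any time.

```python
def cmos_nand(a, b):
    """Switch-level model of a CMOS NAND gate. Pull-up network: two
    PMOS in parallel, each conducting when its input is 0. Pull-down
    network: two NMOS in series, conducting when both inputs are 1."""
    pull_up = (a == 0) or (b == 0)     # connects the output to VDD
    pull_down = (a == 1) and (b == 1)  # connects the output to ground
    # complementary logic: never both networks, never neither
    assert pull_up != pull_down
    return 1 if pull_up else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, cmos_nand(a, b))
```

An AND gate is then obtained by following the NAND with an inverter, which is why complementary logic naturally produces inverting gates.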

Thanks to some very simple elements, we can easily describe CMOS technology based on two miniature switches: the NMOS transistor and the PMOS transistor. This will be described in more detail in Chapter 3.

Figure 1.17 describes very simply how NMOS and PMOS transistors work. These silicon-based devices allow a current to circulate between two electrodes, called the source and the drain, depending on the voltage applied to a third, isolated electrode called the gate. The gate is separated from the conductive silicon area by a very thin (approximately 1 nm) oxide layer, which enables a powerful electrostatic effect. The diagram describes the two components by indicating the direction of the conventional conduction current, that is, the direction in which positive charges go from the highest potential to the lowest potential. Note that the definitions of source and drain are interchangeable, as the device is symmetrical: the same physical electrode can act as the source or the drain depending on the direction in which the current passes through the transistor.

Figure 1.17.NMOS and PMOS transistors

In the case of PMOS transistors, where the holes ensure the conduction, the conventional current has the same sign as the physical current. Moreover, as would be expected from the definition of the terms drain and source, the current circulates from the source to the drain. The condition to be fulfilled is that the voltage difference between the source and the gate must be greater than a positive value called the threshold voltage:

VS − VG > |VTP|

In fact, as will be discussed in Chapter 3, the threshold voltage of PMOS transistors is negative, but it is simpler to reason with positive values for voltages and currents and then to take the absolute value of the threshold voltage.

In the case of NMOS transistors, the conventional current circulates from the most positive voltage to the least positive voltage, but as the conduction is ensured by the electrons, the physical current flows in the opposite direction. This explains why the drain and source labels are permuted, as the physical current always circulates from the source to the drain. The condition to be fulfilled is that the voltage between the gate and the source must be greater than a positive voltage called the threshold voltage:

VG − VS > VTN

Note that the condition is based on the difference between the gate voltage and the source voltage, rather than between the source voltage and the gate voltage, as was the case for PMOS transistors. Those readers who are not very familiar with CMOS technology will no doubt need to spend some time mastering how to check the signs and how to localize the source and drain electrodes, using the diagram indicated in Figure 1.17, if need be.
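The two conduction conditions can be summarized in a few lines of Python; the threshold values, supply voltage and function names are illustrative assumptions:

```python
VT_N = 0.4  # assumed NMOS threshold voltage, in volts
VT_P = 0.4  # assumed absolute value of the PMOS threshold voltage, in volts

def nmos_conducts(v_gate, v_source):
    # NMOS: conducts when the gate-to-source voltage exceeds the threshold
    return (v_gate - v_source) > VT_N

def pmos_conducts(v_gate, v_source):
    # PMOS: conducts when the source-to-gate voltage exceeds |VT_P|
    return (v_source - v_gate) > VT_P

VDD = 1.0
# inverter with its input at 0 V: the PMOS (source at VDD) conducts,
# while the NMOS (source at ground) is blocked
print(pmos_conducts(0.0, VDD), nmos_conducts(0.0, 0.0))
```

This is exactly the complementary behavior exploited by CMOS logic: with the input at VDD the roles are reversed, so one and only one transistor of the pair conducts.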