Artificial Intelligence, Cybersecurity and Cyber Defence

Daniel Ventre
Description

The aim of this book is to analyze and understand the impacts of artificial intelligence in the fields of national security and defense; to identify the political, geopolitical and strategic issues of AI; to analyze its place in conflicts and cyberconflicts, and more generally in the various forms of violence; to explain the appropriation of artificial intelligence by military organizations, but also by law enforcement agencies and the police; and to discuss the questions that the development and use of artificial intelligence raise in armies, police forces and intelligence agencies, at the tactical, operational and strategic levels.

Number of pages: 368

Year of publication: 2020



Table of Contents

Cover

Title page

Copyright

Introduction

1 On the Origins of Artificial Intelligence

1.1. The birth of artificial intelligence (AI)

1.2. Characteristics of AI research

1.3. The sequences of AI history

1.4. The robot and robotics

1.5. Example of AI integration: the case of the CIA in the 1980s

2 Concepts and Discourses

2.1. Defining AI

2.2. Types of AI

2.3. Evolution of the themes over time

2.4. The stories generated by artificial intelligence

2.5. Political considerations

3 Artificial Intelligence and Defense Issues

3.1. Military policies and doctrines for AI: the American approach

3.2. Military AI in Russia

3.3. AI and the art of warfare

3.4. AI and cyber conflict

Conclusion

Appendices

Appendix 1: A Chronology of AI

Appendix 2: AI in Joint Publications (Department of Defense, United States)

Appendix 3: AI in the Guidelines and Instructions of the Department of Defense (United States)

Appendix 4: AI in U.S. Navy Instructions

Appendix 5: AI in U.S. Marine Corps Documents

Appendix 6: AI in U.S. Air Force Documents

References

Index

End User License Agreement

List of Illustrations

Chapter 1

Figure 1.1. The first artificial intelligence computer program “Logic Theorist”,...

Figure 1.2. The organizers of a “conference” (two-month program) at Dartmouth Co...

Figure 1.3. Geographic distribution of Chinese universities investing in AI betw...

Figure 1.4. Location of Akademgorodok (district of the city of Novosibirsk)

Figure 1.5. Manchester University Robot. For a color version of this figure, s...

Chapter 2

Figure 2.1. Cloud of terms built up from the set of definitions in Table 2.1. ...

Figure 2.2. Cloud of terms built from the expert system definitions in Table 2.2...

Figure 2.3. Google Trends. Evolution of queries in the world related to “artific...

Figure 2.4. Evolution of the presence of the concept of “expert systems” in AAAI...

Figure 2.5. Evolution of the presence of the “Machine Learning” concept in AAAI ...

Figure 2.6. Evolution of the presence of the “robot” concept in AAAI publication...

Figure 2.7. Evolution of the presence of the “autonomous/autonomy” concept in AA...

Figure 2.8. Evolution of the presence of the “military” concept in AAAI publicat...

Figure 2.9. Evolution of the presence of the concept of “security” in AAAI publi...

Figure 2.10. Evolution of the presence of the concept of “combat” in AAAI public...

Figure 2.11. Evolution of the presence of the concepts “law” and “ethics” in AAA...

Figure 2.12. Cloud of the 65 most commonly used terms in the OSTP Request for In...

Figure 2.13. Cloud of the 65 most commonly used terms in the report “Preparing f...

Figure 2.14. Cloud of the 65 most commonly used terms in “The national artificia...

Chapter 3

Figure 3.1. QDR (1997, 2001, 2006, 2010, 2014) and NDS 2018. For a color version...

Figure 3.2. QDRs from 1997 to 2018. For a color version of this figure, see www....

Figure 3.3. The theme “robot” (robot, robotics) in NCARAI publications. Period 1...

Figure 3.4. The theme “autonomy” (autonomy, autonomous) in NCARAI publications. ...

Figure 3.5. The “robot” (robotics), “artificial intelligence” and “cyber” themes...

Figure 3.6. The “artificial intelligence” and “cyber” themes in NCARAI publicati...

Figure 3.7. AI is a component of cyberspace, which is itself a subset of informa...

Figure 3.8. A simple modeling of cyberspace in three layers

Figure 3.9. AI is present in each of the three layers of cyberspace. It extends ...

Figure 3.10. Within each of the three layers of cyberspace, there are actors of ...

Figure 3.11. In cyberspace, each of the layers has its own methods of hacking, a...

Figure 3.12. If V represents cyberspace and R represents the non-cybernetic worl...

Figure 3.13. An attack can exploit one or more of the layers of cyberspace to pr...

Figure 3.14. Positioning AI in a cyberattack. AI malware will sometimes be oppos...

Figure 3.15. Can an AI malware have a true panoptic or global view of its enviro...

Figure 3.16. Attack and secure/defend, with or without AI

Figure 3.17. Screenshot of the hkmap.live application (October 11, 2019). For a ...



Cybersecurity Set

coordinated by

Daniel Ventre

Artificial Intelligence, Cybersecurity and Cyber Defence

Daniel Ventre

First published 2020 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd, 27-37 St George’s Road, London SW19 4EU, UK

www.iste.co.uk

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

www.wiley.com

© ISTE Ltd 2020

The rights of Daniel Ventre to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2020940262

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-78630-467-4

1 On the Origins of Artificial Intelligence

1.1. The birth of artificial intelligence (AI)

1.1.1. The 1950s–1970s in the United States

Alan Turing’s article, published in 1950 [TUR 50], which is one of the founding works in the field of AI, begins with these words: “I propose to consider the question, ‘Can machines think?’”

In 1955, “Logic Theorist”, considered to be the first AI program, was developed. This work was the result of cooperation between three researchers: a computer scientist (John Clifford “Cliff” Shaw) and two researchers from the humanities and social sciences (Herbert Simon and Allen Newell) [SIM 76]. The application was programmed in the IPL language [STE 63], created within RAND and the Carnegie Institute of Technology (a project that received funding from the US Air Force). Here we have the essential elements of AI research: a multidisciplinary approach bringing together the humanities and technology, university investment and the presence of the military. It is important to note that although the program is described today as the first AI code, these three researchers never used the expression “artificial intelligence” or presented their software as falling into this category. The expression “artificial intelligence” appeared in 1956, during a series of seminars organized at Dartmouth College by John McCarthy (Dartmouth College), Claude Shannon (Bell Telephone Laboratories), Marvin Minsky (Harvard University) and Nathaniel Rochester (IBM Corporation). The aim of this scientific event was to bring together a dozen or so researchers with the ambition of giving machines the ability to perform intelligent tasks and to program them to imitate human thought.

Figure 1.1. The first artificial intelligence computer program “Logic Theorist”, its designers, its results

Figure 1.2. The organizers of a “conference” (two-month program) at Dartmouth College on artificial intelligence in 1956

While the 1956 conference was a key moment in AI history, it was itself the result of earlier reflections by key players. McCarthy had attended the 1948 Symposium on Cerebral Mechanisms in Behavior, attended by Claude Shannon, Alan Turing and Karl Lashley, among others. This multidisciplinary symposium (mathematicians, psychologists, etc.) introduced discussions on the comparison between the brain and the computer. The introduction of the term “artificial intelligence” in 1956 was therefore the result of reflections that had matured over several years.

The text of the proposal for the “conference” of 1956, dated August 31, 1955, submitted for financial support from the Rockefeller Foundation for organizing the event, defines the content of the project and the very concept of artificial intelligence:

“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

The project was more successful than expected: not 10 people took part, but 43 (not including the four organizers), including Herbert Simon and John Nash. The audience was composed almost entirely of North Americans (from the United States and Canada), along with two Britons. In any case, it was entirely Anglophone.

In an article titled “Steps toward artificial intelligence” [MIN 61], Marvin Minsky described, in 1961, these early days of AI research and its main objectives:

“Our visitor might remain puzzled if he set out to find, and judge these monsters for himself. For he would find only a few machines (mostly ‘general-purpose’ computers, programmed for the moment to behave according to certain specifications) doing things that might claim any real intellectual status. Some would be proving mathematical theorems of rather undistinguished character. A few machines might be playing certain games, occasionally defeating their designers. Some might be distinguishing between hand-printed letters. Is this enough to justify so much interest, let alone deep concern? I believe that it is; that we are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines. But our purpose is not to guess about what the future may bring; it is only to try to describe and explain what seem now to be our first steps toward the construction of ‘artificial intelligence.’”

AI research was structured around new laboratories created in major universities. Stanford University created its AI laboratory in 1963. At MIT, AI was handled within the MAC project (Project on Mathematics and Computation), also created in 1963 with significant funding from ARPA.

From the very first years of its existence, the Stanford AI lab had a defense perspective in its research. ARPA, an agency of the US Department of Defense (DoD), subsidized its work through numerous programs. Research topics were therefore influenced by military needs, as in the case of Monte D. Callero’s thesis on “An adaptive command and control system utilizing heuristic learning processes” (1967), which aimed to develop an automated decision tool for the real-time allocation of defense missiles during armed conflicts. The researcher had to model a missile defense environment and build a decision system that improved its performance with experience [EAR 73]. The influence of the defense agency grew over the years: by June 1973, the AI laboratory had 128 staff, two-thirds of whom were supported by ARPA [EAR 73].
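Callero’s actual 1967 design is not reproduced here, but the general idea of “heuristic learning” applied to real-time allocation can be caricatured in a few lines. The sketch below is purely illustrative and makes no claim about the thesis itself: all function names, features and numbers are hypothetical. An allocator greedily pairs each interceptor with the highest-scoring threat, where the score is a weighted sum of threat features, and the weights are nudged after each engagement according to its outcome.

```python
# Illustrative sketch only: a toy heuristic-learning allocator.
# Feature vectors, weights and the update rule are all hypothetical,
# not Callero's actual system.

def allocate(threats, interceptors, weights):
    """Greedily pair each interceptor with the highest-scoring threat.

    threats: dict mapping threat name -> feature vector (e.g. speed, proximity)
    weights: learned importance of each feature
    """
    assignments = {}
    remaining = set(threats)
    for interceptor in interceptors:
        if not remaining:
            break
        # Score each remaining threat by a weighted sum of its features.
        best = max(remaining,
                   key=lambda t: sum(w * f for w, f in zip(weights, threats[t])))
        assignments[interceptor] = best
        remaining.remove(best)
    return assignments

def update_weights(weights, features, success, lr=0.1):
    """Nudge feature weights up after a successful engagement, down otherwise."""
    sign = 1 if success else -1
    return [w + sign * lr * f for w, f in zip(weights, features)]
```

The point of the caricature is only that "learning" here means adjusting scoring heuristics from experience, not symbolic reasoning; the actual thesis modeled a full missile defense environment around such a decision loop.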

This proximity to the defense department did not, however, condition all its work. In the 1971 semi-annual report [RAP 71] on AI research and applications, Stanford University described its prospects as follows:

“This field deals with the development of automatic systems, usually including general-purpose digital computers, that are able to carry out tasks normally considered to require human intelligence. Such systems would be capable of sensing the physical environment, solving problems, conceiving and executing plans, and improving their behavior with experience. Success in this research will lead to machines that could replace men in a variety of dangerous jobs or hostile environments, and therefore would have wide applicability for Government and industrial use.”

Research at MIT in the 1970s, although funded by the military, also remained broad in its scope. Presenting their work to ARPA in 1973, the researchers felt that they had reached a milestone that allowed them to envisage real applications of the theoretical work carried out until then. But these applications could not be reduced to the field of defense alone:

“The results of a decade of work on Artificial Intelligence have brought us to the threshold of a new phase of knowledge-based programming – in which we can design computer systems that (1) react reasonably to significantly complicated situations and (2) perhaps more important for the future – interact intelligently with their operators when they encounter limitations, bugs, or insufficient information.”

A few leads for new lines of research were then set out:

“We believe that AI research can show the way to computer-based information systems far more capable than have ever been available. We plan to attack the area of Information Retrieval, both for traditional data-base and library problems and for more personal management information systems problems. The new Information Systems should help to increase the effectiveness of individuals who are responsible for complicated administrative structures, as well as for complex information problems of technical kinds. In particular, the services will be available and designed to be useable over the ARPANET, and will be designed to interact with the personal systems of other individuals to recognize conflicts, and arrange communication about common concerns.”

Along with university teams, the RAND Corporation also played a central role in the emergence of AI in the United States. Willis H. Ware’s book, RAND and the Information Evolution [WAR 08], is an invaluable resource for understanding the role the organization played in the development of AI in the United States as early as the 1960s. The book covers the period 1946–2008, divided into two phases: from 1946 to 1983, research within the organization was organized into departments; thereafter, RAND reorganized itself around programmatic actions.

Ware recalls that, of the 20 contributions comprising the first book on AI, published by Feigenbaum and Feldman in 1963, six were written by RAND researchers. The initial work of Allen Newell and Cliff Shaw, both RAND scholars, in collaboration with Herbert Simon (Carnegie Institute of Technology), laid several foundations for AI research on learning, theorem proving and knowledge representation, among other ideas. Ware also reminds us that AI work did not develop in isolation: RAND researchers built on advances in computer science, including intensive use of new computers such as JOHNNIAC, until the mid-1960s. AI research draws on advances in computer science and many other disciplines.

The history of AI may seem America-centric. But we cannot forget that work in robotics and computer science, and attempts to understand how the brain works, all converging on AI, mobilized researchers well beyond the North American sphere at the same time.

In the United Kingdom, Grey Walter’s research, as early as the 1940s, became part of this international academic movement which was interested in the modeling of brain processes and the definition of intelligence. Grey Walter designed the “turtles” in Bristol in 1947, considered to be the first autonomous electronic robots (Luce Langevin describes the 1950s as a period when an “electronic animal menagerie” was built) [LAN 69]. These approaches were underpinned by the belief in a strong resemblance between the brain and the machine: “Physiologists admit as a working hypothesis that the functioning of the brain is that of a machine” [ASH 51].

Today’s international competition between major powers is not unrelated to the history of research and early development from the 1950s to the 1960s. China and Russia, in particular, did not wait until the 2000s or 2010s to invest in the field of AI research. Their present activity in this area is based on a model which, as in the case of the United States, is several decades old.

1.1.2. AI research in China

Artificial intelligence research in China began in the late 1960s [XIN 83].

According to Wang Jieshu’s analysis [JIE 18], the history of Chinese AI is closely linked to the history of the Soviet Union and the close relationship between the two countries. In the period 1970–1983, the main areas of research covered a broad spectrum of issues, such as:

– machine translation, a field which, by 1982, included achievements such as:

- development of ECTA (English–Chinese Automatic Translation System) software,

- development of the English–Chinese Title Translation System,

- JF-111, A Universal Machine Translation System;

– natural language understanding;

– theorem proving;

– expert systems;

– robotics.

Nothing in this enumeration really distinguishes the orientation of Chinese research from its Western counterparts. Again, the approach here is multidisciplinary (mathematics, computer science, linguistics, medicine, automation, robotics, aeronautics, etc.).

In the 1980s, China already had a large number of publications, achievements, researchers and universities involved in AI research. Jieshu’s article only mentions civil applications; nothing is said about the military’s position on this research topic or its investment in universities.

On the basis of this article, we identify a set of universities involved (Table 1.1).

Table 1.1. Universities involved in AI between 1970 and 1983 in China (classified by city). Reconstructed from [XIN 83]

Name of University | City (Province)
Zhongshan University | Guangzhou (Guangdong)
Jilin University | Changchun (Jilin)
Zhejiang University | Hangzhou (Zhejiang)
Nanjing Technology College | Nanjing (Jiangsu)
Beijing Aeronautical Engineering Institute | Beijing
Beijing Academy of Traditional Chinese Medicine | Beijing
Institute of Automation, Academia Sinica | Beijing
Institute of Linguistics, Chinese Academy of Social Sciences | Beijing
Institute of System Science, Academia Sinica | Beijing
Mathematics Institute, Academia Sinica | Beijing
Qinghua University | Beijing
Science-Technology Information Institute and Computer Technology Institute, Academia Sinica | Beijing
Shanghai Institute of Computing Technology | Shanghai
Shenyang Institute of Automation, Academia Sinica | Shenyang (Liaoning)
Wuhan University | Wuhan (Hubei)

Figure 1.3. Geographic distribution of Chinese universities investing in AI between 1970 and 1983. Reconstructed from [XIN 83]

AI research in China thus took shape at the same time as it did in the West and has been structured around universities of excellence. This history serves as a basis for China’s current ambitions, which are still expressed in research programs as well as in economic, industrial, societal and political projects.

1.1.3. AI research in Russia

The history of AI in Russia follows roughly the same chronology as that of the United States and the West. From the early 1960s, Western delegations visiting Russia noticed the presence of research teams working on AI themes.

The report by E.A. Feigenbaum, who visited the USSR in June and July 1960 as a member of the American delegation to the First Congress of the International Federation of Automatic Control (IFAC) [FEI 60], said:

“The program consisted of a number of welcoming speeches, and an address by the well-known scientist and Chairman of the USSR National Committee for Automatic Control, V.A. Trapeznikov.”

“The Soviet Deputy Premier talked on the problems which automation would bring to ‘certain societies’ which were not well equipped to handle this kind of technological change – change which would bring unemployment, re-education problems […]”

“In general, Soviet papers could be characterized as oriented toward theory, while papers of Western delegates mixed theory and application.”

“In conjunction with the conference, various research institutes, educational institutions, and plants were officially opened for technical excursions by the delegates […] By far, the most popular tour was one to the Institute of Automation and Telemechanics in Moscow.”

In the Soviet Union, AI was one of the components of cybernetics, alongside information theory, computer science and the study of military C2 [LEV 64]. Cybernetics, which appeared in the USSR in 1953, was a new and broad field, organized around various research communities that came together, in particular, at conferences dedicated to cybernetics and in numerous academic publications from the early 1960s. Military cybernetics became a sub-domain of cybernetics.

An article published in the journal Science on August 27, 1965 [KOP 65] introduced a new city of science that had just been built in Siberia: Akademgorodok, located on the outskirts of Novosibirsk.

The work of Paul R. Josephson [JOS 97] devotes a whole chapter to the history of the birth of AI in the city of Akademgorodok in the middle of the Soviet period (1960–1970), for it was there that AI in Russia was born. A Russian research community centered on AI was created there in the 1970s, with a university research center, “clubs” (the “Akademia” club on artificial intelligence), a Council for AI (the Artificial Intelligence Council), etc.

The city, now considered one of Russia’s Silicon Valleys (along with Moscow and St. Petersburg), is said to be home to Russia’s “cyberwar soldiers” [CLA 17]. Akademgorodok hosts a technopark concentrating 24% of the revenue of all Russian technoparks and 22% of the companies hosted in technoparks in Russia [LOG 16]. The Akademgorodok technopark is currently reported to host 340 companies, 115 start-ups and nearly 10,000 employees. This ecosystem is complemented by the many university research centers that, by their high concentration, have made the city famous and unique.

In the mid-1970s, the Soviet Union envisaged the use of networked information technology as a tool for controlling, managing and planning the Soviet economy. The project envisaged at the time was to link the major production and political centers via a vast computer network: Moscow would be the hub, but the network would also pass through Leningrad (as St. Petersburg was then called) and Kiev. Implementation was to start around 1975 and be fully operational by 1990. Western companies, American (Control Data Corporation) and British (International Computers, Ltd.), were even involved in this project [LIE 73]. The computer, and computer science in the broad sense, were tools of the Soviet political project, and also posed a challenge to America, which at that point faced difficulties in implementing such networks on a large scale. Soviet technological development was based on a policy of transfer from the United States to the USSR from the end of the 1950s, which accelerated from 1972 onwards under the Nixon administration [ROD 81]. The USSR acquired foreign technology, especially information technology, through both legal channels (sales authorized by the US government) and illegal ones (the black market and copying). AI was part of this Soviet “cybernetics” project. However, a 1990 report by the American Department of Defense estimated that the Soviet Union had lower capabilities than America, despite its research efforts and special attention to AI applications in the civil and military fields [DOD 90]:

“The Soviet Union lags behind the United States significantly in machine intelligence and robotics. They do have a good theoretical understanding of the area and can show creativity in applying the technology to selected space and military applications. Soviet R&D on artificial intelligence (AI), under the auspices of the Academy of Sciences of the USSR, includes work on machine vision and machine learning. The value of machine intelligence to battlefield operations as well as to the domestic economy has been recognized by the Soviet government.”

So, while the Soviet Union does not appear to have been truly competing with US capabilities at the time, it nonetheless added to the competitive landscape facing the United States. AI and robotics, in their various dimensions (research, development, industrialization), were emerging dynamically in several countries and regions of the world: the report cites France, Europe and Japan, among others.

Research in the USSR was not isolated from the rest of the world. The USSR organized international AI conferences, for example in Leningrad in April 1977 and in October 1983. Its research projects and achievements were in fields relatively similar to those of the rest of the world: applications for automatic translation and natural language understanding (in 1982, the “Etap-1” project, an experimental system for automated translation from French into Russian, was created).

Figure 1.4. Location of Akademgorodok (district of the city of Novosibirsk)

1.1.4. AI research in Japan

The history of AI research in Japan began in the 1960s at Kyoto University, with a team formed around Professor Toshiyuki Sakai that worked in three areas: computer vision, speech processing and natural language. In 1970, the team presented the first facial recognition system at the Osaka World Exposition. During the decade, several teams took shape: at Kyushu University (around Professor Toshihiko Kurihara, with work on kanji–kana conversion systems), at Osaka University (with Professor Kokichi Tanaka on knowledge processing issues), at the University of Tokyo, and in corporate laboratories such as NTT. However, the emergence of these various teams did not yet constitute a true Japanese AI research community. That community took shape around a group of students at the University of Tokyo, the IAUEO, from which pioneers of Japanese AI such as Hideyuki Nakashima, Koichi Hori and Hitoshi Matsubara emerged.

The government provided several hundred million dollars in funding for long-term university research programs [DON 77]. In 1971, the government launched the PIPS (Pattern Information Processing System) project, which was funded at $100 million over eight years. The computational requirements necessary to achieve the objectives of PIPS necessitated the development of new electronic chips. This phase was financed by the Japanese government as early as 1973, through a new project.

Other substantial funding was mobilized to support research in image processing, speech and natural language processing.

AI really seemed to take off in the mid-1980s [MIZ 04, NIS 12], after Japan launched its fifth-generation computer program in 1982. The Institute for New Generation Computer Technology (ICOT) was the R&D arm of this national fifth-generation computing program. Among other things, ICOT produced the programming language KLIC and the legal reasoning system HELIC-II. In 1983, ICOT had fewer than 50 researchers, and its research themes, focused around the central project of designing the world’s largest computer by 1990, directly concerned AI (automatic translation systems, automatic response systems, speech understanding, image understanding, problem-solving systems, logic programming machines, etc.).

In 1986, the Japanese Society for Artificial Intelligence (JSAI) was founded. In 1990, the association created the Pacific Rim International Conference on Artificial Intelligence (PRICAI), which aims to structure AI research in this part of the world, thus complementing or counterbalancing the Western initiatives of the IJCAI and the European Conference on Artificial Intelligence.

Alongside university research, industrial R&D activities have made a major contribution to the development of Japanese artificial intelligence. Major industrial groups set up teams dedicated to AI (NTT, Hitachi, Fujitsu, Sony, Honda, etc., are big names in the industry that invested in this field). Some of their developments received a lot of media coverage, such as AIBO (Sony), ASIMO (Honda), TAKMI – Analysis and Knowledge Mining (IBM Research Tokyo), facial recognition tools and oral translation applications for mobiles (NEC) and the humanoid robot HRP-4C (capable of singing and dancing).

Since the 1980s, the Japanese government has maintained its investment in AI, but international competition is now fierce and several major powers outperform Japan in terms of numbers of scientific publications: China published 41,000 articles in the field between 2011 and 2015, against 25,500 for the United States and 11,700 for Japan. The same pattern appears in industry: the United States is said to have a thousand companies in the field, while Japan has 200–300.

Japan has been, and remains, one of the leaders in robotics and in AI applied to robotics, not only because it shows qualities as an integrator of the multiple disciplines required (electronics, electricity, computing, automation, etc.), but also because it is itself a leader in many of these fields, particularly electronics. Robotics in the 1980s required more skills in electronics, microelectronics and mechanics than in computer science.

1.1.5. AI research in France

In the 1950s and 1960s, interest in intelligent machines spread to many countries around the world. France was one of the players in this internationalization of research. For example, we can cite the following:

– Pierre de Latil’s reflections on artificial thinking [DEL 53];

– the work of Albert Ducrocq (the son of a soldier, he studied political science and electronics, was later a journalist and essayist, and qualified as a cybernetician), who invented the “electronic fox”, an autonomous robot on wheels, and who inspired the achievements in the 1970s of Bruno Lussato, inventor of the zebulons, autonomous computerized handling robots. Albert Ducrocq published several works dealing with robots, weapons and AI, such as Les armes de demain (the weapons of tomorrow) (Berger-Levrault, 1949), Appareils et cerveaux électroniques (electronic devices and brains) (Hachette, 1952), L’ère des robots (the age of robots) (Julliard, 1953) and Découverte de la cybernétique (discovery of cybernetics) (Julliard, 1955). He published many other works before his death in 2001;

– writings on thinking machines by Paul Chauchard [CHA 50], Paul Cossa [COS 54], Louis Couffignal [COU 52] and Dominique Dubarle [DUB 48], or on the robot, with the writings of Albert Béguin [BÉG 50].

Questions are tackled from a variety of viewpoints: mathematicians, cyberneticians, philosophers, electronics engineers, etc. Are humans machines or robots? Can the brain be reproduced in a machine? Can the machine think, does it have a soul? What is a machine? Can we reduce the mechanism of thought or the functioning of the brain to algorithms? Are the brain and the body simple mechanics?

Louis Couffignal [COU 52] defined the machine as “an entire set of inanimate, or even, exceptionally, animate beings capable of replacing man in the execution of a set of operations proposed by man”.

He listed the categories of machines: machines that can add and write, machines that can read and choose, calculating machines and thinking machines.