Virtual Reality and Augmented Reality
Description

Virtual and augmented reality have existed for a long time but were confined to the research world or to a few large manufacturing companies. With the arrival of low-cost devices, a wave of new applications is expected, including applications for the general public. This book aims to take stock of these novelties, to distinguish them from the complex challenges they raise by presenting real use cases, to place these recent developments within the broader VR/AR dynamic, and to offer some perspective for the years to come.


Page count: 596

Publication year: 2018




Table of Contents

Cover

Title

Copyright

Preface

Introduction

I.1. The origins of virtual reality

I.2. Introduction to the basic concepts

I.3. The emergence of virtual reality

I.4. The contents of this book

I.5. Bibliography

1 New Applications

1.1. New industrial applications

1.2. Computer-assisted surgery

1.3. Sustainable cities

1.4. Innovative, integrative and adaptive societies

1.5. Bibliography

2 The Democratization of VR-AR

2.1. New equipment

2.2. New software

2.3. Bibliography

3 Complexity and Scientific Challenges

3.1. Introduction: complexity

3.2. The real–virtual relationship in augmented reality

3.3. Complexity and scientific challenges of 3D interaction

3.4. Visual perception

3.5. Evaluation

3.6. Bibliography

4 Towards VE that are More Closely Related to the Real World

4.1. “Tough” scientific challenges for AR

4.2. Topics in AR that are rarely or never approached

4.3. Spatial augmented reality

4.4. Presence in augmented reality

4.5. 3D interaction on tactile surfaces

4.6. Bibliography

5 Scientific and Technical Prospects

5.1. The promised revolution in the field of entertainment

5.2. Brain-computer interfaces

5.3. Alternative perceptions in virtual reality

5.4. Bibliography

6 The Challenges and Risks of Democratization of VR-AR

6.1. Introduction

6.2. Health and comfort problems

6.3. Solutions to avoid discomfort and unease

6.4. Conclusion

6.5. Bibliography

Conclusion: Where Will VR-AR be in 10 Years?

Postface

Glossary

List of Authors

Index

End User License Agreement

List of Tables

2 The Democratization of VR-AR

Table 2.1. Description of optical see-through AR systems

3 Complexity and Scientific Challenges

Table 3.1. Overview of an existing glossary of “simulator sickness”

Table 3.2. Articulation of objectives and approaches depending on the level of maturity and nature of the system being evaluated

Table 3.3. Criteria and questions related to the evaluation based on the purpose of VE/AR being studied

Table 3.4. Benchmark values proposed by Rouanet and Corroyer [COR 94] for different situations of data-analysis. The values between parentheses are the values initially proposed by [COH 77]

Table 3.5. Examples for measurements used in evaluations that mobilize mixed-reality systems

Table 3.6. Interpretation of the SUS score in terms of usability and acceptability (adapted from Bangor et al. (2009; 2008))

List of Illustrations

Introduction

Figure I.1. a) Diagram of Giovanni Fontana’s magic lantern, b) using the magic lantern. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure I.2. A still from the movie The Lawnmower Man

Figure I.3. A still from the movie Minority Report

Figure I.4. Projet S.E.N.S

Figure I.5. Evolution of the field of virtual reality. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

1 New Applications

Figure 1.1. The 2017 Hype Curve. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 1.2. The layout for the command post of a ship at DCNS (© CLARTE – NAVAL Group (ex DCNS))

Figure 1.3. Project review at NEXTER (© Nexter)

Figure 1.4. Ergonomic study on a Lactalis production post (© CLARTE). For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 1.5. Ergonomic study of an Inergy production post (© AFERGO)

Figure 1.6. Training to land and take-off on a helicopter carrier (in choppy conditions) (© CLARTE - NAVAL Group (ex DCNS))

Figure 1.7. The MIRA application from the Airbus Innovation Group (© Airbus Group). For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 1.8. The ARPI application for checking panels (25 m × 25 m) at STX (© CLARTE - STX)

Figure 1.9. Left: digital model of the liver and its vascular network, created using a patient’s CT scan and adapted to real-time simulations. Center: simulation of electrophysiological activity of the heart, parametrized using patient data. Right: simulation of the cryoablation of a renal tumor and its calculation grid (in yellow and red). For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 1.10. General principle of laparoscopic surgery: miniaturized instruments and a camera are introduced into the abdomen through small incisions. The surgeon then operates using a monitor that displays what the camera captures. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 1.11. Micro-surgery is also a field of application where simulations can be developed for learning. Here, we have the simulation of a cataract operation and its force-feedback system (© HelpMeSee)

Figure 1.12. Vascular surgery relies on microsurgical instruments that navigate the vascular network until they reach the pathology. The visualization of the intervention is carried out through a real-time X-ray imaging system called fluoroscopy

Figure 1.13. Modeling anatomy, as well as creating geometric models adapted to different calculations, is the first key step in simulation for learning. It can provide different levels of detail. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 1.14. Left: finite element mesh of the liver, made up of tetrahedra and hexahedra. Center: simulation of the interaction between a radiofrequency electrode and the liver, which requires computing strains and calculating the contacts between the instrument and the organ. Right: visual model of the liver, with realistic rendering using textures and different lighting models (shaders). For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 1.15. Left: finite element mesh of the liver, composed of 1500 tetrahedra, with a computation time of 8 ms (i.e. 125 images/second). Center: finite element mesh of the liver composed of 4700 tetrahedra, with a computation time of 25 ms (i.e. 40 images/second). Right: finite element mesh of the liver composed of 21,600 tetrahedra, with a computation time of 140 ms (i.e. 7 images/second)

Figure 1.16. Examples of interactions between the virtual models of organs and the instruments. Left: simulation of the navigation of a catheter in vascular surgery. Center: simulation of an incision in laparoscopic surgery. Right: simulation of a suture in laparoscopic surgery. The interactions are complex in all three cases and in the first and the last examples, the interactions involve other deformable structures apart from the organ itself (© Mentice (left), 3D systems (LAP Mentor) (right))

Figure 1.17. Examples of medical images used for diagnosis or planning. Left: image taken from a CT scan. Center: image from an MRI scan. Right: labeled image indicating the different anatomical structures visible in the image

Figure 1.18. Planning of a hepatic surgery in virtual reality, using 3D reconstructions of the patient’s anatomy. Here, the regions of the liver containing the tumor are clearly marked in order to estimate the liver volume, which will remain an essential criterion for post-operation survival (© IRCAD & Visible Patient). For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 1.19. Examples of the simulations associated with surgical planning. Left: patient-specific simulation of a vascular surgery. Center: simulating the insertion of an electrode in a deformable model of the brain to plan a deep brain stimulation. Right: combining a biomechanical model and an electrophysiological model of the heart, configured using data recorded from a patient. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 1.20. Augmented reality in the operation theater. Left: hybrid operation theatre integrating different imaging systems that allow the visualization of the patient’s internal anatomy during an operation. Center: 3D reconstruction of the vertebra before a vertebral column surgery. Right: view in AR facilitating the positioning of a vertebral screw (© Philips)

Figure 1.21. Example for the use of a navigation system in surgery. We can see the cameras used to track the movement of the instruments and the markers situated on the instruments and/or on the organ to facilitate the repositioning of the virtual view, depending on the surgical view. This approach does not manage deformations in the organ nor the visual overlapping of the virtual model and the real image (© CAScination)

Figure 1.22. 3D reconstruction of the surface of the liver, using a stereo-endoscopic image. Left: left image with extraction of points of interest (in green). Center: partial 3D reconstruction of the liver based on these points of interest. Right: right image with extraction of points of interest (in green). For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 1.23. Use of radio-opaque markers to match the pre-operative and intra-operative data. Left: CyberKnife system for radiotherapy. Center: pre-operative image showing the tumor and the markers placed on the periphery. Right: double X-ray beam to identify the 3D position of the markers during the intervention (© Accuray Incorporated)

Figure 1.24. Different steps in a hepatic surgery, clearly showing the amplitude of deformations of the liver. We can see that despite the significant deformation, the virtual model remains correctly positioned on the laparoscopic image. The images from top to bottom show the different anatomical structures that are easy to visualize or hide, depending on the surgeon’s requirements. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 1.25. Google Maps: 2D map (© Google Maps)

Figure 1.26. Google Maps: 3D view (© Google Maps)

Figure 1.27. Google Maps: Streetview (© Google Maps)

Figure 1.28. AR application from Here indicating the route to follow (© Here)

Figure 1.29. An AR view showing Points of Interest (© Nokia Live)

Figure 1.30. Old synthetic image (© Archivideo)

Figure 1.31. Modern-day synthetic image (© Kreaction)

Figure 1.32. Virtual reality outdoors (© Rennes Métropole)

Figure 1.33. Augmented reality associated with a ground-plan (© Artikel)

Figure 1.34. Ixina Kitchen (© Ixina - Dassault Systèmes)

Figure 1.35. Image in an immersive room (© IRISA)

Figure 1.36. Database built from aerial data (© Rennes Métropole)

Figure 1.37. A virtual Paris, Archivideo (© Archivideo)

Figure 1.38. Google Maps (© Google Maps)

Figure 1.39. RennesCraft (© Rennes Métropole - Hit Combo)

Figure 1.40. RennesCraft (© Rennes Métropole - Hit Combo)

Figure 1.41. HUMANS: character-centered approach (© EMISSIVE)

Figure 1.42. #(FIVE,SEVEN): Approach centered on using predefined scenarios (© IRISA)

Figure 1.43. 3D-augmented Ballet, Biarritz, 2010 [CLA 12] (© Frédéric Nery)

Figure 1.44. L’arbre Intégral (The Integral Tree) (2016) [GAE 16]

Figure 1.45. An interactive reconstitution of The Boullongne, a 17th-Century ship (© Inria)

Figure 1.46. Interaction with a tangible object: the gallic weight (© IRISA)

2 The Democratization of VR-AR

Figure 2.1. Example of rigid bodies (© Wikimedia Commons - Vasquez88).

Figure 2.2. Samsung Gear VR (© Samsung)

Figure 2.3. Oculus Rift V1 (© Oculus)

Figure 2.4. HTC Vive (© HTC)

Figure 2.5. Microsoft HoloLens (© Microsoft)

Figure 2.6. Example of a visiocube with five faces: the SAS 3 (© CLARTE)

Figure 2.7. Reality Center marketed by Barco (© Barco)

Figure 2.8. Comparison of the field of vision for different optical see-through vision systems (© Wikimedia Commons - Mark Wagner). For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 2.9. Interaction cycle starting from the user’s action until the perception of the result of this action. Developing a VR-AR application requires collecting data from the input devices, processing this information and deducing the sensory feedback to produce, then transmitting this information to the output devices

Figure 2.10. Example of the interaction in virtual reality between (a) a real defender, fitted with a VR headset, and (b) a virtual attacker who may or may not use a body swerve to go around him

Figure 2.11. Example of the graphic editor of the game engine Unity, which makes it possible to easily manage the visual layout of a scene, the sound, the camera, placement, etc. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 2.12. Example for the configuration of a five-face peripheral visualization device, using MiddleVR. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

3 Complexity and Scientific Challenges

Figure 3.1. Interactive fracture in the material

Figure 3.2. Interactive fracture in the material

Figure 3.3. The steps in the detection of collision

Figure 3.4. Above: 512 non-convex objects fall on a plane floor. Below: 500 objects are progressively inserted. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 3.5. Interaction with a sheet that falls on its side on an irregular surface

Figure 3.6. a) Illustration of a reconstruction project at Roland-Garros by Digital District. b) Populating a scene in a street in the film Florence Foster Jenkins by Union VFX. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 3.7. Pyramid of behaviors

Figure 3.8. The impact of interaction with the environment on the goals and internal state of a virtual character [PAR 09]. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 3.9. Action-perception cycle

Figure 3.10. Examples of potential perceptual mismatches. Left, in a projection-based system, objects exhibiting negative parallax can be wrongly occluded by real objects (user’s hand) as the projection of the virtual objects in the screen can be occluded. Right, in obtrusive displays, if the user’s body is not correctly tracked, proprioceptive and visual channels could differ, which would require motor recalibration

Figure 3.11. The Scale 1 (left) and Able 7D (right) interfaces from Haption (©PSA Peugeot Citroën and Haption)

Figure 3.12. IHS10 Force feedback gloves (left) and MANDARIN (right) from CEA (© CEA)

Figure 3.13. The multimodal technical gesture training platform -SKILLS (© CEA)

Figure 3.14. Milgram and Kishino’s reality–virtuality continuum [MIL 94]

Figure 3.15. Interactions and transfers between the real world and virtual worlds

Figure 3.16. Pose calculation (see the beginning of section 3.2.2)

Figure 3.17. Using a marker to locate a camera. Markers facilitate the pose computation of a camera, but cannot be used for all applications (© Daniel Wagner)

Figure 3.18. Image-based spatial localization. If the spatial position of several points in the scene is known, as well as their reprojections in the image, it is then possible to localize the camera in the same reference as these points

Figure 3.19. “Points of interest” detected automatically in two images of the same scene. These points correspond to prominent sites in the images and most of them correspond to the same physical points in both images. They may be used, for example, to localize the camera if their positions in 3D are known. Many points are detected on certain objects and very few on others, for instance, here, on the tablecloth and mug, respectively. Objects such as the mug are thus more difficult to use to localize the camera. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 3.20. Realistic rendering. Once the point of view is known (a), the parts of the virtual objects situated behind the real objects must be identified and deleted from the final rendering (b). The light interactions between the real and virtual must also be rendered. Here, removing hidden sections and throwing a shadow onto the car helps the user perceive it in the desired position. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 3.21. A representation of the three main scientific challenges that arise within the 3D interaction loop

Figure 3.22. Illustration of an interaction technique called the “Thing” that uses a tactile tablet in order to capture the movements of the hand and animate a virtual hand [ACH 15]

Figure 3.23. Illustration of a new category of interfaces where the whole body of the user is used to interact with virtual worlds. The interface, called “Joyman”, uses human equilibrioception to establish the law of control that makes it possible to navigate virtual environments [MAR 11]

Figure 3.24. Photograph of new physical models that allow the modeling of virtual environments made up of solid, deformable and liquid objects. They also allow the user to interact with two haptic devices [CIR 11a]

Figure 3.25. Visual fatigue and discomfort: context and terminology, as given by [URV 13b]

Figure 3.26. Accommodation-vergence conflict

Figure 3.27. Sensory and cognitive constraints in stereoscopic vision, according to [URV 13b]

Figure 3.28. The count and distribution of usability problems identified in two virtual environments based on the technique used to identify the problems: expert inspection, documentary inspection and user test; from [SCH 14]

4 Towards VE that are More Closely Related to the Real World

Figure 4.1. The Microsoft HoloLens augmented reality headset (© WikiMedia)

Figure 4.2. Adaptation of the projection to the viewer’s point of view: demonstration of optical camouflage by Tachi Lab in 2003 [INA 03]

Figure 4.3. Absolute and relative poses. (a) Absolute pose computation makes it possible to insert virtual segments, with respect to the real scene, but requires having at least the geometric reference data. (b) Relative pose computation is simpler to implement, as it directly estimates the geometry of the scene in the form of primitives (points in this case). However, this only allows the insertion of virtual elements in a marker that is dependent on the session and therefore different each time

Figure 4.4. The importance of matching for spatial localization. If we can match elements from an image with those of a reference image, and if the spatial position of these elements is known, we can then calculate the camera’s pose for this image

Figure 4.5. Interaction using a graphic interface. Image: Diota (© Diota)

Figure 4.6. A combination of movements of the telephone and tactile gestures for the manipulation of 3D objects in mobile augmented reality (according to [MAR 14b]). For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 4.7. Examples of manipulation using tangible objects. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 4.8. Interactions in spatial augmented reality. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 4.9. InForm [FOL 13] – modification of the shape of an augmented object (© MIT). For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 4.10. tBox 3D manipulation tool for tactile screens [COH 11] (© Inria – Potioc). For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 4.11. Tactile interaction on a large screen for the visualization of scientific data [KLE 12] (© Inria – AVIZ). For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 4.12. Cubtile (top) and Toucheo (bottom), two examples of devices that use tactile interaction for the manipulation of 3D objects (© Immersion – Inria Potioc). For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

5 Scientific and Technical Prospects

Figure 5.1. Left: a haptic editor that makes it possible to associate a generic haptic channel with an immersive media. Right: immersive experiences are not only limited to totally immersive experiences, but can also take into account and occupy the user’s personal space (© Technicolor). For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 5.2. A 360° video becomes a social virtual reality experience. On the left: the user is embodied by a character rendered in real time – hand lowered on the right – added to the video. The user’s reflection can be seen in the astronaut’s helmet – which is completely a part of the video. On the right: a multi-user experience, bringing together users from different points of view, in different forms within the film (Orbit2, © Technicolor)

Figure 5.3. Parallax, illustrated here by the relative movement of one object with respect to others, through a simple lateral translation of the camera. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 5.4. MindShooter, a video game inspired by the famous Japanese game “Space Invaders” and controlled here by a reactive BCI using SSVEP [LEG 13]

Figure 5.5. The multi-player BCI game “BrainArena”: both players are fitted with EEG helmets and can score goals to the left or right together, or can play against one another, by imagining movements of the left or right hand

Figure 5.6. The VR application “Virtual Dagoba”: the user is fitted with a wireless EEG helmet and is immersed in an immersion room (Immersia, IRISA/Inria, Rennes) and a 3D scene inspired by the universe of the “Star Wars” films. The user can take the spacecraft up by concentrating (or bring it down by relaxing). For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 5.7. Virtual reality simulator for haptic and BCI-based training: virtual aids (visual and haptic feedback) are activated based on the user’s cognitive load and make it possible to guide the user in their task of inserting a needle to carry out a biopsy on a tumor in the liver

Figure 5.8. Concept of pseudo-haptic texture: simulation of a bump over which the user moves the mouse cursor

Figure 5.9. Simulation of the rigidity of an object through pseudo-haptic feedback [LÉC 00]

Figure 5.10. The “Meta Cookie+” system proposed by [NAR 11]. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Figure 5.11. “Haptic Motion”: experimental device (taken from [OUA 14])

Figure 5.12. Examples for body-ownership illusions (from [KIL 15]): Pinocchio Illusion (left) and Rubber Hand Illusion (right)

Figure 5.13. Device used to influence the feeling of virtual incarnation of a participant using an HMD and a motion-tracking system (taken from [BAN 16])

6 The Challenges and Risks of Democratization of VR-AR

Figure 6.1. The classic “perception, decision, action” loop

Figure 6.2. Sensorimotor incoherences disrupt the level of immersion and sensorimotor


Series Editor

Jean-Charles Pomerol

Virtual Reality and Augmented Reality

Myths and Realities

Edited by

Bruno Arnaldi

Pascal Guitton

Guillaume Moreau

First published 2018 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd

27-37 St George’s Road

London SW19 4EU

UK

www.iste.co.uk

John Wiley & Sons, Inc.

111 River Street

Hoboken, NJ 07030

USA

www.wiley.com

© ISTE Ltd 2018

The rights of Bruno Arnaldi, Pascal Guitton and Guillaume Moreau to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2018930832

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-78630-105-5

Preface

“Virtual reality”, a strange oxymoron, is back in common use in the media, just as it was in the early 1990s, a quarter of a century ago, a period that today’s young innovators are not very familiar with. Yes, at the risk of shocking some people, we must reveal that this science and the associated techniques are no invention of the 21st Century but date back well into the previous century!

Today, we are witnessing the renaissance and democratization of virtual reality, with its share of relevant and effective applications, as well as a host of technological difficulties that no developer can afford to ignore. Some enthusiasts wish to create new applications and believe that skills in innovation are all that is required. However, this approach is doomed to failure unless it is preceded by a detailed study of the state of the art of virtual reality techniques and a knowledge of the fundamentals and existing uses. Many young entrepreneurs have contacted me, thinking they have a novel virtual reality application when they do not even have a basic understanding of this science or its techniques. I have had to tell others: “but this already exists in the industry; it is already being marketed by companies that are over twenty years old”. The latest innovation, the “low-cost” visioheadset or immersive headset, may have sparked off a mad buzz in the media, but the field of virtual reality existed long before this! 2016 was not 1 V.R. (the first year of our science, Virtual Reality)! However, the considerable decrease in the price of visioheadsets has made it possible to open this technology up to large-scale use. The media and websites dedicated to virtual reality are most often run by non-specialists and abound with indiscriminately proposed applications: some of these have existed for several years now, while others would be inappropriate or even crazy. Virtual reality is not a magic wand. Let us remember that it is not sufficient to use an innovative technology for its own sake. This innovation must be made functional for the user, using new technological devices, whether a visioheadset or any other equipment.

Research and development in virtual reality has been undertaken for more than a quarter of a century by the VR community in France and in other parts of the world. It would be a great misfortune to be unaware of this work. However, if you are reading this now, then you have made the right choice! The fruit of all the research and professional developments in the field over the past decade is now presented in this volume. And who better than Bruno Arnaldi, Pascal Guitton and Guillaume Moreau to guide you through this arduous journey through the past 10 years in R&D in virtual development, as well as to give a glimpse of what the future may hold?

The three editors of this book are major actors in the field of virtual reality and augmented reality. All of them have participated in developing research in France, via the Groupe de Travail GT-RV (GT-VR Work Group) at CNRS (1994) and then through the Association Française de Réalité Virtuelle (The French Virtual Reality Association), which they co-founded in 2005 and in which they are very active members, as President, Vice-President or members of the administrative council. This association has made it possible to efficiently structure the entire community (teachers, researchers, industrialists and solution providers). In parallel to this, thanks to their enthusiastic and indispensable support, I was able to organize and edit a collective work with contributions from more than a hundred authors, over five volumes: the Virtual Reality Treatise, a project with three coordinators. However, the third edition of that treatise is now 10 years old, and we needed a more recent publication to step into the breach.

It is essential to have a strong basic knowledge of virtual reality before plunging into the field, whether you are a student or an entrepreneur. The contents of this book, to which 30 authors have contributed, cover all the current problems and research questions, as well as the commercially available solutions: the immersion of a user, the user’s interfacing with the artificial space and the creation of this artificial space. All the technology and software available today are discussed here. The human factor is also taken into account, and there is a detailed description of methods of evaluation. There is also a section devoted to the risks associated with the use of visioheadsets.

A new community has recently emerged in France, under the Think Tank UNI-VR, bringing together professionals from the world of movies and audiovisual material. Using new 360° cameras, which enable the creation of artificial worlds made out of 360° images rather than synthetic images, this group aims to create a new art, with two complementary approaches: one produces “360 videos”, where the user remains a spectator, but with a bodily and proprioceptive immersion in the 360° video; the other designs “VR videos”, where the user becomes a “spect-actor”, able to interact with the story that unfolds, with the characters and with the artificial environment, this being the authentic field of virtual reality. This artistic goal is close to that of “interactive digital arts”, even though these two communities do not know much about each other. Towards the end of the 1980s, French and international artists in the digital arts appropriated virtual reality to create interactive artistic creations (“les pissenlits” (The Dandelions) by E. Couchot, M. Bret and M-H. Tramus, 1988; “L’autre” (The Other) by Catherine Ikam, 1991). A journalist from “Les Cahiers du Cinéma” once interviewed me, stating that “virtual reality is the future of the movies!” A strange remark, when we know of the antagonism between the movies (where the spectator is passive) and virtual reality (where the user is active, interacting with the artificial environment)! Here was another journalist carried away by an innovation without bothering to learn about its fundamentals and its impact on the individual! However, like all specialists, I did not imagine that 20 years later 360° video would also enable the creation of an artificial world, where a user could be immersed in the heart of a film. By allowing the user to interact here, we enter the field of virtual reality or augmented reality, blending the real world and the artificial space. Unlike cinema, here there is no longer “a story to be told” but “a story to be lived”. With this book, readers have a source of detailed information that will allow them to successfully develop their own “VR videos”.

However, the digital modeling of an artificial world and its visual representation through synthetic images will remain the chief avenue for the development of the uses of virtual reality. For at least 15 years now, professional applications (e.g. industrial and architectural design, training and learning, health) have made use of this. The different communities must collaborate more closely on theorizing this discipline and its techniques, which are exhaustively presented in this book by Bruno Arnaldi, Pascal Guitton and Guillaume Moreau. The merits of this book cannot be overstated: it must be bought!

Philippe Fuchs
January 2018

Introduction

It can have escaped no one that 2016 and 2017 were often featured in the media as “The Time” for virtual reality and augmented reality. It is no less obvious that in this field of technology, breakthroughs are announced regularly, each more impressive than the last. In the face of this media clamor, it is useful to step back and take a pragmatic look at some historical facts and information:

– The first of these is the fact (however difficult to accept) that virtual reality and augmented reality date back several decades and that there is a large international community working on these subjects. This work is being carried out both at the scientific level (research teams, discoveries, conferences, publications) and at the industrial level (companies, products, large-scale production). It is also useful to remember that many companies, technological or not, have been successfully using virtual reality and augmented reality technologies for many years now.

– Many of these technological announcements talk about the design of “new” virtual reality headsets (e.g. HTC Vive, Oculus Rift) and augmented reality headsets (e.g. HoloLens). But the fact is that the invention of the first “visioheadset”1 dates back almost 50 years, to Ivan Sutherland’s seminal work [SUT 68].

– Let us also note that these “visioheadsets” only represent a small part of the equipment used in virtual reality, whether for display (with projection systems, for example), motion-capture or interaction.

– The concept and applications of virtual reality are described in the series Le traité de la réalité virtuelle (The Virtual Reality Treatise), an encyclopedic work produced collectively by many French authors (both academics and voices from industry), the breadth and scope of which remain unmatched even today. The different editions are:

- the first edition in 2001 (Presses de l’Ecole des Mines), written by Philippe Fuchs, Guillaume Moreau and Jean-Paul Papin with 530 pages;

- the second edition in 2003 (Presses de l’Ecole des Mines), edited by Philippe Fuchs and Guillaume Moreau with help from 18 contributors, running to 930 pages in 2 volumes;

- the third edition in 2005 (Presses de l’Ecole des Mines), edited by Philippe Fuchs and Guillaume Moreau, with over 100 contributors, running to 2,200 pages in 5 volumes;

- an English version “Virtual Reality: Concepts and Technologies”, in 2011 (CRC Press), edited by Philippe Fuchs, Guillaume Moreau and Pascal Guitton with 432 pages.

– Finally, we must mention the creation of the “Association Française de Réalité Virtuelle” (AFRV) or the French Virtual Reality Association, established in 2005. The association has made it possible to structure the community better by bringing together teachers and researchers from universities and research institutions as well as engineers working within companies. From 2005 onward, the AFRV has been organizing an annual conference that sees presentations, activities and exchanges among participants.

As can be seen from this overview, there are already several communities at the international level as well as a wealth of literature on the subject and anyone who wishes to establish a scientific and/or technological culture will benefit from referring to publications such as [FUC 16] (in French) or [LAV 17, SCH 16], to mention a few.

I.1. The origins of virtual reality

When we talk about historic references relating to virtual reality, we may commence by discussing Plato’s Allegory of the Cave [PLA 07]. In Book VII of Plato’s Republic, there is a detailed description of the experiences of several men chained in a cave, who can only perceive shadows (thrown against the walls of the cave) of what happens in the outside world. The notions of reality and perception, of what is and what is perceived, become the subject of analysis, in particular concerning the passage from one world to the other.

A few centuries later, in 1420, the Italian engineer Giovanni Fontana wrote a book, Bellicorum instrumentorum liber [FON 20], in which he described a magic lantern capable of projecting images onto the walls of a room (see Figure I.1(a)). He proposed that this could be used to project the images of fantastic creatures. This mechanism brings to mind the large immersion system (CAVE) developed several centuries later by Carolina Cruz-Neira et al. [CRU 92] at the University of Illinois.

Figure I.1. a) Diagram of Giovanni Fontana’s magic lantern; b) using the magic lantern. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

In books that recount the history of VR, we often come across the (legitimate) controversy around the first appearance of the term “virtual reality”. Some authors attribute it to Jaron Lanier, during a press conference in 1985, while others attribute it to Antonin Artaud, in his 1938 collection of essays, Le théâtre et son double (published in English as “The Theatre and its Double”) [ART 09].

Artaud was unarguably the inventor of this term, which he used in his collection of essays on theater and, more specifically, in the chapter titled Le théâtre alchimique (“The Alchemical Theatre”). It must be noted that in this volume, Artaud talks at length about reality and virtuality (these words being frequently used in the text). The precise citation where the term “virtual reality” appears is on page 75 of the 1985 edition, in Gallimard’s Folio/Essais collection:

“All true alchemists know that the alchemical symbol is a mirage as the theater is a mirage. And this perpetual allusion to the materials and the principle of the theater found in almost all alchemical books should be understood as the expression of an identity (of which alchemists are extremely aware) existing between the world in which the characters, objects, images and in a general way all that constitutes the virtual reality of the theater develop and the purely fictitious and illusory world in which the symbols of alchemy are evolved”.

Furthermore, a few pages earlier, he speaks about Plato’s Allegory of the Cave.

However, it is clear that Jaron Lanier was the first person to use this term in the sense that it is used in this book, when he used the English term virtual reality. It is also useful to remember that there is a subtle difference between the English term virtual and the French word virtuel (see Chapter 1, Volume 1 of the Virtual Reality Treatise, edition 3). In English, the word means “acting as” or “almost a particular thing or quality”. However, in French, the word indicates “potential”, what is “possible” and what “does not come to pass”. Linguistically speaking, the more appropriate French term would have been “réalité vicariante” – a reality that substitutes or replaces another.

Science-fiction writers, especially those writing in the “speculative fiction” genre (a genre which, as its name indicates, consists of imagining what our world could be like in the future) have also written books that integrate and/or imagine the VR-AR technologies we will discuss in this volume. The list of such books is quite long, and the four books presented here have been chosen simply for the impact they had. In chronological order, these are:

– Vernor Vinge, in his 1981 novella True Names, introduced a cyberspace (without explicitly naming it as such), where a group of computer pirates use virtual reality immersion technology to fight against the government. He is also the creator of the concept of the “singularity”: that point in time when machines will be more intelligent than human beings;

– William Gibson, in his 1984 novel Neuromancer, described a world of networks where virtual reality consoles allow a user to live out experiences in virtual worlds. Gibson “invented” the term cyberspace, which he described as “a consensual hallucination experienced daily by billions of legitimate operators”. This concept of cyberspace spans different worlds: the digital world, the cybernetic world and the space in which we evolve;

– Neal Stephenson, in his 1992 novel Snow Crash, introduced the concept of the metaverse (a virtual, thus fictional, world in which a community, represented by avatars, evolves); a universe like the one in the online virtual world Second Life;

– Ernest Cline, in his 2011 novel Ready Player One, offered us a world where humanity lives in an enormous virtual social network to escape the slums of real life. This network also contains the key to riches, leading to a new kind of quest for the Holy Grail.

Literature is not the only field in which early references to virtual reality set up links between the real and the virtual. For example, we must mention the pioneering work of Morton Leonard Heilig in the world of cinema. Following a project he had been working on since the 1950s, he patented the Sensorama system in 1962. This system allowed users to virtually navigate an urban setting on a motorbike, in an immersive experience based on stereoscopic visualization, the sounds of the motorbike, the reproduction of the engine’s vibrations and the sensation of wind against the rider’s face.

Cinema has quite naturally made use of the emergence of new technologies. In 1992, Brett Leonard directed The Lawnmower Man, starring Pierce Brosnan as a scientist who conducts virtual reality experiments on a man (see Figure I.2). Unsurprisingly, the story revolves around some of their undesirable effects. An interesting point about this film is that during shooting, the actors used real equipment from the VPL Research company set up by Jaron Lanier (which had already filed for bankruptcy by this time). Of course, no one can forget the 1999 film The Matrix, the first film in the Matrix trilogy, directed by the Wachowskis and starring Keanu Reeves and Laurence Fishburne. The plot is centered on frequent journeys between the real and the virtual worlds, the hero’s task being to liberate humans from the rule of the machines by taking control of the matrix. The technology in this film is much more evolved, offering total immersion so credible that the user has only a few clues to tell whether they are in the real or the virtual world. Another cult film, oriented more towards human–machine interaction (HMI) than VR itself, was Steven Spielberg’s 2002 film Minority Report, starring Tom Cruise (see Figure I.3). This film depicts an innovative technology that allows a person to interact naturally with data (and which would serve as inspiration for many future research projects in real labs). These three films are certainly not the only ones that deal with VR; a great many others could be named here. However, these three are iconic in this field.

Figure I.2. A still from the movie The Lawnmower Man

Figure I.3. A still from the movie Minority Report

After having discussed the mentions of VR-AR in different fields of art, it is also interesting to analyze how this technology is used in these contexts. Cinema will become an intensive user of VR, through 360° cinema for instance (on the condition that the spectator finally becomes a spect-actor). In the artistic world, we still have to work out the codes and rules of cinematographic writing that these new operational modes will bring about. In particular, in traditional cinema, the narration is constructed on the principle that the director, through their framing, almost “leads the spectator by the hand” to the point from which they want the spectator to view a particular scenic element. In a context where the spectator can freely choose their own point of view, artistic construction is no longer the same. If we add to this the user’s ability to interact with their environment and therefore modify elements of the scene, the narrative complexity deepens and begins to approach the narrative mechanisms used in video games. Combining real and digital images (mixed reality) is another path for development and study, which will emerge soon.

The world of comic books/graphic novels is also influenced, either through the development of immersive projects (e.g. Magnétique, by Studio Oniride in 2016; http://www.oniride.com/magnetique/) or through the use of VR in the world of a comic series, as is the case with S.E.N.S, a project co-produced by Arte France and Red Corner studio in 2016, inspired by the work of Marc-Antoine Mathieu (see Figure I.4). Indeed, as the universe in VR experiences is not necessarily a reproduction of the real world, it can also be the fruit of pure fantasy, and a comic book world lends itself readily to such experimentation.

Figure I.4. Projet S.E.N.S

I.2. Introduction to the basic concepts

This section aims to briefly describe the fields of VR and AR. We will review the principal concepts for each and provide some definitions2 in order to clearly define the scope of this book. Readers who seek more information on this are invited to consult the Virtual Reality Treatise [FUC 05].

I.2.1. Virtual reality

We will first and foremost remind ourselves that the objective of VR is to allow the user to virtually execute a task while believing that they are executing it in the real world. To generate this sensation, the technology must “deceive the brain” by providing it with information identical to the information the brain would perceive in the real environment.

Let us take an example that we will use for the rest of this section: you have always dreamed of flying a private aircraft without ever having acted on this desire. Well then, a VR system could help you to (virtually) realize this dream, by simulating the experience of flying the plane. To start with, it is essential that you are given synthetic images that reproduce the view from a cockpit, the runway first and then an aerial view of the territory you will fly over. In order to give you the impression of “being in the plane”, these images must be large and of good quality, so that the perception of your real environment is pushed to the background or even completely replaced by that of the virtual environment (VE). This phenomenon of modifying perception, called immersion, is the first fundamental principle of VR. VR headsets, which will be called visioheadsets in this book, offer a good immersion experience as the only visual information perceived is delivered through this device.

If the system also generates the sound of the aircraft engine, your immersion will be greater as your brain will perceive this information rather than the real sounds in your environment, which then reinforces the impression of being in an aircraft. In a manner similar to that of the visioheadset, an audio headset is used, as it can insulate against ambient noise.

A real pilot acts in the real environment by using a joystick and dials to steer the plane. It is absolutely indispensable that these actions be reproduced in the VR experience if we wish to simulate reality. Thus, the system must provide several buttons to control the behavior of the aircraft and a joystick to steer it. This interaction mechanism between the user and the system is the second fundamental principle of VR. It also serves to differentiate VR from applications that offer good immersion but no real interaction. For example, movie theaters can offer visual and auditory sensations of very high quality, but the spectator is offered absolutely no interaction with the story unfolding on the screen. The same observation can be made for “VR videos”, which have recently become quite popular but where the only interaction offered is a change in point of view (360°). While the value of this family of applications is not in question, they do not qualify as VR experiences, as the user is only a spectator and not an actor in the experience.

Let us return to our earlier example: in order to reproduce reality as closely as possible, we must be able to steer the aircraft using a force-feedback joystick, which generates forces to simulate the resistance experienced with a real joystick, due, for example, to air resistance. This haptic information significantly reinforces the user’s immersion in the VE. Moving further towards faithfully reproducing reality, let us imagine that we can provide a real aircraft cockpit fitted with real seats and control apparatus, and that we can perfectly adapt the external screens so that the synthetic images appear naturally in the windows and the windscreen of the aircraft. The impression is then even better, as we give our brain additional visual information (the components of the cockpit), auditory information (the sound of the buttons being clicked or pressed) and haptic feedback (the feeling of being seated in the airplane seat). This type of device will, undoubtedly, convince any brain that it is really seated in a cockpit, piloting an aircraft. And of course, these devices do exist in reality: these are the flight simulators that have been in use for many years, first to train military pilots and then commercial pilots, and available today as entertainment devices for non-pilots who want to feel like they are flying a plane.

On the basis of this example, we can define VR as the capacity given to one (or more) user(s) to carry out a set of real tasks in a virtual environment, this simulation being based on the immersion of the user(s) in the virtual environment through interactive sensory feedback from, and interaction with, the system.

Some remarks on this definition:

– “Real tasks”: in effect, even though the task is carried out in a VE, it is real. For example, you could start learning to fly a plane in a simulator (as real pilots actually do) because you are developing the skills that will then be used in a real aeroplane.

– “Feedback”: this is sensory information (e.g. visual, auditory, haptic) that the computer synthesizes using digital models, that is, descriptions of the form and appearance of an object, the intensity of a sound or of a force.

– “Interactive feedback”: these sensory syntheses result from relatively complex software processing, which therefore takes a certain amount of time. If this duration is too long, our brain perceives the display of one fixed image, then another, destroying any sense of visual continuity and therefore of movement. It is consequently imperative that the feedback be interactive, that is, that its delay remain imperceptible, to obtain a good immersion experience.

– “Interaction”: this term designates the functionalities offered to the user to act on the behavior of the system, by moving around, manipulating and/or displacing objects in the VE, and, symmetrically, the information that the VE then delivers to the user, whether visual, auditory or haptic. Note that without interaction, we cannot refer to the experience as VR.
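The interplay between interaction and interactive feedback described above can be sketched as a frame loop that must complete within a fixed time budget. The 90 Hz target, the function names and the overall structure below are illustrative assumptions, not a description of any particular VR system:

```python
import time

FRAME_BUDGET = 1.0 / 90.0  # 90 Hz is a common visioheadset refresh rate (assumption)

def render_loop(poll_input, update_world, render, running):
    # Each iteration: interaction (the user acts on the system), simulation,
    # then sensory feedback. If an iteration exceeds the budget, visual
    # continuity breaks and immersion degrades.
    while running():
        start = time.perf_counter()
        actions = poll_input()    # interaction: read the user's devices
        update_world(actions)     # update the virtual environment
        render()                  # feedback: visual/auditory/haptic output
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET:
            print(f"frame over budget: {elapsed * 1000:.1f} ms")
        else:
            time.sleep(FRAME_BUDGET - elapsed)  # wait out the rest of the frame
```

At 90 Hz the whole budget is about 11 ms per frame, which is why the text insists that the processing delay remain imperceptible.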

Generally speaking, why do we use VR? This technology was developed to achieve several objectives:

– Design: engineers have long used VR to improve the design of a building or a vehicle, either by moving within or around these objects or by using them virtually in order to detect any design flaws. These tests, once carried out using physical models of increasing complexity, up to scale 1, have progressively been replaced by VR experiences, which are less expensive and faster to produce. It must be noted that these virtual design operations have been extended beyond tangible objects, for example to movements (surgical, industrial, sports) or complex protocols.

– Learning: as we have seen in the example above, it is possible today to learn to pilot any kind of vehicle: plane, car (including F1 cars), ship, space shuttle or spaceship, etc. VR offers many advantages, first and foremost safety during learning, but also ease of replication and the possibility of intervening in the pedagogic scenario (simulating a vehicle breakdown or a weather event). Note that these learning applications have extended beyond steering vehicles to more complex processes, such as managing a factory or a nuclear power plant from a control room, or even learning to overcome phobias (of animals, empty spaces, crowds, etc.) through VR-based behavioral therapy.

– Comprehension: VR can support understanding through the interactive feedback it provides (especially visual), in order to better grasp certain complex phenomena. This complexity can result from information being difficult or even impossible to access: it may no longer exist, may be hard to reach (underground or underwater in oil prospecting, or the surface of a planet we wish to study), may be too voluminous for our brain to take in (big data) or may be imperceptible to the human senses (temperature, radioactivity). In many contexts, we seek this deeper understanding to enable better decision-making: where do we drill for oil? What financial action should we take? And so on.

To conclude, it is important to note that very precise and formal definitions for VR exist. For example, in Chapter 1 of Volume 1 (which presents the fundamental principles of the domain) of the  Virtual Reality Treatise [FUC 05], we find this definition: “virtual reality is a scientific and technical field that uses computer science and behavioral interfaces in order to simulate, in a virtual world, the behavior of 3D entities that interact with each other in real time and with one or more users immersed in a pseudo-natural manner through sensorimotor channels”.

I.2.2. Augmented reality

The goal of AR is to enrich the perception and knowledge of a real environment by adding digital information relating to this environment. This information is most often visual, sometimes auditory and rarely haptic. In most AR applications, the user visualizes synthetic images through glasses, headsets, video projectors or mobile phones/tablets. The distinction between these devices lies in the fact that the first three superimpose information onto natural vision, while the fourth offers only an indirect view through its screen, which leads certain authors to exclude it from the field of AR.

To illustrate this, let us take the example of a user who wishes to build a house. While initially they will only have blueprints, AR will allow them to move around the plot, visualize the future building (by overlaying synthetic images onto their natural vision of the real environment) and perceive its general volumes and its placement in the landscape. As construction proceeds, the user can compare several design and/or furnishing possibilities by visualizing painted walls or furniture arranged in different layouts in a structure that is still under construction. Going beyond interior design and furnishing, it is also possible for an electrician to visualize the placement of electrical conduits and for a plumber to visualize the placement of pipes, even though these will be hidden behind concrete screeds or concealed in a wall. In addition to placement, the electrician can also see the diameters of the cables, and thus the strength of the current they carry, and the plumber can visualize the color, and thus the temperature, of the water being supplied.

Why develop AR applications? There are several important reasons:

– Driving assistance: originally intended to help fighter jet pilots by displaying crucial information on the cockpit screen so that they would not need to look away from the sky to check dials or displays (which could be critical in combat), AR gradually extended assisted driving to other vehicles (civil aircraft, cars, bikes), including navigation information such as GPS directions.

– Tourism: enhancing the capabilities of the audio-guides available to visitors of monuments and museums3, certain sites offer applications that combine images and sound.

– Professional gesture assistance: in order to guide certain professional users in their activities, AR can overlay additional information onto their vision of the real environment. This information may not be visible in the real environment, as it is often “buried”. Thus, a surgeon may operate with greater certainty by visualizing blood vessels or anatomical structures that are otherwise invisible to them, and a worker assembling an aircraft may superimpose a drilling diagram directly onto the fuselage, without having to take measurements, gaining speed, precision and reliability.

– Games: while AR was popularized by Pokémon Go in 2016, it made inroads into this field long ago, through augmented versions of games such as Morpion (tic-tac-toe), PacMan or Quake. It is clear that this sector will see much more development based on this technology, which makes it possible to combine the real environment with fictional adventures.

Even though they share algorithms and technologies, VR and AR can be clearly distinguished from each other. The main difference is that in VR the tasks executed remain virtual, whereas in AR they are real. For example, the virtual aircraft that you piloted never really took off and thus never emitted CO2 into the real world, but the electrician using AR may cut through a gypsum board partition to install a real switch that turns a real light on or off.

As regards AR, compact definitions have been proposed by many scientists. For example, in 1997, Ronald T. Azuma defined AR as a collection of applications that verify the following three properties [AZU 97]:

1) a combination of the real and the virtual;

2) real-time interaction;

3) integration of the real and the virtual (e.g. registration, occlusion, illumination consistency).
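As a minimal sketch of how these three properties meet in practice, the per-pixel combination of real and virtual imagery, with occlusion resolved by comparing depths, can be illustrated as follows. The data layout and function name are hypothetical, chosen only for readability:

```python
def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    # Combine real and virtual imagery (property 1): a virtual pixel replaces
    # the real one only where virtual content exists (not None) and its depth
    # places it in front of the real surface (occlusion, part of property 3).
    # Images are lists of rows of pixel values; depths are in meters.
    out = []
    for y, row in enumerate(real_rgb):
        out_row = []
        for x, real_px in enumerate(row):
            v = virt_rgb[y][x]
            if v is not None and virt_depth[y][x] < real_depth[y][x]:
                out_row.append(v)        # virtual object in front: it occludes the real
            else:
                out_row.append(real_px)  # real surface in front, or no virtual content
        out.append(out_row)
    return out
```

Real-time interaction (property 2) then amounts to running this composition, together with pose tracking, fast enough for the overlay to stay registered as the user moves.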

I.3. The emergence of virtual reality

I.3.1. A brief history

Figure I.5. Evolution of the field of virtual reality. For a color version of this figure, see www.iste.co.uk/arnaldi/virtual.zip

Another analysis of the state of virtual reality today allows us to draw a timeline for the stages in the evolution of this field (see Figure I.5). The broad stages of evolution are:

– before 1960 – the foundations: numerous approaches and methods (still used in virtual reality today) were perfected well before the birth of “virtual reality” as a field. We have the first representations of reality through paintings (prehistoric), perspective (Renaissance), panoramic displays (18th Century), stereoscopic vision and cinema (19th Century) and the British pilot training