Multimedia Security 2

William Puech
Description

Today, more than 80% of the data transmitted over networks and archived on our computers, tablets, cell phones or clouds is multimedia data – images, videos, audio, 3D data. The applications of this data range from video games to healthcare, and include computer-aided design, video surveillance and biometrics. It is becoming increasingly urgent to secure this data, not only during transmission and archiving, but also during its retrieval and use. Indeed, in today's "all-digital" world, it is becoming ever easier to copy data, view it without authorization, steal it or falsify it.

Multimedia Security 2 analyzes issues relating to biometrics, protection, integrity and encryption of multimedia data. It also covers aspects such as crypto-compression of images and videos, homomorphic encryption, data hiding in the encrypted domain and secret sharing.

Page count: 489

Year of publication: 2022




Table of Contents

Cover

Title Page

Copyright

Foreword by Gildas Avoine

Foreword by Cédric Richard

Preface

1 Biometrics and Applications

1.1. Introduction

1.2. History of biometrics

1.3. The foundations of biometrics

1.4. Scientific issues

1.5. Conclusion

1.6. References

2 Protecting Documents Using Printed Anticopy Elements

2.1. Introduction

2.2. Document authentication approaches: an overview

2.3. Print test shapes

2.4. Copy-sensitive graphical codes

2.5. Conclusion

2.6. References

3 Verifying Document Integrity

3.1. Introduction

3.2. Fraudulent manipulation of document images

3.3. Degradation in printed and re-scanned documents

3.4. Active approaches: protection by extrinsic fingerprints

3.5. Passive approaches: detecting intrinsic characteristics

3.6. Conclusion

3.7. References

4 Image Crypto-Compression

4.1. Introduction

4.2. Preliminary notions

4.3. Image encryption

4.4. Different classes of crypto-compression for images

4.5. Recompressing crypto-compressed JPEG images

4.6. Conclusion

4.7. References

5 Crypto-Compression of Videos

5.1. Introduction

5.2. State of the art

5.3. Format-compliant selective encryption

5.4. Image and video quality

5.5. Perspectives and directions for future research

5.6. Conclusion

5.7. References

6 Processing Encrypted Multimedia Data Using Homomorphic Encryption

6.1. Context

6.2. Different classes of homomorphic encryption systems

6.3. From theory to practice

6.4. Proofs of concept and applications

6.5. Conclusion

6.6. Acknowledgments

6.7. References

7 Data Hiding in the Encrypted Domain

7.1. Introduction: processing multimedia data in the encrypted domain

7.2. Main aims

7.3. Classes and characteristics

7.4. Principal methods

7.5. Comparison and discussion

7.6. A high-capacity data hiding approach based on MSB prediction

7.7. Conclusion

7.8. References

8 Sharing Secret Images and 3D Objects

8.1. Introduction

8.2. Secret sharing

8.3. Secret image sharing

8.4. 3D object sharing

8.5. Applications for social media

8.6. Conclusion

8.7. References

List of Authors

Index

End User License Agreement



SCIENCES

Image, Field Director – Laure Blanc-Feraud

Compression, Coding and Protection of Images and Videos, Subject Head – Christine Guillemot

Multimedia Security 2

Biometrics, Video Surveillance and Multimedia Encryption

Coordinated by

William Puech

First published 2022 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd

27-37 St George’s Road

London SW19 4EU

UK

www.iste.co.uk

John Wiley & Sons, Inc.

111 River Street

Hoboken, NJ 07030

USA

www.wiley.com

© ISTE Ltd 2022

The rights of William Puech to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s), contributor(s) or editor(s) and do not necessarily reflect the views of ISTE Group.

Library of Congress Control Number: 2022930820

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-78945-027-9

ERC code:

PE6 Computer Science and Informatics

PE6_5 Cryptology, security, privacy, quantum cryptography

PE6_8 Computer graphics, computer vision, multi media, computer games

Foreword by Gildas Avoine

Gildas AVOINE

Director of the CNRS Computer Security Research Network, INSA Rennes, Univ Rennes, IRISA, CNRS, France

French academic and industrial research in cybersecurity is at the forefront of the international scene. While France cannot claim to have sovereignty over cybersecurity technologies, it undeniably possesses a wealth of skills, as French expertise covers all areas of cybersecurity.

Research in cryptography illustrates French excellence, but it should not overshadow other domains where French influence is just as remarkable, including formal methods for security, protection of privacy, security of systems, software and networks, security of hardware systems and multimedia data security, according to the classification proposed by the CNRS Computer Security Research Network (GdR).

The security of multimedia data is covered in this book. The evolution of our society from the written word to sound and image, with the notable arrival of the mobile phone and the democratization of the Internet, has brought about new security needs. This transformation of our society is only beginning, and the recent surge in videoconferencing shows that research into the security of multimedia data is constantly confronted with new scientific challenges.

The complexity of the subject and its multidisciplinary dimension, which primarily combines signal processing and cryptography, are perfectly illustrated by the variety of subjects detailed throughout this book. The chapters thus reveal the scientific obstacles to be dealt with by the community, by anchoring them in real scenarios, such as the fraudulent copying of films, the deception of artificial intelligence or the spreading of doctored images on social media.

This book, made up of two volumes, is thus destined to become a reference in the field of multimedia data security: an introduction that is both comprehensive and in-depth, which students, engineers and researchers will appreciate across more than 600 pages enriched with numerous references. Everyone can indulge in their favorite kind of reading, whether linear or random.

Finally, I would like to thank all of the authors for their commitment to supporting the scientific community, and I would particularly like to thank William Puech for editing this edition of the book. William, alongside Patrick Bas and then Caroline Fontaine, is responsible for the theme of multimedia data security within the Computer Security GdR, thus allowing the entire cybersecurity community to better understand this fascinating subject.

Happy reading!

Foreword by Cédric Richard

Cédric RICHARD

Director of the CNRS GdR ISIS, Côte d’Azur Observatory, University of Côte d’Azur, Nice, France

With the relentless increase in bandwidth and storage space, as well as the proliferation of mobile devices and the development of new standards, multimedia data is affecting our societies by changing the way that we access data and information. It is also changing our relationship to culture, transforming interactions between individuals and their relationships with organizations. Multimedia activities are present in all major sectors (security, health, telecommunications, etc.) and have supported their successive developments through the common backbone they build, from the information medium to the application and the user.

In this context, particularly critical questions about multimedia data security arise: protecting confidentiality and copyright, verifying integrity, analyzing and authenticating content, tracing copies and controlling access. For example, the protection strategies implemented must take into account the specific needs of multimedia while meeting the requirements of the communication channel, thus striking a compromise. The wrong approach can lead to excessive coding of the data, or the alteration of its perceptual quality, and thus to failure to meet the targeted security objectives.

As an interface discipline, the art of multimedia security is difficult!

However, with this two-part book, William Puech and his co-authors take up the challenge brilliantly by painting an exhaustive and current panorama of multimedia security. They offer an in-depth analysis of authentication and hidden data embedding methods, biometric technologies and multimedia protection and encryption processes. Without giving in to an outdated formalism that could hinder the fluidity of their presentations, the authors captivate the reader by presenting the state of the art of each subject directly and in an illustrative way.

William Puech and the contributors to this book have produced a considerable body of work for the French-speaking scientific communities in information, signal, image, vision and computer security, represented by the two corresponding CNRS GdR research groups. I would like to express my deepest gratitude to them.

Preface

William PUECH

LIRMM, Université de Montpellier, CNRS, France

Nowadays, more than 80% of the data transmitted on social media and archived on our computers, tablets, mobile phones or in the cloud is multimedia data. This multimedia data mainly includes images (photographs, computer-generated images), videos (films, animations) and sound (music, podcasts), but also, increasingly, three-dimensional (3D) data and scenes, with applications ranging from video games to medical data, by way of computer-aided design, video surveillance and biometrics. It is becoming necessary, urgent, not to say vital, to secure this multimedia data during its transmission and archiving, but also during its visualization. In an all-digital world, it is becoming increasingly easy to copy this multimedia data, to view it without rights, to appropriate it, but also to counterfeit it.

Over the last 30 years, we have observed an expansive development around multimedia security, both internationally and in France. At the French level, there are dozens of research teams in laboratories, but also a large number of industrial players, who focus their activities on these aspects. This activity can also be found in several GdR (research groups) of the CNRS, in particular the GdR ISIS (information, signal, image and vision) and the GdR Computer Security.

Multimedia security is a relatively new theme, as evidenced by the publication dates of the articles referenced in the various chapters of these two volumes: out of about 900 references, nearly 50% are less than 10 years old, and more than 35% are between 10 and 20 years old. Of course, let us not forget certain authors, such as Auguste Kerckhoffs (1835–1903) and Claude Shannon (1916–2001), without whom our community would not have advanced in the same way.

The history of multimedia security really begins at the end of the 1990s, with the tentative beginnings of watermarking and steganography, motivated by the digitization of content and the protection of rights holders. In 2001, spurred by the September 11 attacks, research in steganalysis (the detection of hidden signals) and statistical detection became a top priority. Between 2000 and 2010, there was an international explosion in watermarking security, along with major contributions in steganography and steganalysis. During this same decade, research into securing multimedia data by specific encryption was born, with selective or partial encryption and crypto-compression, while guaranteeing the preservation of international formats and standards. From 2010, new facets of multimedia data security emerged, with forensics and statistical approaches, as well as strong developments in signal processing in the encrypted domain and in traitor tracing. Since 2020, research in forensics and steganalysis has been gaining momentum with the rise of machine learning, and especially the exploitation and development of deep convolutional neural networks. Recent advances vary greatly, from GAN-based steganography, adversarial methods and content-generation methods to the processing of encrypted content, including the links between learning and information leakage, applications in biometrics and "real-life" content analysis.

This project began more than two years ago and has meant a great deal to me. At the French level, we have real strength in this field, and numerous gems that we have brought to light. Nothing could have been achieved without the support of the GdR ISIS and the GdR Computer Security; it is largely thanks to these GdR that we have succeeded in tracking research activities in the field of multimedia security from a French point of view. The cities represented in these two volumes (Caen, Grenoble, La Rochelle, Lille, Limoges, Lyon, Montpellier, Paris, Poitiers, Rennes, Saint-Étienne and Troyes) illustrate the richness and diversity of the national landscape; some of them, as we will see, are represented by several laboratories and/or universities.

As we will be able to see throughout these two volumes, even if they are grouped around multimedia security, the research themes are very broad and the applications varied. In addition, the fields cover a broad spectrum, from signal processing to cryptography, including image processing, information theory, encoding and compression. Many of the topics in multimedia security are a game of cat and mouse, where the defender of rights must regularly transform into a counter-attacker in order to resist the attacker.

The first volume primarily focuses on the authentication of multimedia data, codes and the embedding of hidden data, from the side of the defender as well as that of the attacker. Concerning the embedding of hidden data, it also addresses the aspects of invisibility, color, tracing and 3D data, as well as the detection of hidden messages in images by steganalysis. The second volume mainly focuses on the biometrics, protection, integrity and encryption of multimedia data. It covers aspects such as image and video crypto-compression, homomorphic encryption, the embedding of hidden data in the encrypted domain, as well as the sharing of secrets. I invite readers, whether students, teachers, researchers or industry professionals, to immerse themselves in these works, not necessarily by following the intended order, but by going from one chapter to another, as well as from one volume to the other.

These two volumes, even though they cover a broad spectrum in multimedia security, are not meant to be exhaustive. I think, and hope, that a third volume will complete these first two. In fact, I am thinking of sound (music and speech), video surveillance/video protection, camera authentication, privacy protection, as well as the attacks and counter-attacks that we see every day.

I would like to thank all of the authors, the chapter leads, their co-authors, their collaborators and their teams for all of their hard work. I am very sorry that I had to ask them many times to find the best compromises between timing, content and length of the chapters. Thank you to Jean-Michel, Laurent, Philippe (×2), Patrick (×2), Teddy, Sébastien (×2), Christophe, Iuliia, Petra, Vincent, Wassim, Caroline and Pauline! Thank you all for your openness and good humor! I would also like to thank the GdR ISIS and the GdR Computer Security, through Gildas and Cédric, but also Christine and Laure, for their proofreading, as well as for establishing a connection with ISTE Ltd. I would also like to thank all of the close collaborators with whom I have worked for more than 25 years on the various themes that I have had the chance to address. PhD students, engineers, interns and colleagues, all of them will recognize themselves, whether in my research team (ICAR team) or in my research laboratory (LIRMM, Université de Montpellier, CNRS).

In particular, I would like to thank Vincent, Iuliia, Sébastien and Pauline for having agreed to embark on this adventure. Pauline, in addition to writing certain chapters, has been a tremendous collaborator in the advancement of this book. As all of the chapter leads have seen, Pauline has been at my side over the past two years to ensure that these two volumes could see the light of day in 2021. Thank you, Pauline! To conclude, I would like to warmly thank all of the members of my family, in particular Magali and our three children, Carla, Loriane and Julian, whom I love very much and who have constantly supported me.

November 2021

1 Biometrics and Applications

Christophe CHARRIER1, Christophe ROSENBERGER1 and Amine NAIT-ALI2

1GREYC, Normandy University, University of Caen, ENSICAEN, CNRS, France

2LISSI, University of Paris-Est Créteil Val de Marne, France

Biometrics is a technology that is now common in our daily lives. It is notably used to secure access to smartphones or computers. This chapter aims to provide readers with an overview of this technology, its history and the solutions provided by research on societal and scientific issues.

1.1. Introduction

There are three generic ways to verify or determine an individual’s identity: (1) what we know (PIN, password, etc.); (2) what we have (badge, smart card, etc.); and (3) what we are (fingerprint, face, etc.) or what we know how to do (keystroke dynamics, gait, etc.). Biometrics is concerned with this last set of approaches. Biometrics, and more precisely security biometrics, consists of verifying or determining the identity of an individual based on their morphological characteristics (such as fingerprints), behavioral characteristics (such as voice) or biological characteristics (such as DNA).

The biometric features by which an individual’s identity can be verified are called biometric modalities. Examples of some biometric modalities are shown in Figure 1.1. These modalities are based on the analysis of individual data, and are generally grouped into three categories: biological, behavioral and morphological biometrics. Biological biometrics is based on the analysis of biological data related to the individual (saliva, DNA, etc.). Behavioral biometrics concerns the analysis of an individual’s behavior (gait, keyboard dynamics, etc.). Morphological biometrics relates to particular physical traits that are permanent and unique to any individual (fingerprints, face, etc.).

Figure 1.1. Examples of biometric modalities used to verify or determine the identity of an individual

Nowadays, the use of facial or fingerprint recognition has come to feel natural to many people, notably among the younger generations. Biometric technology is part of our everyday lives (used for border control, smartphones, e-payment, etc.). Figure 1.2 shows the spectacular evolution and market prospects of this technology. In an increasingly digital world, biometrics can be used to verify the identity of an individual using a digital service (social network or e-commerce). While fingerprints and facial or iris recognition are among the most well-known biometric modalities (notably due to their use in television series or movies), a very wide range of biometric data can be captured from an individual’s body or from digital traces. An individual can be recognized in the physical and digital worlds using information from both spheres.

The use of this technology raises a number of questions: how new is this technology? How does a biometric system work? What are the main areas of current and future research? These questions will be addressed in the three main sections of this chapter: the history of biometrics (section 1.2), the technological foundations of biometrics (section 1.3) and the scientific issues and perspectives (section 1.4).

Figure 1.2. Evolution and perspectives of the biometrics market (source: Biometric System Market, October 2019)

1.2. History of biometrics

Biometrics may be as old as humanity itself. In essence, biometrics relates to a measurement that can be performed on living things, and in a security context, it refers to the recognition of individuals by their physical and/or behavioral characteristics. This property of recognition is primarily human based, and not dependent on technology. As humans, we recognize one another through aspects such as facial features, hands or gait; the human brain has the capacity to distinguish, compare and, consequently, recognize individuals. In reality, biometrics – as we now understand it – is simply a technological replication of what the human brain can do. Key aims include speed, reproducibility, precision and memorization of information for populations of theoretically infinite size (Nait-Ali and Fournier 2012).

From the literature, we find that biometrics began to be conceptualized several centuries BC, notably in the Babylonian civilization, where clay tablets used for trading purposes have been found to contain fingerprints. Similarly, fingerprinted seals appear to have been used in ancient China and ancient Egypt. It was not until the 14th century, however, that a Persian book, entitled Jaamehol-Tawarikh, mentioned the use of fingerprints for individual identification. Other later publications concerning the fingerprint and its characteristics include the work of N. Grew (1684) and M. Malpighi (1686), and a book published in 1788 in which the anatomist J. Mayer highlighted the unique nature of papillary traces.

It was only during the industrial revolution, notably in the mid-19th century, that the ability to clearly identify individuals became crucial, particularly due to an intensification of population mobility as a result of the development of commercial exchanges. The first true identification procedures were established in 1858, when William Herschel (working for the Indian Civil Service at the time) first used and included palm prints, then fingerprints, in the administrative files of employees (see Figure 1.3). Later, several medical scientists, anthropologists and statisticians, including Henry Faulds, Francis Galton and Juan Vucetich, developed their own studies of fingerprints. Vucetich was even responsible for the first instance of criminal identification using this technique, which took place in Argentina in 1892 (the Francisca Rojas case).

Figure 1.3. a) William James Herschel (1833–1917) and b) example of palm and finger prints (source: public domain)

A further turning point in biometrics occurred in the 1870s when Alphonse Bertillon, a French police officer, began to implement anthropometric techniques which came to be known as the Bertillon System, or “bertillonnage”. Broadly speaking, this involved taking multiple measurements of the human body, including the face and hands. By combining these measurements with a photograph of the person and other physical descriptions (see Figure 1.4), Bertillon developed files which could be used to identify criminals and delinquents, even if they were disguised or using a false identity (see Figure 1.5). The first criminal identification using this technique in France occurred in 1902: Henri Léon Scheffer was identified by matching fingerprints taken from a crime scene with the information on his anthropological documents. At this time, the Bertillon system was used to a greater or lesser extent in many countries around the world.

Some 30 years later (1936), an ophthalmologist, Frank Burch, introduced the concept of identifying individuals by iris characteristics, although Burch did not develop this idea into an identification system. Biometrics as we now understand it began to take shape in the 1960s, drawing on technological advances in electronics, computing and data processing. The first semi-automatic facial recognition system was developed by the American Woodrow W. Bledsoe (Bledsoe and Chan 1965). The system consisted of manually recording the coordinates of characteristic points of the face from a photograph; these coordinates were then stored in a database and processed by computer, calculating distances with respect to reference points. Around the same time, the first model of the acoustic speech signal was proposed by Gunnar Fant, in Sweden, laying the foundations for speech recognition. The first automatic biometric systems began to appear in the 1970s. Notable examples include a system for recognizing individuals by hand shape (1974), a system for extracting minutiae from fingerprints (FBI, 1975), a facial recognition system (Texas Instruments, 1976), a patent for a system for extracting signature characteristics for individual verification (1977), a patent for an individual verification system using 3D features of the hand (David Sidlauskas, 1985), a patent for the concept of recognizing individuals by the vascular network features at the back of the eye (Joseph Rice, 1995) and a patent for the concept of identifying individuals by characteristics of the iris (Leonard Flom and Aran Safir, 1986); the algorithm for this final system was later patented by John Daugman in 1994.

Figure 1.4. Plate taken from the Identification Anthropométrique journal (1893). a) Criminal types. b) Anthropometric file

Figure 1.5. Example of an anthropometric file using the Bertillon system (source: public domain)

The 1980s–1990s also saw an upsurge in activity with respect to facial recognition, notably with the application of principal component analysis (PCA) techniques by Kirby and Sirovich in 1988 (Kirby and Sirovich 1990), then the introduction of Eigenfaces by Turk and Pentland (1991). Turk and Pentland’s paper was well received by the biometrics community, and has been cited over 18,500 times at the time of writing (2020). The authors demonstrated facial recognition using a limited number of parameters (compared to the number of pixels in a digital image), permitting the use of real-time applications. The performance of this method was quickly surpassed in the 2000s by a wide range of new data-processing approaches, and thanks to developments in computer science and electronics, an accelerating factor in the design of biometric systems. Following on from early uses for security projects, including industrial, military and governmental applications, biometrics has gradually gained ground in the field of commercial products and services. For example, fingerprint authentication (e.g. Touch-ID) was first integrated into smartphones in 2013, followed by facial recognition (e.g. Face-ID) in 2017. Research and development in this area is currently booming, and biometrics research, applications and modalities continue to expand at a rapid pace. The socioeconomic implications of the technology are likely to prove decisive in the coming decades; the story of biometrics is far from over.

1.3. The foundations of biometrics

In this section, we shall present key foundational elements involved in biometrics and highlight the scientific issues at play in this domain.

1.3.1. Uses of biometrics

Before going into detail concerning the operation of biometrics, it is interesting to consider its applications. The first objective of biometrics is identity verification, that is, to provide proof to corroborate an assertion of the type “I am Mr X”. A facial photograph or fingerprint acts in a similar way to a password; the system compares the image with a pre-recorded reference to ensure that the user is who they claim to be. The second application of biometrics concerns the identification of individuals in cases where their collaboration is not generally required (e.g. facial recognition based on video surveillance footage). Finally, biometrics is often used to secure access to places or tools (premises, smartphones and computers), for border control (automated border crossing systems), by police services (identity control) or for payment security (notably on smartphones), as shown in Figure 1.6.

Figure 1.6. Some applications of biometrics (physical access control, social networks)

1.3.2. Definitions

In order to recognize or identify an individual k, reference information Rk must be collected for the individual during an initial enrollment phase. During the authentication/identification phase, a new sample is captured, denoted as E. A biometric system will compare sample E to Rk in an attempt to authenticate an individual k, or to multiple references in a biometric database in cases of identification. A decision is then made (is this the right person?) by comparing the comparison score (in this case, taken as a distance) to a pre-defined threshold T:
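accept the claimed identity if d(E, Rk) ≤ T, and reject it otherwise, where d(E, Rk) denotes the comparison distance between the sample E and the reference Rk.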

The threshold T is defined by the application. In the case of distance, the lower the threshold, the stricter the system is, because it requires a small distance between the sample and the individual’s reference as proof of identity. A strict (high security) threshold will result in false rejections of legitimate users (measured by the FRR, false rejection rate). A looser threshold will result in an increased possibility of imposture (measured by the FAR, false acceptance rate). To set the threshold T for a given application, we consider the maximum permissible FAR for the system; the FRR results from this choice. As an example, consider a high security setting with an acceptable FAR rate of one in a million attempts. In this context, we expect an FRR of less than 2%. The equal error rate (EER) is the error obtained when the threshold is set so that the FRR is equal to the FAR. The EER is often used as an indicator of the performance of a biometric system, although using the associated threshold to parameterize a system is not of any particular practical use; it is simply easier to understand the performance of a system on the basis of a single EER value.
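To make these definitions concrete, here is a minimal Python sketch (an illustration, not code from the book; the toy score distributions are assumed) that computes the FAR and FRR at a given threshold and approximates the EER by sweeping the threshold:

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """With distance scores, a comparison is accepted when the distance
    is <= threshold. FRR: share of genuine attempts rejected;
    FAR: share of impostor attempts accepted."""
    frr = float(np.mean(np.asarray(genuine) > threshold))
    far = float(np.mean(np.asarray(impostor) <= threshold))
    return far, frr

def approximate_eer(genuine, impostor, n_steps=1000):
    """Sweep the threshold and return the operating point where
    FAR and FRR are closest (the equal error rate, EER)."""
    all_scores = np.concatenate([genuine, impostor])
    thresholds = np.linspace(all_scores.min(), all_scores.max(), n_steps)
    best_t = min(thresholds,
                 key=lambda t: abs(np.subtract(*far_frr(genuine, impostor, t))))
    far, frr = far_frr(genuine, impostor, best_t)
    return (far + frr) / 2.0, best_t

# Hypothetical toy scores: genuine comparisons tend to give small
# distances, impostor comparisons larger ones.
rng = np.random.default_rng(0)
genuine = rng.normal(0.3, 0.1, 1000)
impostor = rng.normal(0.7, 0.1, 1000)
eer, t = approximate_eer(genuine, impostor)
print(f"EER ~ {eer:.3f} at threshold T = {t:.3f}")
```

Lowering the threshold below this operating point trades a higher FRR for a lower FAR, which is exactly the compromise made when configuring a high-security system.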

1.3.3. Biometric modalities

There are three main groups of biometric modalities (types of biometric information): morphology (a part of the person’s body, such as the face or the iris), behavior (an individual action, such as the voice or the way of signing) and biology (such as DNA). The first two categories are the most widespread in transactional contexts due to processing time limitations. These three categories of biometric modalities are illustrated in Figure 1.7, represented by DNA, signature dynamics and fingerprints.

Figure 1.7. Illustrations of the three categories of biometric modalities: DNA, signature dynamics and fingerprints

Almost any morphological or behavioral characteristic may be considered as a biometric characteristic, as long as it satisfies the following properties (Prabhakar et al. 2003):

– universality: all people to be identified must possess the characteristic;

– uniqueness: the information should be as different as possible from one person to the next;

– permanence: the collected information must remain present throughout the individual’s lifetime;

– collectability: it must be possible to collect and measure the information in order to permit comparison;

– acceptability: the system must respect certain criteria (ease of acquisition, rapidity, etc.) in order to permit use.

Table 1.1. Comparison of biometric modalities based on the following properties: (U) universality, (N) uniqueness, (P) permanence, (C) collectability, (A) acceptability and (E) performance. For performance, the number of stars is linked to the value of the equal error rate (EER) obtained in the state of the art (source: Mahier et al. (2008))

Modality        | U   | N   | P   | C   | A   | E
DNA             | Yes | Yes | Yes | Low | Low | *****
Blood           | Yes | No  | Yes | Low | No  | *
Gait            | Yes | No  | Low | Yes | Yes | ***
Typing dynamics | Yes | Yes | Low | Yes | Yes | ****
Voice           | Yes | Yes | Low | Yes | Yes | ****
Iris            | Yes | Yes | Yes | Yes | Low | *****
Retina          | Yes | Yes | Yes | Yes | Low | *****
Face            | Yes | No  | Low | Yes | Yes | ****
Hand geometry   | Yes | No  | Yes | Yes | Yes | ****
Veins on hand   | Yes | Yes | Yes | Yes | Yes | *****
Ear             | Yes | Yes | Yes | Yes | Yes | *****
Fingerprint     | Yes | Yes | Yes | Yes | Yes | ****

Not all biometric features possess these properties, or they may possess them to different degrees. Table 1.1, taken from Mahier et al. (2008), compares the main biometric modalities according to the properties listed above. As we see from this table, no characteristic is ideal; different modalities may be more or less suitable for particular applications. For example, DNA-based analysis is one of the most effective techniques for verifying an individual’s identity or for identification (Stolovitzky et al. 2002). However, it cannot be used for logical or physical access control, due both to the computation time and to the fact that nobody would be willing to provide a sample of their blood for verification purposes. The choice of modality is thus based on a compromise between some or all of these properties according to the needs of each application. Note that the choice of biometric modality may also depend on local cultures. In Asia, methods requiring physical contact, such as fingerprints, are not widely accepted for hygiene reasons; contactless methods are more widespread, and more readily accepted, in this setting.

1.4. Scientific issues

Biometrics is a rapidly evolving field as new operational applications emerge in our daily lives (e.g. unlocking smartphones via facial recognition). Several scientific issues relating to biometrics, resulting from the new needs of this technology, are discussed below.

1.4.1. Presentation attacks

There are many ways of attacking a biometric system (Ratha et al. 2001). An attacker may alter the storage of biometric credentials (e.g. replace a user’s biometric credentials in order to spoof the system), or replace a sub-module, such as the decision module, so that it returns a positive response to any attempt. In this section, we shall focus on presentation attacks, which consist of presenting the capture subsystem with biometric data intended to alter the operation of the biometric system. This type of attack can be quite easy to perform, for example by presenting a photo of the user’s face printed on paper. Impostors may also present biometric systems with falsified biometric data (e.g. a gelatin fingerprint), with or without the participation of the individual concerned. One particularly active area of research concerns the development of hardware or software mechanisms to detect this type of attack (Galbally et al. 2019).

The most common attack of this type is carried out on facial recognition systems. Facial recognition technology has come on in leaps and bounds since its invention in the 1970s, and is now the most “natural” of all biometric measures. By the same token, it has become a major focus for hackers. For example, Grigory Bakunov has developed a solution that can confuse facial recognition devices, by designing an algorithm that creates specific makeup arrangements to fool facial recognition software (see Figure 1.8(a)).

In late 2017, a Vietnamese company successfully bypassed the Face ID facial recognition feature of Apple’s iPhone X using a mask (see Figure 1.8(b)).

At the same time, researchers at a German company developed an attack technique to bypass Windows 10 Hello facial authentication. A key element of the attack appears to be taking a picture of the authenticated user with a near-infrared (IR) camera, since Windows Hello uses infrared imaging to unlock Windows devices (see Figure 1.8(c)).

In May 2018, Forbes magazine reported that researchers at the University of Toronto (Canada) had developed an algorithm (privacy filter) that confuses facial recognition software. The software changes the value of specific pixels in the image posted online. These changes, imperceptible to the human visual system (HVS), confuse the recognition algorithms.
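This kind of privacy filter is closely related to adversarial perturbations. As a rough sketch of the underlying idea (a generic sign-gradient perturbation in the spirit of FGSM, not the Toronto team's actual method; the loss gradient is assumed to be supplied by the attacked model):

```python
import numpy as np

def privacy_perturbation(image, loss_gradient, epsilon=2 / 255):
    """Shift each pixel by a small, imperceptible amount in the direction
    that increases the recognizer's loss (sign-gradient, FGSM-style).
    image: float array in [0, 1]; loss_gradient: same shape as image."""
    adversarial = image + epsilon * np.sign(loss_gradient)
    return np.clip(adversarial, 0.0, 1.0)
```

With epsilon this small, the modified image is visually indistinguishable from the original to the HVS, yet the accumulated pixel changes can push the image across the recognizer's decision boundary.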

Figure 1.8. Examples of techniques used to hack facial recognition systems

One response to these types of attack is to use video rather than still images (Matta and Dugelay 2009). Some operators use interviews, via video conferencing software, to authenticate a person. Unfortunately, new attacks have already been developed for video authentication, and we can expect these attacks to become more sophisticated in the years to come. Video streams can now be manipulated in real time to show the facial reactivity of a counterfeiter on top of another person’s face (Thies et al. 2016), or through face swapping (Bitouk et al. 2008).

Numerous works have been published on this subject, mostly by researchers in the image forensics community (Redi et al. 2011; Yeap et al. 2018; Roy et al. 2020); the main approach involves looking for abnormalities in images or streams to identify locations where manipulations have occurred. Modifications are detected on the basis of inconsistencies or estimated abnormalities at image points, inconsistencies in sensor noise, recompressions, internal or external copy-paste operations and inconsistencies in terms of illumination or contours. Several technological challenges have been launched by DARPA, IEEE and NIST in the United States and by the DGA in France (including the DEFALS challenge, with participation from EURECOM, UTT and SURYS) to measure the effectiveness of this type of method. It should be noted that significant progress has recently been made thanks to deep learning techniques. Passive detection can also draw on knowledge of the particularities of attacks, such as what is known to happen during morphing between two images (Raghavendra et al. 2016), or on the history of operations applied to the images in question (Ramachandra and Busch 2017).

However, the effectiveness of these countermeasures is beginning to be undermined by advances in deep learning-based inpainting technologies, which create highly credible computer-generated images in real time, using just a few photos of the person whose identity is being spoofed and a video stream of the spoofer responding (potentially) to all of the requests of the preceding tests. Nonetheless, face spoofing can still be detected in video streams by focusing on known features of the processed images, such as specific 3D characteristics of a face (Galbally et al. 2014). Evidently, more work is urgently needed in this area.

1.4.2. Acquisition of new biometric data or hidden biometrics

The objective here is to collect known biometric data by new capture methods (3D, multi-spectral (Venkatesh et al. 2019) and motion (Buriro et al. 2019)), or capture new biometric information (for example, the electrical signal from an individual’s body (Khorshid et al. 2020)). The goal is to propose new information which offers improved individual recognition, or which has a greater capacity to detect presentation attacks.

Elsewhere, considerable efforts have been made in recent years in exploring a specific form of biometrics, known as hidden biometrics. The principle consists of identifying or verifying people on the basis of physical characteristics, which are not accessible by traditional techniques, or which are not directly observable or perceivable by humans. This property makes systems particularly robust to attacks.

Hidden biometrics also concerns features that vary over time, that cannot be quantified at a given moment, and which can only be predicted (e.g. variations resulting from aging) or recovered (e.g. by rejuvenation). In this case, we speak of forward or backward prediction.

Certain modalities used in hidden biometrics rely on technologies developed in the fields of medicine or forensic science, particularly for data acquisition. Examples include the use of electrocardiograms (ECG), electroencephalograms (EEG) or electromyograms (EMG), involving a variety of imaging techniques (infrared, thermal, ultrasound, etc.) (Nait-Ali 2019a, 2019b).

In this section, we shall focus on three modalities used in hidden biometrics, namely human brain biometrics, hand biometrics and digital facial aging/rejuvenation.

In 2011, researchers showed that a biological signature, the “Braincode”, can be obtained from the human brain and used to distinguish between individuals. Both 2D and 3D processing approaches have been explored, using images obtained by magnetic resonance imaging (MRI). In the 2D approach, one idea is to extract biometric features from a single specific axial slice, as shown in Figure 1.9. By defining a region of interest (ROI) in the form of a crown and using an algorithm similar to the one used in iris biometrics, a recognition rate of around 98.25% can be achieved. In the 3D approach, the whole volume image obtained by MRI is explored in order to extract the Braincode. In Aloui et al. (2018), the envelope of the brain was estimated, highlighting the structure of the convolutions, as shown in Figure 1.10.

Figure 1.9. Brain biometry via MRI. a) Determination of a region of interest (ROI) from an axial slice. b) Extraction of “brainprint” characteristics using a similar approach to iris biometrics
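To give a concrete idea of this kind of processing, here is a minimal Python sketch (an illustration only, not the published algorithm; slice_2d, center and the two radii are assumed inputs) that unwraps a crown-shaped ROI into a rectangular strip, in the spirit of iris normalization:

```python
import numpy as np

def unwrap_crown_roi(slice_2d, center, r_inner, r_outer,
                     n_radii=32, n_angles=256):
    """Sample an annular ("crown") ROI of a 2D slice along concentric
    circles and unwrap it into an (n_radii, n_angles) rectangular strip,
    in the spirit of iris normalization."""
    cy, cx = center
    radii = np.linspace(r_inner, r_outer, n_radii)[:, None]    # (n_radii, 1)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles,
                         endpoint=False)[None, :]               # (1, n_angles)
    ys = np.clip(np.round(cy + radii * np.sin(angles)).astype(int),
                 0, slice_2d.shape[0] - 1)
    xs = np.clip(np.round(cx + radii * np.cos(angles)).astype(int),
                 0, slice_2d.shape[1] - 1)
    return slice_2d[ys, xs]

# Features (e.g. a binary code) would then be extracted from the strip,
# for instance by filtering and thresholding, before comparison.
```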

While this modality cannot currently be used for practical applications, notably due to its technical complexity, cost and low level of user acceptability, future uses are not to be excluded.

Figure 1.10. Hidden brain biometrics: extraction of a brainprint from MRI images of the brain. a) Curvilinear envelopes, estimated using one brain at three different depths (10 voxels – 1 cm). b) 2D projection of the estimated envelopes

Palm biometrics in the visible or infrared range (vein biometrics) is potentially vulnerable to attack. One reason for this relates to the superficial nature of the features extracted from the region of interest.

Technically, this risk can be considerably reduced by using a modality based on X-ray imaging. In this context, experiments have been carried out on many samples; researchers have shown that a biometric signature can be extracted by modeling the phalanges of the hand (see Figure 1.11 (Kabbara et al. 2013, 2015; Nait-Ali 2019a)).

In the algorithm in question, the image is segmented in order to highlight all of the phalanges. Each phalanx is then modeled using a number of parameters, which are then concatenated to create a biometric signature. Evidently, this approach raises questions concerning the impact of X-rays on user health. The study in question took the recommendations of the National Council on Radiation Protection and Measurements (NCRP) into account, limiting the radiation dose of the systems to 0.1 μSv/scan to ensure user safety.

1.4.3. Quality of biometric data

The quality of biometric data is not always easy to estimate. While quality metrics have been established for morphological modalities such as fingerprints (Yao et al. 2016b), much work is still needed in the case of behavioral modalities.

Work carried out in recent years has highlighted the importance of sample quality for recognition systems or comparison algorithms. The performance of a biometric system depends, to a great extent, on the quality of the sample image. Over the last decade, many research works have focused on defining biometric data quality metrics for the face (Nasrollahi and Moeslund 2008; Wasnik et al. 2017), vein networks (Qin and El Yacoubi 2017) and, especially, fingerprints (Tabassi et al. 2011; Yao et al. 2015a; Liu et al. 2016).

Figure 1.11. Hidden palmar biometrics. a) Imaging in the visible domain. b) X-ray imaging is more robust against attacks. Once the phalanges have been modeled, the biometric signature can be extracted

The development of a quality measurement for biometric data centers on an objective demonstration of the superiority of one indicator over others. In the case of image quality, the aim is to develop an algorithm that assigns quality ratings that correlate well with human judgment; in biometrics, a quality measurement must combine elements of image quality with elements relating to the quality of the extracted biometric characteristics, ensuring that a system will perform well. In this case, the working framework is different and the ground truth is not fully known, which can prove problematic.

Yao et al. (2015a) have proposed a methodology for quantifying the performance of a quality metric for biometric data. Their approach is generic, and can be applied to any modality. The method estimates the proximity of a metric to an optimal judgment.

1.4.3.1. Relevance of a quality metric

The principle of the proposed method consists of evaluating the relevance of a metric for a user enrollment task using a database of biometric samples from several users. In this case, a heuristic is needed to designate the user’s reference sample. Once this choice has been made, all legitimate scores in the database are calculated by comparing the samples with the reference of each user. The same is done for imposture scores, by comparing a reference with the samples of all other individuals in the database. These scores are used to compute the FRR and FAR for different values of the decision threshold. These values are then used to calculate the DET curve (evolution of the quantity of false rejections as a function of false acceptances), the EER and the area under the DET curve (AUC).
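As a rough illustration of this evaluation pipeline (a sketch under assumed inputs, not the authors' implementation), the following Python code computes the area under the DET curve for a given choice of reference samples:

```python
import numpy as np

def det_auc(samples, references, compare):
    """samples: dict mapping each user to a list of their samples.
    references: dict mapping each user to the index of the chosen
    reference sample. compare: distance function between two samples.
    Returns the area under the DET curve (FRR as a function of FAR)."""
    genuine, impostor = [], []
    for user, user_samples in samples.items():
        ref = user_samples[references[user]]
        # Legitimate scores: the user's other samples vs. their reference.
        genuine += [compare(ref, s) for i, s in enumerate(user_samples)
                    if i != references[user]]
        # Imposture scores: every other user's samples vs. this reference.
        impostor += [compare(ref, s) for other, ss in samples.items()
                     if other != user for s in ss]
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor <= t).mean() for t in thresholds])
    frr = np.array([(genuine > t).mean() for t in thresholds])
    order = np.argsort(far)
    return float(np.trapz(frr[order], far[order]))

# Hypothetical usage with Euclidean distance on feature vectors:
# auc = det_auc(samples, {u: 0 for u in samples},           # "first sample"
#               lambda a, b: float(np.linalg.norm(a - b)))  # heuristic
```

The heuristics discussed below then simply correspond to different ways of filling in the reference choices.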

Two co-existing strategies may be used to choose the reference sample for a user:

1) choice of the first sample as the point of reference for an individual. This approach is widespread, and is considered as the default option (see Figure 1.12(a));

2) choice of a reference based on a heuristic (see Figure 1.12(b)).

The heuristic may be based on a measurement of sample quality. In this case, the sample with the highest quality is selected as the reference sample for the user.

Another option is to use a heuristic based on the minimum AUC value. This comes down to determining the optimal choice of a reference sample with respect to the performance of the biometric system in question (lowest AUC).

A further alternative is to choose the sample which results in the highest value of the AUC.

Figure 1.13 shows the performances obtained on a biometric database for different reference choice heuristics. The DET curve using the worst sample as a reference is shown in black, and the DET curve using the best sample is shown in green. We see that the choice of reference results in system performances with an AUC of between 0.0352 and 0.2338. Using two metrics, we obtain performances of 0.0991 (blue) and 0.0788 (red). Metric 1 (blue curve) is thus considered less efficient than metric 2 (red curve). This demonstrates the room for improvement in sample quality measurements.

1.4.3.2. Metric behavior

Twelve biometric databases from the FVC competition (Maltoni et al. 2009) were used to study the behavior of metrics: FVC 2000 (DB1, DB2, DB3, DB4), FVC 2002 (DB1, DB2, DB3, DB4) and FVC 2004 (DB1, DB2, DB3, DB4). Five additional synthetic fingerprint databases of different qualities were also generated using SFINGE (Cappelli et al. 2004): SFINGE0 (containing fingerprints of varying quality), SFINGEA (excellent quality), SFINGEB (good quality), SFINGEC (average quality) and SFINGED (poor quality).

Figure 1.12. Examples of methods used in selecting enrollment samples

Figure 1.13. Representation of performance as a function of reference choice: worst choice (black), best choice (green), choice using metric 1 (blue) and choice using metric 2 (red)

Seven current fingerprint quality metrics were tested:

1) NFIQ: this metric classifies fingerprint images by five quality levels, based on a neural network (Tabassi et al. 2011). This metric has served as the industry standard for the past 15 years, and is included in all commercial biometric systems.

2) NFIQ 2.0: Olsen et al. (2013) trained a two-layer self-organizing map (SOM neural network) to obtain a SOM unit activation histogram. The trained characteristic is then input into a random forest in order to estimate genuine matching scores. NFIQ 2.0 is the new ISO standard for measuring fingerprint quality.

3) OCL: Lim et al. (2002) developed a quality measure based on a weighted combination of local and global quality scores, estimated as a function of several characteristics, such as the orientation certainty level.

4) QMF