Screen Damage

Michel Desmurget
Description

All forms of recreational digital consumption - whether on smartphones, tablets, game consoles or TVs - have skyrocketed in the younger generations. From the age of 2, children in the West clock up more than 2.5 hours of screen time a day; by the time they reach 13, it's more than 7 hours a day. Added up over the first 18 years of life, this is the equivalent of almost 30 school years, or 15 years of full-time employment. Most media experts do not seem overly concerned about this situation: children are adaptable, they say, they are 'digital natives', their brains have changed and screens make them smarter. But other specialists - including some paediatricians, psychiatrists, teachers and speech therapists - dispute these claims, and many parents worry about the long-term consequences of their children's intensive exposure to screens. Michel Desmurget, a leading neuroscientist, has carefully weighed up the scientific evidence concerning the impact of the digital activities of our children and adolescents, and his assessment does not make for happy reading: he shows that these activities have significant detrimental consequences in terms of the health, behaviour and intellectual abilities of young people, and strongly affect their academic outcomes. A wake-up call for anyone concerned about the long-term impacts of our children's over-exposure to screens.




Table of Contents

Cover

Title Page

Copyright Page

Epigraph

Introduction: who should we believe?

Notes

Part One

Digital natives: building a myth

‘A different generation’

No convincing evidence

A surprising technical ineptitude

Political and commercial interests

‘A more developed brain’

A pleasing fiction

Dubious shortcuts

In conclusion

Notes

Part Two

Uses: an incredible frenzy of screens for recreation

Estimates that are necessarily approximate

Childhood: exposure

Getting your foot in the stirrup: 0–1 years

The first level: 2–8 years

Pre-adolescence: amplification

Adolescence: submersion

Family environment: aggravating factors

Limiting access and setting an example

Making rules is effective!

Reorienting activities

What are the limits to the use of screens?

Whether you’re an addict or not, enough is enough

The importance of age

No screens for recreation before (at least) 6 years

Above the age of 6 years, less than an hour a day

In conclusion

Notes

Part Three

Impacts: chronicles of a disaster foretold

Notes

1. Preamble: multiple and intricate impacts

Notes

2. Academic success: a powerful prejudice

Home screens mean poorer academic performance

The more screen time increases, the more marks drop

A broad and long-standing consensus on television

There is also no doubt about video games

Ditto for the smartphone

A usage effect for computers and social networks

And in the end, it’s always the dumbing-down use that wins

Contradictory data?

An inevitable statistical variability

When the buzz trumps information

‘Digital entertainment does not affect school performance’

‘Playing video games improves school results’

One study among many?

Unreliable data

The wonderful world of digital tools at school

What are we talking about?

Disappointing results, to say the least

Above all, a source of distraction

A logic that is more economic than educational

Classes without teachers?

The Internet, or the illusion of available knowledge

In conclusion

Notes

3. Development: a damaging environment

Amputated human interactions

A human being is not the same ‘on video’ as ‘in real life’

More screens means less communication and sharing

A mutilated language

Early influences

A clearly identified causality

The sad illusion of ‘educational’ programmes

After the early years, reading is a sine qua non

Fighting dyslexia with video games

Optimized visual attention (and other alleged virtues of action video games)

Gamers more creative?

Players better equipped to work in groups?

Gamers more attentive and faster?

Players with better focus?

A ruined capacity for concentration

Overwhelming evidence

Learning to disperse attention

Multitasking

Making inattention a core aspect of the brain

In conclusion

Notes

4. Health: a silent aggression

Harsh impact on sleep

While we sleep, the brain is working

Health, emotions, intelligence: sleep controls everything

Sleeping less and less well because of screens

A ‘minor’ impact?

A devastating sedentary lifestyle

Sitting down damages your health

Immobility threatens development

Unconscious but deep influences

Memory: a machine for creating links

Behaviour: the weight of unconscious representations

Selling death in the name of ‘culture’

Smoking

Alcoholism

Obesity

The impact of norms

At the origins of the middle class

Negative body images

A profound impact on sexual representations

Violence

A long-standing debate

What the data say

The war of correlations

When justice steps in

In conclusion

Notes

Epilogue: a very old brain for a brave new world

What are the points to remember?

What should be done?

Seven fundamental rules

Before 6 years

After 6 years

Fewer screens means more life

A glimmer of hope?

Notes

Index

End User License Agreement

List of Tables

Chapter 4

Table 1. Impact of lack of sleep on the individual. When sleep is chronically impaired, a...

Table 2. Prevalence of smoking scenes in the cinema. All films on the North American mark...

List of Illustrations

Part 1

Figure 1. Time spent on digital technology by pre-teens and adolescents. Top: variability ...

Figure 2. Time spent on digital uses at home for entertainment (recreational) and schoolwo...

Chapter 2

Figure 3. Impact of total screen time on school performance. What is measured is the ...

Figure 4. Impact of digital investments on school performance. The figure considers the re...

Chapter 3

Figure 5. The phenomenon of ‘video deficit’. Children 12 to 30 months old se...

Figure 6. The richness of language is concentrated in the written word. The linguistic com...

Figure 7. Video games and visual attention. Participants stare at the screen. Row 1 ...

Figure 8. Impact of screens on attention. The risk of observing attention deficit disorder...

Chapter 4

Figure 9. Effects of video games and action films on memorization. In the late afternoon, ...

Figure 10. A broad consensus on the impact of violent content. Researchers (in communicatio...


Screen Damage

The Dangers of Digital Media for Children

Michel Desmurget

Translated by Andrew Brown

polity

Copyright Page

Originally published in French as La fabrique du crétin digital © Editions du Seuil, 2019, © Editions du Seuil, 2020 for the abridged and updated version

This English edition © Polity Press, 2023

This book is supported by the Institut français (Royaume-Uni) as part of the Burgess programme.

Polity Press

65 Bridge Street

Cambridge CB2 1UR, UK

Polity Press

111 River Street

Hoboken, NJ 07030, USA

All rights reserved. Except for the quotation of short passages for the purpose of criticism and review, no part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher.

ISBN-13: 978-1-5095-4639-8

ISBN-13: 978-1-5095-4640-4 (paperback)

A catalogue record for this book is available from the British Library.

Library of Congress Control Number: 2022935229

Typeset by Fakenham Prepress Solutions, Fakenham, Norfolk NR21 8NL

The publisher has used its best endeavours to ensure that the URLs for external websites referred to in this book are correct and active at the time of going to press. However, the publisher has no responsibility for the websites and can make no guarantee that a site will remain live or that the content is or will remain appropriate.

Every effort has been made to trace all copyright holders, but if any have been overlooked the publisher will be pleased to include any necessary credits in any subsequent reprint or edition.

For further information on Polity, visit our website: politybooks.com

Epigraph

We must not reassure ourselves by thinking that the barbarians are still far away; for, while there are some peoples who allow the light to be snatched from their hands, there are others who stamp it out with their own feet.

Alexis de Tocqueville, De la Démocratie en Amérique

INTRODUCTION: WHO SHOULD WE BELIEVE?

The truth is there. The only things we invent are lies.

Georges Braque1

The consumption of digital media in every form (smartphones, tablets, television, game consoles, etc.) by the younger generations, for recreational purposes, is astronomical. During their first two years, children in Western countries spend, on average, nearly 50 minutes on screen time every day. Between 2 and 8 years, the figure rises to 2 hours 45 minutes. Between 8 and 12 years, it reaches 4 hours 45 minutes. Between 13 and 18 years, it exceeds 7 hours 15 minutes. Expressed as an annual total, that means more than 1,000 hours for a kindergarten pupil (1.4 months), 1,700 hours for a primary school pupil (2.4 months) and 2,650 hours for a secondary school pupil (3.7 months). Formulated as a percentage of waking time, these values represent, respectively, 20%, 32% and 45%. Added up over the first 18 years of life, this is the equivalent of almost 30 school years or, if you prefer, 15 years of full-time paid employment.
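
As a quick sanity check, the sketch below (Python; an illustrative reconstruction, not taken from the book) rederives these orders of magnitude. The age-bracket lengths and the round figures assumed for a school year (1,000 hours) and a year of full-time work (2,000 hours) are my own assumptions.

    # Back-of-the-envelope check of the screen-time totals quoted above.
    # Bracket lengths (summing to 18 years) and the 1,000-hour school year /
    # 2,000-hour working year are illustrative assumptions, not book figures.
    daily_hours = {              # average recreational screen time per day
        "0-1 years": 50 / 60,
        "2-8 years": 2 + 45 / 60,
        "8-12 years": 4 + 45 / 60,
        "13-18 years": 7 + 15 / 60,
    }
    bracket_years = {"0-1 years": 2, "2-8 years": 6, "8-12 years": 5, "13-18 years": 5}

    for group, h in daily_hours.items():
        print(f"{group}: about {h * 365:,.0f} hours per year")
    # -> roughly 300, 1,000, 1,700 and 2,650 hours, as quoted in the text

    total = sum(daily_hours[g] * 365 * bracket_years[g] for g in daily_hours)
    print(f"first 18 years: about {total:,.0f} hours, i.e. roughly "
          f"{total / 1_000:.0f} school years (1,000 h) or "
          f"{total / 2_000:.0f} working years (2,000 h)")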

Far from being alarmed, many media experts seem to welcome the situation. Psychiatrists, academics, paediatricians, sociologists, consultants, journalists and so on make endless indulgent statements to reassure parents and the public. Times have changed, they say, and the world now belongs to the aptly named ‘digital natives’. The very brains of the members of this post-digital generation have been modified – for the better, obviously. Their brains turn out, we are told, to be faster, more responsive, more suited to parallel processing, more competent at synthesizing immense flows of information, and better adapted to collaborative work. These developments represent an extraordinary opportunity for schools. They provide, it is claimed, a unique opportunity to overhaul education, stimulate pupils’ motivation, fuel their creativity, overcome academic failure and tear down the walls of social inequalities.

Unfortunately, this enthusiasm is far from unanimous. Many specialists denounce the deeply negative influence of current digital usage on development. All the dimensions of our humanity are affected, they point out, from the somatic (e.g. obesity, cardiovascular maturation) to the cognitive (e.g. language, concentration) and the emotional (e.g. aggressiveness, anxiety). All these assaults cannot ultimately leave academic success unscathed. Regarding that, moreover, it appears that the digital practices of the classroom for instructional purposes are not particularly beneficial either, as most of the available impact studies seem to indicate, including the famous international PISA evaluations. The director of this programme recently explained, about the process of digitizing education, that ‘if anything, it makes things worse’.2

In line with these fears, several individuals and institutional players have opted to play safe. In England, for example, several head teachers have threatened to send the police and social services into homes that let their children play violent video games.3 In Taiwan, a country whose schoolchildren are among the most successful in the world,4 there is a law that lays down heavy fines for parents who, on the one hand, expose children under 24 months to any digital application whatsoever and, on the other, do not sufficiently limit the time that 2- to 18-year-olds spend on such activities (the stated objective is not to exceed 30 consecutive minutes).5 In China, the authorities have taken drastic measures to regulate video game use among minors, on the grounds that this negatively impacts on their education.6 Indeed, children and adolescents there are no longer allowed to play during the time slot normally reserved for sleep (10 p.m.–8 a.m.) or to exceed 90 minutes of daily exposure during the week (180 minutes at weekends and during school holidays). In the United States, a number of senior executives in the digital industries, including Steve Jobs, the legendary former boss of Apple, seem very keen to protect their offspring from the various ‘digital tools’ they market.7 It would even seem, says the New York Times, that ‘a dark consensus about screens and kids begins to emerge in Silicon Valley’.8 This consensus is apparently strong enough to go beyond the domestic context; these geeks feel impelled to enrol their children in expensive private schools where there are no screens.9,10 As Chris Anderson, former editor of Wired and now chief executive of a robotics company, explains: ‘my [five] kids [6 to 17] accuse me and my wife of being fascists and overly concerned about tech, and they say that none of their friends have the same rules. That’s because we have seen the dangers of technology firsthand. I’ve seen it in myself, I don’t want to see that happen to my kids.’7 In his view, ‘on the scale between candy and crack cocaine, it’s closer to crack cocaine’.8 As the French journalist Guillaume Erner, who holds a doctorate in sociology, puts it: ‘The moral of the story is: you can put your children in front of screens, but those who make the screens will continue to put their children in front of books.’11

Who should we believe? At the heart of this tangle of contradictions, who is bluffing, and who is wrong? Where is the truth? Do our children, nourished by screens, comprise ‘the smartest generation ever’, as Don Tapscott, a consultant specializing in the impact of new technologies, assures us,12 or are they rather ‘the dumbest generation’, as Mark Bauerlein, professor of English at Emory University, puts it?13 More generally, is the current ‘digital revolution’ an opportunity for our offspring or a grim mechanism for creating imbeciles? The point of this book is to answer that question. For the sake of clarity, the analysis is organized into three main parts. The first assesses the reality of the original basic concept, one still very much alive, of the ‘digital native’. The second analyses the twofold qualitative and quantitative nature of the digital activities of our children and adolescents. The third examines the impact of these activities. Different fields are then considered: academic success, development and health. Before continuing, however, three points must be clarified.

First, although it attempts to conform to the most rigorous academic standards, this book does not meet the formal criteria of scientific writing. This is mainly because it hopes to be accessible to everyone, parents, health professionals, teachers, students, etc. But also because it is fuelled by real anger. I am stunned by the partial, biased and unfair nature of the way many mainstream media treat screens. As we will see throughout the book, there is a huge gap between the unsettling reality of available evidence and the frequently reassuring (or indeed enthusiastic) content of journalistic discourses. This disparity, however, is not in the least surprising. It simply reflects the economic power of the digital recreation industries. Each year, these generate billions in profits. However, if recent history has taught us anything, it is that our industrial friends do not easily give up the profits they amass, even if this is detrimental to consumer health. At the heart of this war waged by mercantilism against the common good is a powerful armada of complacent scientists, overzealous lobbyists and professional merchants of doubt.14 Tobacco, medicine, food, global warming, asbestos, acid rain, etc. – the list is full of instructive precedents.14–25 It would be surprising if the digital recreational sector had escaped this assault. So I stand fully behind the sometimes mordant form of the present work, even though I understand that the emotions expressed in it might jar with the usual way one imagines a cold and objective science, a science which, by nature, is supposed to be incompatible with any form of emotional expression. I don’t believe in this disembodiment. In writing this book, I was especially keen not to produce a boring essay, impersonal and stiff. Beyond the data which constitute the indisputable heart of this document, I wanted to share with the reader both my concerns and my indignation.

Second, my aim is not to tell anyone what to do, believe or think. Nor do I seek to stigmatize screen users or pass any critical judgement on the educational practices of any particular parent. I simply wish to inform readers by offering them as exhaustive, precise and sincere a synthesis as possible of the existing scientific knowledge. Of course, I understand the usual argument that we have to stop guilt-tripping people, worrying and alarming them by creating unnecessary ‘moral panics’ around screens. I also understand the army of self-righteous people who explain to us that these panics are the products of our fears and that they come with every form of societal or technological advance. The frightened little group of reactionary obscurantists – they tell us – have already tried to put the wind up us, for example, in connection with the pinball machine, the microwave, rock’n’roll, printing and writing (denounced in his time by Socrates for its possible impact on memory).

Unfortunately, however alluring they may be, these considerations are flawed. The thing is, if I may say so, that there are no studies establishing the harmfulness of pinball, microwave or rock’n’roll. At the same time, there is a solid body of work highlighting the positive influence on people’s development of books and being able to read and write.26,27 Therefore, what rules out a hypothesis is not its initial formulation but its ultimate evaluation. Some people were afraid of rock’n’roll. This fear was groundless; end of story. Others worried about writing. An extensive scientific literature has invalidated this fear; hurrah for that! The same goes for screens. Never mind the hysterical fears of the past. Only current scientific information should count: what does it say, where does it come from, is it reliable, how consistent is it, what are its limits, etc.? It is by answering these questions that everyone will be able to make an informed decision, not by muddying the waters and resorting to the well-worn evasions of alarmism, guilt or moral panic.

Third and finally, there is no question here of rejecting ‘the’ digital world as a whole and demanding, without further ado, the return of the wired telegraph, Pascal’s calculating machine or tube radios. This text really is in no way technophobic! In many areas – linked, for example, to health, telecommunications, air transport, agricultural production and industrial activity – the extremely fruitful contribution of digital technology cannot be disputed. Who would complain about seeing automata operating in fields, mines or factories and performing all kinds of brutal, repetitive and destructive tasks which until then had to be carried out by men and women at the cost of their health? Who can deny the enormous impact that computing, simulation, data storage and data sharing tools have had on scientific and medical research? Who can question the value of software for word processing, management, and mechanical and industrial design? Who would dare to say that the existence of appropriate educational and documentary resources, freely accessible to all, is not a benefit? Nobody, of course. However, these indisputable benefits should not mask the existence of far more damaging advances, particularly in the field of recreational consumption – especially because, as we will have the opportunity to see in detail, this consumption alone accounts for almost all of the digital activities of the younger generations. In other words, when the arsenal of screens on offer today (tablets, computers, consoles, smartphones, etc.) is made available to children and adolescents, they do not put them to clearly positive uses, but exploit them in an orgy of recreational usage which, as research irrevocably shows, is harmful. Certainly, if children and adolescents focused their practices on the most positive things that digital media can offer, this book would not need to have been written.

Notes

 

1. Braque G., Le Jour et la Nuit, Gallimard, 1952.

2. Schleicher A., in ‘Une culture qui libère?’, round table organized by the newspaper Libération, Université catholique de Lyon, 19 September 2016.

3. Carter C., ‘Head teachers to report parents to police and social services if they let their children play Grand Theft Auto or Call of Duty’, dailymail.co.uk, 2015.

4. OECD, ‘PISA 2018 results (volume 1)’, oecd.org, 2019.

5. Phillips T., ‘Taiwan orders parents to limit children’s time with electronic games’, telegraph.co.uk, 2015.

6. Hernandez J. et al., ‘90 minutes a day, until 10 p.m.: China sets rules for young gamers’, nytimes.com, 2019.

7. Bilton N., ‘Steve Jobs was a low-tech parent’, nytimes.com, 2014.

8. Bowles N., ‘A dark consensus about screens and kids begins to emerge in Silicon Valley’, nytimes.com, 2018.

9. Richtel M., ‘A Silicon Valley school that doesn’t compute’, nytimes.com, 2011.

10. Bowles N., ‘The digital gap between rich and poor kids is not what we expected’, nytimes.com, 2018.

11. Erner G., ‘Les geeks privent leurs enfants d’écran, eux’, huffingtonpost.fr, 2014.

12. Tapscott D., ‘New York Times cover story on “growing up digital” misses the mark’, huffingtonpost.com, 2011.

13. Bauerlein M., The Dumbest Generation, Tarcher/Penguin, 2009.

14. Oreskes N. et al., Merchants of Doubt, Bloomsbury, 2010.

15. Petersen A. M. et al., ‘Discrepancy in scientific authority and media visibility of climate change scientists and contrarians’, Nat Commun, 10, 2019.

16. Glantz S. A. et al., The Cigarette Papers, University of California Press, 1998.

17. Proctor R., Golden Holocaust, University of California Press, 2012.

18. Angell M., The Truth About the Drug Companies, Random House, 2004.

19. Mullard A., ‘Mediator scandal rocks French medical community’, Lancet, 377, 2011.

20. Healy D., Pharmageddon, University of California Press, 2012.

21. Goldacre B., Bad Pharma, Fourth Estate, 2014.

22. Gotzsche P., Deadly Psychiatry and Organized Denial, People’s Press, 2015.

23. Leslie I., ‘The sugar conspiracy’, theguardian.com, 2016.

24. Holpuch A., ‘Sugar lobby paid scientists to blur sugar’s role in heart disease – report’, theguardian.com, 2016.

25. Kearns C. E. et al., ‘Sugar industry and coronary heart disease research: a historical analysis of internal industry documents’, JAMA Intern Med, 176, 2016.

26. Cunningham A. et al., Book Smart, Oxford University Press, 2014.

27. Cunningham A. et al., ‘What reading does for the mind’, Am Educ, 22, 1998.

PART ONE

DIGITAL NATIVES: Building a myth

A [good] liar begins by making the lie seem like a truth, and ends up making the truth seem like a lie.

Alphonse Esquiros1

The ability of certain journalists, politicians and media experts to spread, quite uncritically, the most extravagant fables put about by the digital industry is quite breathtaking. We could just shrug these fables off with a smile. But that would be to ignore the power of repetition. Indeed, by dint of being reproduced, these fables end up becoming, in the collective mind, real facts. We then leave the field of substantiated debate and approach the space of urban legend – of a story ‘that is held to be true, sounds plausible enough to be believed, is based primarily on hearsay, and is widely circulated as true’.2 So if you repeat often enough that the younger generations have different brains and learning styles because of their phenomenal digital literacy, people eventually believe it; and when they believe it, their whole view of children, of learning and of the educational system is affected. Deconstructing the legends that pollute thought is therefore the first essential step towards an objective and fruitful reflection on the real impact of digital technology.

‘A different generation’

In the wonderful digital world, there are many different fictions. Yet, in the final analysis, almost all of them rest on the same basic illusion: screens have fundamentally transformed the intellectual functioning and relationship to the world of young people, now called ‘digital natives’.3–7 For the missionary army of the digital catechism,

three salient features characterize this [younger] generation: zapping, impatience and the collective. They expect immediate feedback: everything has to go fast, if not very fast! They like to work in a team and have an intuitive, even instinctive, cross-sectional digital culture. They have understood the strength of the group, of mutual aid and of collaborative work […] Many shy away from demonstrative, deductive reasoning, ‘step by step’ argumentation, in favour of trial and error encouraged by hypertext links.8

Digital technologies are now ‘so intertwined with their lives that they are no longer separable from them […] Having grown up with the Internet and then social networks, they tackle problems by relying on experimentation, exchanges with those around them, and cross-functional cooperation on given projects.’9 Let’s face it, these kids ‘are no longer “little versions of us,” as they may have been in the past. […] They are native speakers of technology, fluent in the digital language of computers, video games, and the Internet.’10 ‘They’re fast, they can multitask and they can zap easily.’11

These developments are so profound that they render all old-world pedagogical approaches definitively obsolete.8,12–14 It is no longer possible to deny the reality: ‘our students have changed radically. Today’s students are no longer the people our educational system was designed to teach. […] [They] think and process information fundamentally differently from their predecessors.’7

In fact, they are so different from us that we can no longer use either our 20th century knowledge or our training as a guide to what is best for them educationally. […] Today’s students have mastered a large variety of [digital] tools that we will never master with the same level of skill. From computers to calculators to MP3 players to camera phones, these tools are like extensions of their brains.10

Lacking the appropriate training, therefore, current teachers are no longer up to speed, they ‘speak an outdated language (that of the pre-digital age)’.7 Certainly, ‘it is time to move on to another type of pedagogy that will consider the changes in our society’,15 because ‘yesterday’s education will not permit us to train the talents of tomorrow’.16 And in this context, the best thing would be to give our prodigious digital geniuses the keys to the system as a whole. Freed from the archaisms of the old world, ‘they will be the single most important source of guidance on how to make their schools relevant and effective places to learn’.17

We could fill dozens of pages with pleadings and proclamations of this kind. But that would hardly be of any interest. Indeed, beyond its local variations, this torrent of verbal diarrhoea is still centred on three major propositions: (i) the omnipresence of screens has created a new generation of human beings, totally different from the previous ones; (ii) members of this generation are experts in handling and understanding digital tools; (iii) to maintain any effectiveness (and credibility), the education system must adapt to this revolution.

No convincing evidence

For fifteen years now, the validity of these claims has been methodically assessed by the scientific community. Here again – unsurprisingly – the results obtained directly contradict the blissful euphoria of fashionable fictions.2,18–27 As a whole, ‘the digital native literature demonstrates a clear mismatch between the confidence with which claims are made and the evidence for such claims’.26 In other words, ‘to date, there is no convincing evidence to support these claims’.23 All these ‘generational stereotypes’23 are clearly ‘an urban legend’2 and the least one can say is that ‘the optimistic portrayal of younger generations’ digital competences is poorly founded’.28 The conclusion? All the available elements converge to show that ‘digital natives are a myth in their own right’,19 ‘a myth which serves the naïve’.29

In practice, the major objection raised by the scientific community to the concept of digital natives is disconcertingly simple: the new generation supposedly referred to by these terms does not exist. Undeniably, one can always find, by looking carefully, a few individuals whose consumption habits vaguely correspond to the stock stereotype of the over-competent geek glued to his screens; but these reassuring paragons are more the exception than the rule.30,31 As a whole, the so-called ‘Internet generation’ is much more akin to ‘a collection of minorities’32 than a cohesive group. Within this generation, the extent, nature and expertise of digital practices vary considerably with age, gender, type of studies pursued, cultural background and/or socio-economic status.33–40 Consider, for example, the time spent on recreational uses (figure 1, top). Contrary to the myth of a homogeneous over-connected population, the data report a great diversity of situations.41 Thus, among pre-teens (8–12 years), daily exposure varies more or less continuously from ‘nothing’ (8% of children) to ‘insane’ (more than 8 hours, 15%). Among adolescents (13–18 years old) these disparities remain notable, even if they decrease a little to the benefit of significant users (62% of adolescents spend more than 4 hours per day on their screens for recreation). To a large extent, this heterogeneity aligns with the socio-economic characteristics of the household. Disadvantaged subjects thus display a very significantly longer average exposure (about 1 hour 45 minutes per day) than their privileged counterparts.41

Figure 1. Time spent on digital technology by pre-teens and adolescents. Top: variability in time spent with screens for recreation. Bottom: variability in the use of screens for homework (in this case, the low daily usage time – pre-teens 22 minutes; teenagers 60 minutes – does not allow, as for screens for recreation, depiction in terms of slices of time). Some totals do not add up to 100% due to rounding.41

Unsurprisingly, the tangled picture becomes even more complex when we include domestic uses related to the field of education (figure 1, bottom). Indeed, in this area too, the degree of inter-individual variability is considerable.41 Take the pre-teens. These are distributed roughly evenly between daily (27%), weekly (31%), exceptional (monthly or less often, 20%) and non-existent (never, 21%) users. The disparity remains among adolescents, even if it then tends to diminish due to the high proportion of daily users (59%; these represented just 29% in 2015,35 which reflects, as we shall be seeing in greater detail, the strong current movement towards digitalization in teaching). Once again, the family socio-economic gradient represents an important explanatory variable.41 Thus, among 13- to 18-year-olds, privileged pupils are significantly more likely than their disadvantaged counterparts to use a computer every day for their homework (64% as against 51%, with an average duration of 55 minutes as against 34). However, disadvantaged adolescents tend to use their smartphones more (21 minutes compared to 12).41 In short, presenting all these kids as a uniform generation, with homogeneous needs, behaviours, skills and learning styles, simply does not make sense.

A surprising technical ineptitude

Another essential objection regularly raised by the scientific community to the concept of digital natives concerns the supposed technological superiority of the younger generations. Immersed in the digital world, they are said to have acquired a degree of mastery forever inaccessible to the fossils of the pre-digital ages. This is a nice story – which unfortunately also poses several major problems. First, it is, until we have proof to the contrary, these same brave pre-digital fossils who ‘were [and often remain!] the creators of these devices and environments’.42 Also, contrary to popular belief, the overwhelming majority of our budding geeks display, beyond the most utterly elementary recreational uses, a level of mastery of digital tools that is wobbly, to put it mildly.28,36,43–6 The problem is so pronounced that a recent report by the European Commission placed the ‘student’s low digital competence’ at the top of the list of factors that could hinder the digitization of the education system.47 It must be said that, to a very large extent, these young people struggle to gain the most rudimentary computer skills: setting the security of their terminals; using standard office programs (word processing, spreadsheets, etc.); making video files; writing a simple program (whatever the language); configuring backup software; setting up a remote connection; adding memory to a computer; launching or disabling the execution of certain programs when the operating system starts up, etc.

And it gets worse. Indeed, beyond these glaring technical ineptitudes, the younger generations also experience appalling difficulties in processing, sorting, ordering, evaluating and synthesizing the gigantic masses of data stored in the bowels of the Web.48–53 According to the authors of a study devoted to this issue, believing that members of the Google generation are experts in the art of digital information retrieval ‘is a dangerous myth’.48 This depressing finding is corroborated by the conclusions of a large-scale study published by researchers at Stanford University. For them,

overall, young people’s ability to reason about the information on the Internet can be summed up in one word: bleak. Our ‘digital natives’ may be able to flit between Facebook and Twitter while simultaneously uploading a selfie to Instagram and texting a friend. But when it comes to evaluating information that flows through social media channels, they are easily duped. […] In every case and at every level, we were taken aback by students’ lack of preparation. […] Many assume that because young people are fluent in social media they are equally savvy about what they find there. Our work shows the opposite.43

Ultimately, this incompetence is expressed with ‘a stunning and dismaying consistency’. For the authors of the study, the problem goes so deep that it constitutes nothing less than a ‘threat to democracy’.

Certainly, these results are not surprising insofar as, in terms of digital capabilities, ‘digital natives’ use technology in a set of ways both ‘limited’34 and ‘unspectacular’.27 As we will see in detail in the next part, the practices of the younger generations revolve primarily around recreational activities that are, to put it mildly, basic and not very educational: TV programmes, films, series, social networks, video games, shopping sites, promos, musical and other videos of various kinds, etc.35,41,54–6 On average, pre-teens spend 2% of their screen time on content creation (‘such as writing, or making digital art or music’).41 Only 3% say they write computer programs frequently. Among adolescents, the corresponding figures are 3% and 2% respectively. As the authors of a large-scale study of usage write: ‘Despite the new affordances and promises of digital devices, young people devote very little time to creating their own content. Screen media use continues to be dominated by watching TV and videos, playing games, and using social media; use of digital devices for reading, writing, video chatting, or creating content remains minimal.’41 This conclusion also seems to hold for supposedly ubiquitous academic uses. On average, these represent a quite minor fraction of total screen time: less than 8% among pre-teens and 14% among adolescents (13–18 years). In other words, as figure 2 illustrates, when using their screens, 8- to 12-year-olds spend 13 times more time on entertainment than on study (284 minutes as against 22 minutes). For 13- to 18-year-olds, it is 7.5 times (442 minutes as against 60 minutes).41

Figure 2. Time spent on digital uses at home for entertainment (recreational) and schoolwork (homework) by pre-teens (8–12 years) and adolescents (13–18 years). For details, see the text.41

In this context, believing that digital natives are experts of the megabyte is the same as mistaking my pedal cart for an interstellar rocket; it’s the same as believing that the simple act of mastering a computer application enables the user to understand anything about the physical elements and software involved. Perhaps this was (somewhat) the case ‘before’, in the heyday of early DOS and UNIX, when even installing a simple printer meant embarking on a Homeric journey. It’s interesting, in any case, to relate this idea to the results of an academic study reporting that personal recreational use of a computer was positively correlated with students’ mathematical performances in the 1990s, and negatively in the 2000s (the age of millennials).57 This is understandable if we remember that the use and function of home computers have changed drastically in two decades. For today’s children and teenagers, as we have just said, these tools, which can be consumed endlessly without any effort or special skills, are mainly used for entertainment. Today, everything is pretty much ‘plug and play’. Never has the distance between ease of use and complexity of implementation been so great. It’s now about as necessary for the average user to understand how their smartphone, TV or computer works as it is useful for Sunday gourmets to master the intricacies of the culinary art in order to eat at the Ritz; and (above all!) it’s crazy to think that the mere fact of eating regularly in a large restaurant will allow just anyone to become an expert cook. In the culinary field, as in the computer field, there is the person who uses and the person who designs – and the former clearly does not need to grasp all the secrets of the latter.

For those who doubt this, a short detour through the population of ‘digital immigrants’a,7 might prove rewarding. Indeed, several studies report that adults are generally just as competent23,34,38 and diligent58–60 in digital matters as their young descendants. Even seniors manage, without great difficulty, when they deem it useful, to gain access to this new universe.61 Take, for example, my friends Michèle and René. These two retirees have both clocked up over 70 years; they were born long before the spread of television and the birth of the Internet. They were 30 before they got their first landline phone. This hasn’t stopped them, today, owning a giant flat screen, two tablets, two smartphones and a desktop computer; they can order their plane tickets on the Internet; use Facebook, Skype, YouTube and a video-on-demand service; and play video games with their grandchildren. More connected than her partner, Michèle is forever feeding the Twitter account of her walking club with selfies and punchlines.

Frankly, how can you believe for a single second that such practices are likely to turn anyone into a computer maestro or some coding genius? Any idiot can pick up these tools in a matter of minutes. After all, they have been thought out and designed with that in mind. So, as a senior executive in Google’s communications department who chose to put his children in a primary school without screens explained to the New York Times recently, using these kinds of apps is ‘supereasy. It’s like learning to use toothpaste. At Google and all these places, we make technology as brain-dead easy to use as possible. There’s no reason why kids can’t figure it out when they get older.’62 In other words, as the American Academy of Pediatrics explains, ‘do not feel pressured to introduce technology early; interfaces are so intuitive that children will figure them out quickly once they start using them at home or in school’.63 On the other hand, if the cardinal dispositions of childhood (and adolescence) have not been sufficiently mobilized, it is generally too late subsequently to learn to think, reflect, maintain one’s concentration, make an effort, control language beyond its rudimentary bases, hierarchize the broad flows of information produced by the digital world or interact with others. Basically it all comes down to a plain and simple question of timing. On the one hand, as long as you spend a minimum amount of time on it, a late conversion to digital won’t stop you from becoming as agile as the most seasoned digital natives. On the other hand, premature immersion will inevitably distract you from essential learning which, due to the progressive closing of the ‘windows’ of brain development, will become increasingly difficult to accomplish.

Political and commercial interests

So, obviously, the idyllic media portrait of digital natives lacks a bit of factual substance. This is a bore; but it’s not surprising. Indeed, even if we turn completely away from the facts to stick to a strictly theoretical interpretation, the ludicrousness of this dismal tale continues to be perfectly obvious. Take the quotes presented throughout this chapter. They affirm with the most finger-wagging seriousness that digital natives represent a mutant group that is at the same time hyper-connected, dynamic, impatient, zapping, multitasking, creative, fond of experimentation, gifted for collaborative work, etc. But ‘mutant’ also means ‘different’. Therefore, what is implicitly reflected here is also the image of a previous generation that was miserably lonely, amorphous, slow, patient, single-task, devoid of creativity, unfit for experimentation, resistant to collective work, etc. This is an odd picture; at a minimum, it suggests two lines of thought. The first questions the efforts made to positively redefine all sorts of psychological attributes that have long been known to be highly detrimental to one’s intellectual performance: dispersion, zapping, multitasking, impulsiveness, impatience, etc. The second questions the anarchic and surreal relentlessness devoted to caricaturing the pre-digital generations as fuddy-duddies. It makes you wonder how the pathetic, individualistic, slug-like cluster of our ancestors survived the throes of Darwinian evolution. As the teacher and researcher in educational theory Daisy Christodoulou writes, in a very well documented book in which she deliciously disassembles the founding myths of the new digital pedagogies, ‘it is quite patronising to suggest that no one before the year 2000 ever needed to think critically, solve problems, communicate, collaborate, create, innovate or read’.64 Likewise, one might add, it is truly ludicrous to suggest that the world ‘before’ was made up of unsociable hermits. With all due respect to the voracious technogobblers of all stripes, despite the absence of email and social media, baby boomers by no means lived isolated in some ocean of solitude. People who wished to do so easily managed to communicate, exchange, love one another and maintain strong bonds, even from a distance. There was the telephone and there was the postal service. As a child, I spoke every week to my Aunt Marie in Germany. I also wrote to my cousin Hans-Jochen, after each game won by Bayern Munich, the legendary football team of which he was a devoted fan. He always replied, sometimes with a simple card, sometimes with a small package in which I found a key ring, a mug or a club shirt. Those who doubt these realities should also look at the impressive correspondence from writers such as Rainer Maria Rilke, Stefan Zweig, Victor Hugo, Marcel Proust, George Sand and Simone de Beauvoir, and the many letters, often poignant, sent to their families by soldiers at the front during the Great War.65

I can obviously understand the marketing interest of the current caricatures. But frankly, they are all singularly lacking in seriousness. Take education, as a final example. When a French parliamentarian, supposedly a specialist in education issues, the author of two official reports on the importance of information technologies for schools,66,67 allows himself to write such hair-raising statements as ‘digital technology allows us to set up pedagogies of self-esteem, experience, and learning’,8 one can only hesitate between laughter, anger and dismay. What does our dear MP mean? That in pre-digital classrooms there was no question of pedagogy, or of experimentation, or of self-esteem? Fortunately, such distinguished educationalists as Rabelais, Rousseau, Montessori, Freinet, La Salle, Wallon, Steiner and Claparède are no longer there to hear the insult. And then, really – what an incredible revolution! Just think: ‘a pedagogy of learning’. As if it could be otherwise; as if pedagogy did not intrinsically name a kind of art of teaching (and therefore of learning); as if any pedagogy could set out to produce stiff conformity, stupefaction and stagnation. There is something a little scary about realizing that it is this kind of hollow and ridiculous rhetoric that drives education policy in our schools.

‘A more developed brain’

The myth of the digital native often comes, as we have just pointed out, with that astonishing chimera, the mutant child. According to this strange view, the human lineage today can look forward to bright new prospects. Current evolution, certain specialists inform us, ‘may represent one of the most unexpected yet pivotal advances in human history. Perhaps not since Early Man first discovered how to use a tool has the human brain been affected so quickly and so dramatically.’68 Oh yes: what you need to know is that ‘our brains are evolving right now – at a speed like never before’.68 Besides, make no mistake, our children are no longer truly human; they have become ‘extraterrestrials’,69 ‘mutants’.69,70 ‘They don’t have the same brains anymore.’71 They ‘think and process information fundamentally differently from their predecessors’.7 This generation ‘is smarter and quicker’4 and its neural circuitry is ‘wiring up for rapid-fire cybersearches’.72 Subjected to the beneficial action of screens of all kinds, our children’s brains have ‘developed differently’.4 They no longer have ‘the same architecture’73 and have been ‘improved, extended, enhanced, amplified (and liberated) by technology’.74 These changes are so deep and fundamental ‘that there is absolutely no going back’.7

All these ideas are supported mainly by evidence from the field of video games. Indeed, several brain imaging studies have convincingly demonstrated that the brain of gamers exhibited certain localized morphological disparities compared to the brain of the man or woman in the street.75–9 This has been a godsend for our valiant journalists, some of whom probably have no compunction about reaching for the joystick when necessary. Throughout the world, they gave these studies a triumphant reception, and splashed out on flashy headlines. Examples include: ‘playing video games can boost brain volume’;80 ‘video game enthusiasts have more grey matter and better brain connectivity’;81 ‘the surprising connection between playing video games and a thicker brain’;82 ‘video gaming can increase brain size and connectivity’;83 etc. Nothing less. It makes you wonder how sane adults can still deprive their children of such a windfall. Indeed, even if the idea is not precisely formulated, behind these titles we find a clear affirmation of competence: dear parents, thanks to video games, your children will have more developed and better connected brains, and this – as everyone will have understood – will increase their intellectual efficiency.

A pleasing fiction

Unfortunately, the myth, yet again, does not stand up to scrutiny for long. To get a sense of how much empty media nonsense this is, you have merely to understand that any persistent state and/or any repetitive activity changes the brain’s architecture.84 In other words, everything we do or experience changes both the structure and function of our brains. Some areas become thicker, others thinner; some connections develop, others become more tenuous. This is a characteristic of brain plasticity. In this context, it becomes obvious that the preceding titles can apply indiscriminately to any specific activity or recurring condition: juggling,85 playing music,86 consuming cannabis,87 having a limb amputated,88 driving a taxi,89 watching television,90 reading,91 playing sports,92 etc. However, to my surprise, I have never seen headlines in the press explaining, for example, that ‘watching television can boost brain volume’, that ‘smoking cannabis can increase brain size and connectivity’ or that there is a ‘surprising connection between limb amputation and a thicker brain’. Yet again, these headlines would have exactly the same relevance as those commonly put forward when it comes to video games. So frankly, to say that gamers have a different brain architecture is to go into raptures over a truism. You might as well trumpet the fact that water is wet. Of course, it is easy to understand the CEO of Ubisoft entering the fray to explain, in a documentary broadcast on a French public TV channel, that thanks to video games ‘we have more developed brains’.a,93 What is more difficult to admit is that supposedly well-trained and independent journalists continue to repeat, without the least critical distance, this kind of grotesque propaganda.

This crass sham seems even more blatant when we realize that the link between cognitive performance and brain thickness is far from unequivocal. Indeed, when it comes to brain function, bigger does not necessarily mean more efficient. In many cases, a thinner cortex is functionally more efficient, with the observed thinning reflecting a pruning of supernumerary or unnecessary connections between neurons.94 Take intelligence quotient (IQ). In adolescents and young adults, its development is associated with a gradual thinning of the cortex in a number of areas, especially the prefrontal area, that studies of the influence of video games have found to be thicker.95–7 Specific studies of these prefrontal areas have even linked the extra cortical thickness observed in gamers with a decrease in IQ.98 This negative relationship has also been described in frequent television viewers90 and pathological Internet users.99 So now is the time to face the facts: ‘a bigger brain’ is not a reliable marker of intelligence. In many cases, a cortex that is locally a bit on the plump side is the sign, not of any wonderful functional optimization, but of a sad lack of maturation.

Dubious shortcuts

The above-mentioned attention-grabbing ‘headlines’ are sometimes accompanied, it is true, by some specific assertions about the nature of the anatomical adaptations observed. So we are told, for example, that one study76 has just reported that the brain plasticity associated with the sustained use of Super Mario can be observed ‘in the right hippocampus, right prefrontal cortex and the cerebellum. These brain regions are involved in functions such as spatial navigation, memory formation, strategic planning and fine motor skills of the hands.’100,101 Basically, this kind of publishers’ gold dust is careful not to assert that a causal link exists between the anatomical changes observed and the functional aptitudes postulated, but the turn of the sentence strongly invites us to believe in the existence of such a link. Thus, the average reader will understand that the thickening of the right hippocampus improves spatial navigation and memorization potential; the thickening of the right prefrontal cortex signals the development of strategic thinking skills; and the thickening of the cerebellum marks an improvement in dexterity. This is impressive – but unfortunately unfounded.

Take the hippocampus. This structure is indeed central in the memorization process. But not in a uniform way. The posterior part of the right hippocampus, which thickens in gamers, is primarily involved in spatial memory. This means, as the authors of the study themselves admit, that what Super Mario users learn is to find their way around the game.76 In other words, the modifications observed here at the hippocampus level simply reflect the construction of a spatial map of the available paths and objects of interest inherent in this particular video game. The same type of transformation can be observed among taxi drivers when they gradually build up a mental map of their city.89 This poses two problems. First, this type of knowledge is highly specific, and therefore non-transferable: being able to orient yourself in the topographical tangle of Super Mario is of little use when it comes to finding your way on a road map or navigating your way through the spatial twists and turns of the real world.102 Second, and more fundamentally, this navigational memory has functionally and anatomically nothing to do with ‘memory’ as the term is generally understood. Playing Super Mario in no way increases the ability of practitioners to retain a pleasant memory, an English or history lesson, a foreign language, a multiplication table or any other knowledge whatsoever. Therefore, to imply that playing Super Mario has a positive effect on ‘memory formation’ is at best a category mistake, at worst gross bad faith. Let us add, for the sake of completeness, that recent work has reported that what was true for Super Mario was not necessarily true for first-person shooter games (where the player sees the action through the eyes of his or her avatar) that did not involve spatial learning.103 These games entail a reduction of grey matter in the hippocampus. However, as the authors of the study explicitly point out, ‘lower grey matter in the hippocampus is a risk factor for developing numerous neuropsychiatric illnesses’.103

The same is true of the right prefrontal cortex. This area supports a large number of cognitive functions, from attention to decision making, through the learning of symbolic rules, behavioural inhibition and spatial navigation.104–6 But, here again, there is no way of precisely linking any of these functions to the anatomical changes identified – something that the authors of the study readily acknowledge.76 In fact, when we look closely at the data, we see that the prefrontal adaptations resulting from heavy use of Super Mario are related solely to the desire to play! As the authors put it, ‘the reported desire to play the game leads to DLPFC [dorsolateral prefrontal cortex] growth’.76 In other words, this anatomical change could reflect a perfectly ordinary inducement of the reward systema of which the dorsolateral prefrontal cortex is a key element.104,107 Of course, the term ‘ordinary’ may seem ill-chosen when we know that the hypersensitivity of reward circuits, as developed by action video games, is closely associated with impulsivity and the risk of addiction.108–11 In fact, several studies have linked the thickening of the prefrontal areas considered here to a pathological use of the Internet and video games.99,112 These data are far from trivial when we remember that adolescence is a highly significant period in the maturation of the prefrontal cortex113–17 and, what is more, a time of extreme vulnerability for the acquisition and development of addictive, psychiatric and behavioural disorders.118–20 In this context, the anatomical changes gloated over by certain media could very well lay, not the foundations for a bright intellectual future, but the bases for a behavioural disaster yet to come; a hypothesis to which I will return in detail in the third part of this book (below).

That being said, even if all of the above reservations were rejected, the problem of generalization would still have to be considered. To imply that the prefrontal thickening seen in Super Mario users improves ‘strategic thinking’ skills is one thing; to show how this improvement can exist and be useful outside the specificities of the game is another, and very different, matter. Indeed, once the semantic syncretism of this catch-all concept has been removed, who can reasonably believe that ‘strategic thinking’ is a general skill, independent of the contexts and types of knowledge that have given it shape? So, for example, who can believe that there is something in common between the process of ‘strategic thinking’ entailed by Super Mario and the process required to play chess, complete a business negotiation, solve a maths problem, optimize a schedule or organize the arguments of an essay? The idea is not only absurd but also contrary to the most recent research reporting that there is hardly any transfer from video games to ‘real life’.121–9 In other words, playing Super Mario mainly teaches one how to play Super Mario. The skills thereby acquired are non-transferable. At best, they may extend to certain analogous activities subject to the same constraints as those imposed by the game.127,130

That leaves the cerebellum and the supposed improvement in dexterity. Here too there are obvious problems of interpretation and generalization. First, many other mechanisms could account for the anatomical adaptation observed (controlling postural stability and eye movement, learning to make connections between stimulus and response, etc.).131,132 Then, even if we accept the dexterity hypothesis, it is unlikely that the skill then acquired will be transferred beyond certain specific tasks that require us to control, via a joystick, the movement of an object we have located (for example, piloting a drone, or handling a computer mouse or a remote manipulator in surgery).133 Who can reasonably believe that playing Super Mario can promote the overall learning of fine visual skills such as playing the violin, writing, drawing, painting, hitting the ball in table tennis or building a Lego house? If there is one area where the extreme specificity of learning is now firmly established, it is that of sensorimotor skills.a,134

In conclusion

The main lesson of this part of the book is that digital natives do not exist. The digital mutant child, whose aptitude for tickling the smartphone has transformed him or her into a brilliant general practitioner of the most complex new technologies; whom Google Search has rendered infinitely more curious, agile and knowledgeable than any teachers of the pre-digital age could have done; who, thanks to video games, has gained a stronger and bigger brain; who, thanks to the filters of Snapchat and Instagram, has achieved the highest levels of creativity, and so on – this child is just a legend that can be found nowhere in the scientific literature. But such a child’s image, nonetheless, continues to haunt collective beliefs. And this is positively stupefying. Indeed, that such an absurdity could have emerged is, in itself, nothing out of the ordinary. After all, the idea deserved to be scrutinized. No, what is extraordinary is that such an absurdity persists through thick and thin, and, in addition, helps guide our public policies, especially in the educational field.

Beyond its folklore aspects, this myth obviously comes with ulterior motives.22 At the domestic level, first of all, it reassures parents by making them believe that their offspring are real geniuses of digital technology and complex thinking, even if, in fact, they only know how to use a few trivial (and expensive) apps. On the educational level, it also makes it possible – to the delight of a flourishing industry – to support the frenzied digitization of the system, despite what are, to say the least, worrying performances (I will come back to this in the third part). In short, everyone wins … except our children. But this is a problem that, quite clearly, nobody seems to care about.

Notes

 

a. This expression is used to describe ‘older’ users, born before the digital age – they are deemed to be less competent than digital natives.

a. Ubisoft is a major French company for the creation and distribution of video games.

a