Science and politics have collaborated throughout human history, and science is repeatedly invoked today in political debates, from pandemic management to climate change. But the relationship between the two is muddled and muddied.
Leading policy analyst Geoff Mulgan here calls attention to the growing frictions caused by the expanding authority of science, which sometimes helps politics but often challenges it.
He dissects the complex history of states’ use of science for conquest, glory and economic growth and shows the challenges of governing risk – from nuclear weapons to genetic modification, artificial intelligence to synthetic biology. He shows why the governance of science has become one of the biggest challenges of the twenty-first century, ever more prominent in daily politics and policy.
Whereas science is ordered around what we know and what is, politics engages what we feel and what matters. How can we reconcile the two, so that crucial decisions are both well informed and legitimate?
The book proposes new ways to organize democracy and government, both within nations and at a global scale, to better shape science and technology so that we can reap more of the benefits and fewer of the harms.
Cover
Title Page
Copyright
Acknowledgements
Introduction: The science–politics paradox
Notes
Part I How Science Meets Power
1 Uneasy interdependence
1.1 How science challenges political ideals
1.2 Science and liberal democracy
1.3 The drive for sovereignty and its limits
Notes
2 What is science and how does it connect to power?
2.1 Observation: trying to see the world as it is
2.2 Interpretation and sense-making
2.3 Action
2.4 The collective nature of science
2.5 The idea of a scientific state
2.6 The political character of science
Notes
Part II How States Have Used Science
3 The ages of techne and episteme
3.1 Engineering in the service of power
3.2 Science as amplifier of state power
3.3 Science for war
3.4 Science for commerce and growth
3.5 Science for glory
3.6 Science for power over people
Notes
4 Science bites back
4.1 We (partly) choose what we fear
4.2 Predicting global and existential risks
Notes
5 The scientist’s view of politics as corruptor
5.1 The ideal of self-government
5.2 The philosophy of autonomy
5.3 Self-doubts
Notes
Part III The Problem of Truths and Logics
6 Master, servant and multiple truths
6.1 The case for multiple, not infinite, truths
6.2 State and science and the dialectic of master and servant
Notes
7 Clashing logics
7.1 Knowledge, logics and cultures
7.2 The logic of science
7.3 The logic of politics
7.4 The logic of bureaucracy
7.5 How the logics intersect and clash
7.6 Impure philosophy
7.7 Is all science political?
7.8 Future synthetic logics
Notes
Part IV The Problem of Institutions: Solving the Science–Politics Paradox
8 Split sovereignty, or the role of knowledge in corroding the supremacy of politics
8.1 The nature of sovereignty
8.2 Plural sovereignty – knowledge, ecology and future generations
Notes
9 Democracy meets science
9.1 Scientists advising politics: the role of the ‘science triangle’
9.2 Brokers and intermediaries
9.3 Integrating science and politics through iteration and experiment
9.4 How to guide what you don’t understand: the principle of triangulation
9.5 Democracy shaping science
9.6 Democratizing the priorities of science: science for society
9.7 Choosing pathways
9.8 Slowing productivity and stagnation: science’s social contract
9.9 The public as makers of science
Notes
10 The flawed reasoning of democracy and its remedies
10.1 Politics protecting science from politics
10.2 Skilled publics: shaping a public able to exercise sovereignty
10.3 Skilled politics: the case for academies for politicians
10.4 Knowledge commons, superpolitics and science assemblies
10.5 Knowledge commons for metacognition
Notes
Part V The Problem of Scales: Borderless Science in a World of Borders
11 The clash between global and national interest
11.1 The evolutionary dynamics of competition and cooperation
11.2 Global imbalances and the struggle between hope and fear
11.3 ‘Changes not seen in a century’
11.4 Governance deserts
Notes
12 Governing global science and technology
12.1 The persistence of the idea of global government
12.2 The right metaphors: governance as a network not a single command centre
12.3 Science and the Sustainable Development Goals
12.4 A new economic base for global bodies: taxing global public goods
12.5 Global democracy and legitimation
12.6 A renewed United Nations founded on knowledge
Notes
Part VI The Problems of Meaning: Synthesis, Wisdom and Judgement
13 Science, synthesis and metacognition
13.1 Types of synthesis
13.2 Science and wisdom
13.3 Science and judgement: how to map and measure what counts as good science and good technology
Notes
14 The dialectics of what is and what matters
Index
End User License Agreement
Geoff Mulgan
polity
Copyright © Geoff Mulgan 2024
The right of Geoff Mulgan to be identified as Author of this Work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.
First published in 2024 by Polity Press
Polity Press, 65 Bridge Street, Cambridge CB2 1UR, UK
Polity Press, 111 River Street, Hoboken, NJ 07030, USA
All rights reserved. Except for the quotation of short passages for the purpose of criticism and review, no part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher.
ISBN-13: 978-1-5095-5308-2
A catalogue record for this book is available from the British Library.
Library of Congress Control Number: 2023937258
The publisher has used its best endeavours to ensure that the URLs for external websites referred to in this book are correct and active at the time of going to press. However, the publisher has no responsibility for the websites and can make no guarantee that a site will remain live or that the content is or will remain appropriate.
Every effort has been made to trace all copyright holders, but if any have been overlooked the publisher will be pleased to include any necessary credits in any subsequent reprint or edition.
For further information on Polity, visit our website: politybooks.com
I’ve benefited from the insights of very many thinkers and doers in writing this book – too many to mention all of them – and from a career that has taken me between the worlds of government and science, universities and practical action. I may have accumulated some prejudices along the way, but I hope the breadth of my background has helped me to see patterns that weren’t so obvious to others within a single field.
While writing the book I tried some of the ideas and arguments out with many colleagues, both in governments and in science, including through my role at UCL’s Department of Science, Technology, Engineering and Public Policy (STEaPP); through engagement with the Geneva Science and Diplomacy Anticipator (GESDA), the OECD’s science and technology team; government science advisers such as Sir Peter Gluckman, Maarja Kuusma and Joe Biden’s science adviser Eric Lander; Tatjana Buklijas and Kristiann Allen at the International Network for Government Science Advice; Anja Kaspersen (formerly at the UN, now at IEEE); Effy Vayena at ETH Zurich; Helen Pearson at Nature, who commissioned a piece on synthesis which prompted part of the book; the team at Science Advice for Policy by European Academies (SAPEA); David Mair and others at the European Commission Joint Research Centre; many colleagues at the OECD, UNESCO and UNDP (including through the STRINGs programme on steering research for the global goals, which fed into the chapters on international options); James Wilsdon and the ‘Research on Research Institute’ programme; and Robert Doubleday and others at the Centre for Science and Policy (CSaP) in Cambridge. I benefited from useful inputs from colleagues, including Professors Jon Agar, Arthur Petersen, J.C. Mauduit, Jo Chataway and Chris Tyler, and PhD students Basil Mahfouz and Alex Klein. I am also particularly grateful to Jonathan Skerrett at Polity for commissioning the book and for very useful feedback. And, naturally, I have drawn on the writings of many others whom I have never met, and gained from working with others who are no longer alive, including the sometimes rivalrous Bruno Latour and Pierre Bourdieu, and the UK science adviser Sir Robert May.
None of those mentioned above are, of course, in any way responsible for any errors of fact or judgement in this book and, though I hope they would all agree with its main argument, I doubt that any will agree with all of it. My hope is that it will promote the kind of debate – and healthy friction – which lies at the heart of the best science and the best politics.
Many of the world’s most prosperous cities – including London, Hamburg and Rome – have within them a research centre studying what are called ‘BSL4’ organisms. These are dangerous organisms, which pose a ‘high risk of life-threatening disease, aerosol-transmitted lab infections, or related agents with unknown risk of transmission’. The labs that study them are also labelled BSL4. They are familiar from the evening news as places full of people wearing what look like space suits, and they often show up in Hollywood films.
It’s estimated that there are some 69 of these either in operation or under construction worldwide. Most are located in urban areas. Most of the people living nearby, which probably includes many readers of this sentence, are unaware of them. Most might be shocked if they knew what was being done within them, often with only weak oversight and regulation,1 and would wonder why they are located in highly populated cities, like the one in Wuhan which some thought might lie behind the COVID-19 outbreak.2
Much of their work is necessary. But they are making judgements about risk that have huge significance for everyone else, for example when they experiment with combining viruses, or amplifying their harm or transmissibility. Yet there are no global agreements on how they should be run, no official registries of where they are, what they do, or how safe they are.
This is one of many examples of the gulf that has grown up between science, which is often necessary and inspiring but also often opaque and secretive, and the public interest, public dialogue, or what some would call common sense. It exemplifies the challenge every society faces in exercising power over knowledge, and it reminds us that while science may save us, whether from diseases or natural threats like asteroids, it may also kill us, whether with a nuclear Armageddon, invented pathogens or wayward intelligence.
The ambiguities of science and technology are evident in the impacts of the many technologies that have seen exponential improvements in recent years. The processing power of computers has roughly doubled every two years, in line with Moore’s law. The cost of sequencing the full human genome fell from around $100 million two decades ago to $100 today, while the cost of solar-powered electricity fell by almost 90 per cent in the 2010s alone.3
These advances happened alongside stagnant incomes for many in countries such as the US, stagnant wellbeing, declines in social connectedness and mental health, as well as worsening global ecological indicators and signs of potential systems collapse.
Young people reflect these paradoxes: most are very positive about science and technology, more so than their parents’ generation, and technology plays a big part in their lives. According to one survey of 20,000 young people across the world, 84 per cent say that technical advancements make them hopeful for the future. But this enthusiasm for science combines with pessimism. In sixteen out of twenty countries, more young people believed the world was becoming a worse place to live than believed it was becoming better, with many of the reasons cited being indirect effects of science, from workplace automation and climate change to the harmful effects of social media.
Science is the most extraordinary collective achievement of the human species – a set of methods, mindsets, theories and discoveries that have changed every part of our lives. But these paradoxical patterns show that a powerful method for amplifying human intelligence is not always so intelligently directed.
So you might expect that the question of how to govern science, how to mobilize its benefits but avoid its risks, not just in biohazard labs but also in everything from artificial intelligence and food systems to space warfare, would be one of the most important questions of our times.
But how? Scientists have long argued that they should be given the maximum freedom to explore and discover. While there are good arguments for this, particularly in more fundamental science, it becomes ever less plausible the closer science and technology come to daily life. Although scientists are typically intelligent, thoughtful and decent people, it’s not obvious that they can be trusted to govern science, any more than the military can be put in charge of wars. Science alone can be tunnel-visioned: it needs other perspectives to show how to avoid harm.
This is where institutions come in. Our societies are made predictable and manageable through institutions. It’s through institutions that a public interest comes to be refracted. Within nations, an array of funders, regulators, agencies, commissions and parliamentary groups try to steer science and innovation. But there are glaring governance gaps – gaps where institutions are needed, whether in relation to AI and cybersecurity or synthetic biology. Moreover, public influence over science has declined in recent years as the proportion of R&D dominated by big firms has grown. Amazon, for example, spent $40bn on R&D in 2020, more than all but a handful of countries (the UK’s public R&D budget that year was around $14bn, Finland’s around $2.3bn).4
These gaps are even more evident at a global level where there is little effective governance. We have a World Bank, an International Monetary Fund and numerous other funds and development agencies. But the world lacks institutions charged with reflecting on science, its aims and its methods, or judging whether the world’s research capacity is directed to the right tasks and with the right methods. Scientists have achieved many breakthroughs in global cooperation, often below the radar of formal politics – from the rules of the Internet to the management of Antarctica and nuclear non-proliferation. Yet the best available data show a striking lack of alignment between what the world has decided are its top priorities – summarized in the Sustainable Development Goals – and the priorities of science and technology.5
Scientists are periodically involved in public calls to guide, constrain or rein in powerful technologies, particularly artificial intelligence. But, as I show later, these are usually so vague, and so devoid of any plan of action or any language for thinking about governance, that they have little impact.
In the nineteenth century, as constitutional monarchy became the norm in much of Europe, it was said that monarchs now reigned, but didn’t govern. Science is now in an opposite position. It governs but doesn’t reign and is only loosely accountable for the power it exercises.
Some of the reasons for this lie in the blind spots and biases of science itself. Scientists say they can’t make policy, that they have neither the skills nor the inclination to do so, and they fear being tarnished if they get too close to the grubby, compromised world of actual government. Yet their de facto power makes this position of detachment increasingly implausible.
Politics should be the answer, since it is the main way we make collective decisions. But politics looks ill-suited to the task of governing science. The dominant forms of modern politics were shaped in the nineteenth century: rule by representatives concentrated in parliaments in capital cities, with periodic elections, manifestos and programmes. There has been relatively little advance since then, despite many experiments on the periphery (from citizen assemblies and deliberations to virtual parliaments). Instead, politics often looks petty, short-term, half-informed, or irrelevant. The ways in which politicians are recruited and promoted don’t fit well with the tasks they have to fulfil and their roles are almost unique in being so unsupported by professional training – most learn on the job.
I’ve sometimes played a slightly mean trick on senior figures in politics.6 I ask them if they could give a five-minute talk on how the Internet works. Almost none can, though they use it for many hours a day. They know next to nothing about how it functions, or about the material reality of undersea cables and switches or the organization of addresses and protocols.7 Essentially for them it is magic, which perhaps helps explain why governments and parliaments found it so hard to respond intelligently as the Internet transformed so many areas of life, both for better and for worse.
So, for politics to play the roles that only politics can play we need a radically reformed politics. This is what I call the ‘science–politics paradox’: only politics can govern and guide science in the public interest, but politics has to change to be able to do this: to become more knowledgeable, more systematic in its methods, and some of the time, more scientific, benefiting from what I call the ‘new curriculum for power’ that encompasses data and systems, complexity and psychology as well as politicians’ more traditional grounding in law and economics.
I use Hegel’s story of the master and the servant as a way to make sense of this dynamic. Politics, the putative master, has nurtured a servant who now greatly outstrips the master in terms of capability and knowledge. Science has gained a de facto sovereignty of its own that sits alongside the traditional sovereignty of politics: the servant has to some extent become a master. Most of our collective decisions now involve science – from pandemics to climate change, the upbringing of children to clean air – and that collective knowledge now makes a claim that complements the claims of votes, or the desires of citizens.
But science has little to say on how we make these collective judgements, and little to say about questions of meaning, or wisdom. It can tell us what is, and what might happen, but it can’t tell us what matters, or what we should care about. For that we need politics, in its broadest sense. The idea that governments can simply ‘follow the science’ quickly falls apart on inspection.
Here I draw on Aristotle. He distinguished between ethics, which concerns what is a good life for an individual, and politics, which is concerned with the good life for a community. He believed that the health of the polis was essential to the full realization of human potential.8 He saw political science as a master discipline, an ‘architectonic’ that sits above the other disciplines, writing in the Nicomachean Ethics that ‘since political science uses the rest of the sciences, and since, again, it legislates as to what we are to do and what we are to abstain from, the end of this science must include those of the other sciences, so that its end must be the good for humanity’.9 Two thousand years later, politics can still override all other fields and disciplines through its power to make laws.10
In doing this it draws on ethics but goes broader.11 Indeed, most of the decisions to be made about science and technology go far beyond ethical reasoning: they involve very political judgements about who benefits and loses and they are highly contextual. The fashion for creating centres around the ethics of science (from biosciences to AI) is an understandable response to the failures of politics, and often produces intelligent commentary. But I suspect in the future it will be seen as a category error. Ethics alone cannot tell us how to design a system of welfare, when to fight wars, how to tax or how to police, or how to guide powerful new fields of science.
It is a political question, not an ethical one, whether power needs to be mobilized to block, accelerate or guide science and technology, with the authority to inspect, analyse and assert rules and laws. It is a political question whether power needs to be mobilized to influence how technology is itself enabling new forms of power, such as monopoly, predation or abuse. And it is a political question whether power needs to be mobilized to distribute the benefits of new knowledge (such as genetic enhancement).
During the course of the book, I dive into the nature of these relationships with power. I describe the complex history of state involvement in science; how states saw science as the means to military prowess or economic prosperity; the rising concern with risk; and the practical problems faced by governments and parliaments grappling with science advice. I describe the clashing logics of science, politics and bureaucracy and the ways in which these logics have a life of their own.
My conclusion is that we need a simultaneous scientization of politics and a politicization of science, reinventing both, so as to cultivate sciences that are reflexive and self-aware of their own limits and politics that are sufficiently well informed to guide processes that are often opaque, uncertain and hard to grasp.
This is not an argument for scientists becoming partisan. Quite the opposite. The more scientists appear parti-pris, using their authority as scientists to endorse views that have nothing to do with science, the less they will be trusted.12 The more they appear closed and narrow in their thinking, the less reason we will all have to take their conclusions seriously. For me, the politicization of science is more about scientists taking responsibility for the state of their society and being willing to engage in argument and debate about its priorities. It is about acknowledging that many of the most important decisions to be made about science are essentially political.
The heart of my case is an argument for metacognition. Metacognition is the crucial skill schools try to teach children: thinking about how to think, and knowing what the appropriate ways of thinking are for different tasks.13 In science and technology there are often very strong systems for cognition but only weak ones for metacognition, for reflecting in a rounded way on complex choices. This kind of metacognition amplifies the spirit of science, the commitment to exploration and doubt, but it sometimes challenges the practice. It also amplifies the best of democratic politics – a willingness to engage with other people and other views – and challenges its tendency to become narrow.
That requires a skill in looped rather than linear thinking, since multi-dimensional knowledge – knowledge that encompasses ethics, politics and much more – ultimately has to take precedence over the less dimensional knowledge of individual scientific disciplines. To handle a complex task like managing a pandemic, averting climate change, banning rogue AIs or fighting a war requires multiple types of knowledge, of which scientific knowledge is only one, and not always the most important. It’s necessary, in other words, to zoom out before zooming back in.
The mark of a mature political system, I argue, is that it has many different ways of mobilizing knowledge, suitable for tasks with varying degrees of technical and moral complexity, varying links to the daily life of citizens, and varying degrees of uncertainty, and that it can explain why different ones are used for different purposes.
Metacognition underpins synthesis. Science has extraordinarily strong tools for analysis and discovery. But it has surprisingly weak methods for thinking across boundaries or for synthesis. This became very apparent during the pandemic as some scientists became very powerful – but could not articulate how they would weigh up physical health against mental health, the needs of the economy or education. They could be hugely impressive within their domains, for example modelling the risks of transmission, or accelerating the development of vaccines, but were oddly inarticulate across domains. Yet many of our big challenges – from the complexities of shifting towards a hydrogen-based economy to population mental health – require exactly this kind of synthetic or holistic thought and action.
Science often governs itself. When it doesn’t, it is still much more often directed to the interests of states or of corporations than it is to the wider public interest. This is one field that has yet to democratize its governance, to ensure that common knowledge serves common interests. As I show, there is a wide gap between what the public say they want science to focus on and where brainpower is actually directed. This may be why, in the UK for example, a majority say that R&D does not benefit them.14
To better serve the public we also need better anticipation. Science produces new knowledge but also new risks and so, for any society, and for the world as a whole, the ability to understand, spot, anticipate and prevent is vitally important. Yet in most fields we have only weak institutions to do this. The IPCC attempts to anticipate trends in climate change – but is interesting in part because it is such an exception, with no equivalents in fields like artificial intelligence.
Anticipation in conditions of uncertainty points to actions that need to be different in nature from traditional laws and programmes. Precisely because of the uncertainties surrounding science they have to be revisable decisions – decisions that include clarity on the triggers, or new facts, which would require a change to the decision. Regulation needs to become more ‘anticipatory’ – able to anticipate technological change and to shift quickly in the light of its actual patterns, whether in relation to drones or quantum computing, genomics or self-driving cars.15 And, rather than one-off inquiries and commissions, we need more permanent, continuous assessment of benefits and risks: what I call ‘science and technology assemblies’, which bring together experts, politicians and the public at every level, from cities to nations to the world, and are supported by well-curated knowledge commons. These need to be both political – in the narrow sense of engaging with interests and values in the present – but also super-political, in the sense of seeking to take account of the interests of future generations.
These arguments about the ‘how’ of government and governance reflect the shifting nature of truth. States rest for their legitimacy on claims about truth, sometimes arbitrary and sometimes accurate. Science too aspires to the discovery of truths. Yet we are in a time when truth can seem slippery, when innumerable fakes, false images, videos and misleading claims proliferate, with deception becoming cheaper and more commonplace, making it harder than ever to know what to believe. In this context infrastructures of verification become even more important, and professions with a vocation for proof and truth become ever more socially vital. The basic methods of science – which involve scepticism and rigorous method to get closer to truths – matter not just to the work of science itself but to almost everything else. Indeed, the original motto of the Royal Society, founded in London in the seventeenth century – nullius in verba – is even more relevant in an age of wars over truth (the motto essentially means: don’t take anyone’s word; instead, test, interrogate, probe).
This – the wider value of science – makes it even more important that scientists engage. I argue for a ‘relational turn’: that science needs to work harder not just at explaining, but also at listening, responding, and opening up to democratic input. It is still common to hear scientists talk as if communication were enough to ensure trust. This is wrong. Scientists will be respected for their expertise but, in the long run, they will only be fully trusted if they are seen to care about the interests of the public.
The core insight of politics is that it is only through expression, argument and competition that we discover and express common interests. The core insight of science is that it is only through detached observation, experiment and scepticism that we discover useful truths. And the core insight of bureaucracy is that it is only through creating institutions, roles and rules that we make things happen.
This book makes the case for fusing these insights into a new generation of institutions to shape science, supported by new logics, that can help whole societies think together about their choices and implications, from data and evidence to imaginative speculation. Their task is to think and act synthetically, connecting the four stages that are vital for any governance of science and technology: analysis and observation; assessment and interpretation; action using the full range of possible tools, from laws and regulations to funds; and then adaptation in the light of what happens.
In emphasizing action and learning I take a different view from that of many writers on science, particularly in the field of science and technology studies, some of whom have opted for detachment: observing and analysing but steering clear of prescription. I suggest that their stance is a symptom of a more widespread ‘dynaphobia’: a fear that any engagement with power will be corrupting (fear that mirrors the excessive ‘dynaphilia’ of many politicians and officials, who perhaps love power too much and knowledge too little).
Many academics avoid making proposals, designing options and advocating for them, preferring to stay in the safer space of observation and critique. The result is a deficit of designs that becomes very obvious when, for example, societies need better ways of governing technologies such as artificial intelligence. At a time when we badly need useful options for the synthesis of science, politics and ethics, whether in government or business, many of those with the deepest knowledge of the issues are mute.
The job of creating and implementing such designs is a truly political task. With any emerging field of science and technology, a society has to decide whether to encourage it or discourage it; to fund or defund it; to establish new rules or institutions to guide it or to opt for benign neglect. These decisions are partly fractal, made not just in laws but also in the conscience of individual scientists, the managements of firms or foundations, shaped by media and movements. But many of the most important decisions at some point return to politics.
1.
Activity in these labs is regulated by quite detailed biosafety requirements for handling different organisms. However, the nature and implementation of these rules are very uneven.
2.
See for example: this parliamentary debate on the location of such facilities, which includes mention that the ‘USGAO has concluded that evidence is lacking that such research can be done safely on the mainland [of the USA]. It cites the outbreak at Pirbright as the best evidence that an island location is preferable …’:
https://publications.parliament.uk/pa/cm200708/cmselect/cmdius/360/360i.pdf
3.
For a comprehensive survey of exponential technologies see Azeem Azhar, Exponential: How Accelerating Technology is Leaving Us Behind and What to Do About It (Random House Business, 2021).
4.
The OECD publishes extensive data on R&D: this is their table for government spending:
https://stats.oecd.org/Index.aspx?DataSetCode=GBARD_NABS2007
5.
See the STRINGs report, 2022, which is discussed in more detail later on. Tommaso Ciarli et al., ‘Changing Directions: Steering Science, Technology and Innovation towards the Sustainable Development Goals’:
https://doi.org/10.20919/FSOF1258
6.
I’ve done the same exercise with business leaders and students too: surprisingly few do any better than the politicians.
7.
I would sometimes then recommend they read books like Andrew Blum’s Tubes: Behind the Scenes at the Internet (Penguin, 2013).
8.
It is perhaps a symptom of the hollowing out of politics that ethics is often used today in a much wider sense to encompass much of what is political.
9.
Nicomachean Ethics, I.2 (1094b4–7).
10.
Though, as I show later, politics needs external pressures and restraints as much as any other field: the principle of non-self-sufficiency applies to politics as much as science.
11.
Alongside Aristotle’s distinction between the good life of the individual and the community, others argue that ethics concerns universal principles, such as the golden rule, whereas morality is embedded in particular communities.
12.
An interesting recent article showed this clearly, using the example of the journal Nature’s endorsement of Joe Biden:
https://www.nature.com/articles/s41562-023-01537-5
It showed that the endorsement did little to increase Biden’s support but ‘reduced Trump supporters’ trust in scientists in general’.
13.
‘Metacognition is cognition over cognition: the set of higher order cognitive systems that monitor our mental processes … [supervising] our learning, evaluating what we know and don’t know, whether we are wrong or not …’ Stanislas Dehaene, How We Learn: The New Science of Education and the Brain (Penguin, 2020), p. 193.
14.
https://www.sciencecampaign.org.uk/app/uploads/2023/02/CaSE-Public-Opinion-February-2023-Trends-report.pdf
15.
I set out a framework for ‘anticipatory regulation’ in a piece in 2017:
https://www.nesta.org.uk/blog/anticipatory-regulation-10-ways-governments-can-better-keep-up-with-fast-changing-industries/
This approach was subsequently turned into new programmes in several countries, such as the ‘Regulatory Pioneers Fund’ run by the UK government.
Science is all around us and, if we look into the future, its significance is only set to grow. Science shapes our health and fuels new technologies increasingly integrated into our bodies, our homes and our cities. It illuminates the cosmic context of our lives and reveals the minutest details of life. It is a source of wonder, inspiration and awe.
Now, unlike our predecessors, we often can’t help but see things through a scientific lens. A patch of parched grass may be seen as the result of climate change. Unruly children may be interpreted through the lens of the science of parenting. A fast-food shop may be looked at through what we know of nutrition or obesity. In all these ways scientific reasoning connects to, and sometimes displaces, other lenses: land seen primarily through the lens of belonging; food through the lens of pleasure and gratification; children through the lens of belief or the sanctity of the family.
But as science has grown, so too has it become more dangerous. Some still hold to a view of science as cool, calm and ordered, with the quiet hum of laboratories and people in white coats bringing sanity and rationality where once there was chaos, capriciousness and violence.
However, this picture is not accurate. New knowledge is unsettling and destabilizing. It answers some questions but generates new ones. Ever more of the risks we face are the results, either direct or indirect, of scientific progress, from the development of a carbon-based industrial civilization to nuclear and biological weapons, rampant artificial intelligence to genetically modified organisms. New knowledge destroys old jobs as well as old sources of authority. It reveals new areas of ignorance and creates new anxieties and justified fears. This is the paradox of science: it is both ever more vital and ever more dangerous.
This is obvious when we look at the countless threats that originate in science, from pathogens to pollutants. The Commission for the Human Future, for example, highlighted in 2022 ten potentially catastrophic threats to human survival, a list similar to many others. These are (in no particular order):
Decline of natural resources, particularly water.
Collapse of ecosystems and loss of biodiversity.
Human population growth beyond Earth’s carrying capacity.
Global warming and human-induced climate change.
Chemical pollution of the Earth system, including the atmosphere and oceans.
Rising food insecurity and failing nutritional quality.
Nuclear weapons and other weapons of mass destruction.
Pandemics of new and untreatable disease.
The advent of powerful, uncontrolled new technology.
National and global failure to understand and act preventatively on these risks.
Most are the direct or indirect results of a civilization based on science and technology, and all are amplified by the last risk listed here. Together these make the case for new arrangements and institutions on the cusp of science and politics to better avoid ‘national and global failure’. But how? And how can science and politics collaborate?
Both science and politics have very long roots but have taken distinctive forms in the modern world: politics in the form of states, parties, parliaments and programmes, science in the form of disciplines, labs, methods for experiments or peer review. Both promote knowledge and effective action through a mix of competition and cooperation. Both rely heavily on words and prose. Both use a surprisingly similar structure to link thought and action. They observe – what’s happening and what matters. They interpret. And they act.
Yet there is a fundamental difference in how they think. Politics is infinitely flexible. There is no such thing as a ‘political truth’. All that matters is what works, for now, often with a very short time horizon. Science by contrast is dogmatic in the sense that it has a rigid view of what methods are acceptable and which are not (though its dogma is that there are no dogmas – everything is open to questioning). It polices its frontiers to root out heresies and falsehoods. It can take the long view, and its methods of analysis are deep, but also linear.
The mentality of science is by its nature sceptical and cold. Indeed, this is its greatest strength. Faced with any claim, it asks us to question, prod, doubt and interrogate. This is what distinguishes it from myth or narrative. It helps us to get closer to truths of all kinds. But it gives us little comfort. The mentality of politics is very different. It seeks to reassure and make sense, while representing, channelling and reflecting our collective needs and desires, rooted in time and space and in the lives we live. Science has relatively little to say about what matters, though it can warn and encourage. Politics has relatively little to say about facts, though it needs facts to guide its diagnoses and prescriptions.
Both are radically incomplete. Science achieves most of its impact in the world through the addition of engineering, which has a very different logic and way of working (though increasingly science and engineering are interwoven, for example in the frontiers of artificial intelligence). It has to combine with other kinds of reasoning – ethical, political, pragmatic – to make any judgements about what to do: science alone cannot tell us what counts as gender in sports or whether nuclear power is a good answer to climate change. It is a vital input but, in order to be useful for action, it has to accept its place alongside other types of knowledge that are just as relevant.
Politics too is incomplete. It should be a synthetic practice, drawing on many kinds of knowledge and aware of its own deficiencies and blind spots. But often it narrows down to a caricature. Its nature is to be flexible but this can become a pathology, without regard for facts, consistency or practicality. Politics that is only politics serves the public poorly. And so politics only works when it combines with other ways of knowing and doing.
Science and politics need each other to survive. Science needs the patronage of politics, politics needs the solutions of science. But they also compete for authority, for resources and for recognition. Their uneasy symbiosis casts a new light on old dilemmas. For two thousand years political philosophers have debated whether superior knowledge, or backing from other citizens, provides a sounder basis for legitimacy.1 We experience a similar tension in our own lives – the tension between what we know and what we do, feel or identify with. It is a rare person who acts straightforwardly on the basis of knowledge, whether in diet and fitness, relationships, voting choices or career choices. Instead, we struggle with the tension between what we know and what we are.
In politics the tension between the idea of legitimacy based on expertise, and legitimacy based on the expression of civic will, is equally unavoidable. No government can be entirely ‘evidence-based’; none can defer all decisions to scientists and experts; and none can ignore the moods, hopes and fears of the public.
Yet no government can be entirely driven by public desires either, since these will be incoherent and inconsistent, and no public would itself wisely choose to be driven by its own choices. Again, there is a parallel with our own individual lives: most of us put in constraints, commitments and arrangements that protect us from our own unstable volition.
In relation to politics, however, it is hard to articulate precisely what the limits of popular sovereignty should be. We prefer the myth of public wisdom: the claim that on balance, publics will tend to make the right choices, rather than the more accurate perception that there is at best a loose correlation between wishes and outcomes. This was well described by the Israeli political scientist Yaron Ezrahi as ‘an unsettling empty dark space at the foundation of political order’. That dark space has become increasingly unstable as science has grown and as politics has, at times, reacted aggressively to the challenge this implies.
In the case of politicians like Donald Trump and Vladimir Putin, their advisers and ideologists, there is no embarrassment in creating their own facts, their own universes of meaning, and speaking contemptuously of science when it doesn’t serve them. These are the more straightforward cases, where the world of myth hits the world of science. But just as common are much less easy cases, where facts and science, values and politics, intermingle without it being so obvious who is on the side of virtue.
An intimation of the possibly uncomfortable future relationship between science and politics could be seen on 20 July 2021 when Dr Anthony Fauci, Chief Medical Adviser to the US President, appeared before a Senate hearing. He was there to challenge the claim that the US government had funded research in Wuhan that could have led to the leak of the pathogen that caused the COVID-19 pandemic. In response to a question from one of the Senators he replied: ‘I totally resent the lie you are now propagating.’2
Such moments, when cool science hits hot politics, have become ever more common. In this particular case, it remains unclear whether COVID-19 was indeed the result of scientific research that then leaked (though it looks more likely that it came from animals in the Wuhan market, with raccoon dogs passing it on from bats). Eminent figures could be found on both sides of the argument.3 But what was not in doubt was that the US NIH had financed activity in Wuhan involving coronaviruses, with several organizations funded to do what’s called ‘gain-of-function’ research (which attempts to increase the virulence or harms of viruses), constrained only by weak provisions that funders should be informed if there were dramatic results. And it was certain that actions taken to allay suspicions – with key players who were implicated in the problems being recruited to investigate them, and key evidence suppressed – instead fuelled them.
This incident – which concerned a pandemic that caused some twenty million deaths globally – remains murky.4 But it highlighted many uncomfortable issues, including the risky patterns of some research, which pushes back the boundaries of human knowledge but can appear to lack much wisdom or common sense.
The US Congress concluded that they had lost control, during a year which should have been one of unmitigated triumph for science, having created and distributed a series of effective vaccines at extraordinary speed. Some ambitious politicians saw an opportunity: Ron DeSantis, Governor of Florida, called in late 2022 for a grand jury to investigate ‘any and all wrongdoing’ with respect to COVID-19 vaccines, signalling his intention to be even more sceptical of science than President Trump (only 17% of conservative Republicans now report having a lot of trust in scientists, compared to 67% of liberal Democrats).5 Here are signs of just how much science had become simultaneously indispensable but also problematic.
That same year a Wellcome Trust poll found that 80 per cent of people from 113 countries said they trusted science either ‘a lot’ or ‘some’, a level of support that other fields can only envy. But that success, too, masked uncomfortable patterns. 44 per cent of Americans do not believe that human activity is causing climate change,6 while in South Africa fewer than a fifth believe that it is.7 Shortly after the pandemic I walked down my local high street (in Luton, a medium-sized town in England) to see a series of stalls (some Christian, some Islamic) explaining that COVID was a punishment from God for various sins committed and proclaiming that science had failed and only religion could answer the true questions of life. Some shared the opinion of 40 per cent of Americans that we are living in ‘end times’.
What mattered to them was a world away from what seemed obvious to scientists, though it was paradoxical that quite smart science was playing its own role in undermining science (it’s estimated that nearly half of the Twitter accounts spreading messages about the pandemic were likely to have been bots).8
The arguments about COVID and vaccines were particularly intense examples of the evolving struggle between science and both its enemies and its sceptical friends. But they were hardly unique. During the same period the European Parliament debated new laws on artificial intelligence (I was on one of its advisory committees, part of STOA – Science and Technology Options Assessment). AI is extraordinary, impressive, part of our daily life and also part of our daily dreams and nightmares. As we will see, many of the scientists at the heart of it have tried to define ethical rules and limits to their own work, though with only limited success.
Politicians struggled to know what they should do. On the one hand they were told that AI was vital to the future prosperity of their continent, which was already slipping behind the US and China on the frontiers of computer science. On the other hand, they could see how manipulative and dangerous AI could be to their citizens. As a compromise they proposed to ban outright a range of algorithms that were deemed high risk and further pushed the principle that algorithms should be transparent and explainable. Some argued these were quite impossible to implement, as algorithms, and AI using neural nets, became ever more opaque and complex.
Yet the political pressures to regulate AI were unavoidable. In 2020 thousands had marched in London against an algorithm that had determined school exam grades. In the Netherlands an AI algorithm had determined incorrect payments for social security to thousands, causing much misery (and forcing the government to resign). During the same period – 2021 and 2022 – China introduced a clutch of new rules on AI, establishing ethical principles and limits, including the first city-level legislation in Shenzhen in late 2022 and the first provincial rules in Shanghai the same month, all trying to establish different categories of risk and constraint.
Here, too, science was everywhere but, again, also problematic, pushing issues onto the political agenda that politicians struggled to understand but which scientists also lacked the intellectual tools to judge. Look closely and this is now a normal situation, not an exception. Dozens if not hundreds of science assessments are underway in any country at any time. They may be estimating the cost of nuclear waste and deciding who should be responsible – since if it’s the industry, investment might stop – or trying to assess how much to hope for fusion, which always appears just thirty years from fruition, or whether to adjust the rules for gene-editing.
Some judgements are about when and how to accelerate technologies – like quantum computing – to ensure that nations play a part in coming economic booms. Others are about when and how to slow them down, as when decision-makers realize that, while quantum computing could drive new industries, it could also undermine privacy on the Internet, demolishing the cryptography on which cloud computing and messaging systems like WhatsApp depend, as well as having potentially catastrophic implications for national security.9
Science as threat; science as failure; science as prompt for new rules. All are now everyday dimensions of science and all are highly political in every sense of the word. In the words of Peter Gluckman, former chief science adviser to the Prime Minister of New Zealand, a host of problems now requires decisions that are simultaneously scientific and political, including ‘eradication of exogenous pests […], offshore oil prospecting, legalization of recreational psychotropic drugs, water quality, family violence, obesity, teenage morbidity and suicide, the ageing population, the prioritization of early childhood education, reduction of agricultural greenhouse gases, and balancing economic growth and environmental sustainability’.10
Yet our current arrangements to manage the boundaries of science and politics are no longer adequate. They acknowledge neither the growing sovereignty of science – its own claims to authority and legitimacy – nor its weakness: a lack of capacity for synthesis, for integrating scientific insights with other types of knowledge and grasping what truly matters. An ever-smaller proportion of political decisions makes no use of science, and, conversely, an ever-smaller proportion of scientific decisions is untouched by politics.11
But how should we use and govern science? How should a society use the best available knowledge to guide it? Should we worry more about out-of-control scientists or democratic politicians with little knowledge?
There are surprisingly few good examples of a society engaging in a thoughtful, rounded way with a challenging set of technologies. One is the UK’s engagement with human fertilization, and the challenges of ‘test-tube babies’, IVF, stem cells and cloning. Over several decades, from the 1980s onwards, there was sober and open public debate about the issues, with an extraordinary level of both public and parliamentary interest. A regulator was established in 1990 – the Human Fertilisation and Embryology Authority – which made a series of rulings that made often cutting-edge research possible, while also retaining public confidence.
But what is most striking about the HFEA is that it remains the exception rather than the rule.12 It is rare in having translated the work of an advisory committee into a body with real power; rare in that it helped shape, and was then empowered by, a political and parliamentary consensus that marginalized its many enemies; and rare in that it retained public confidence. There was nothing comparable for the Internet, even as evidence mounted about its harms; nothing comparable for genetic modification or for the many other fields where technology was advancing at an extraordinary pace; nothing comparable for artificial intelligence, despite decades of hand-wringing.
Models such as the HFEA are useful prompts, even if they cannot be precisely replicated in other fields such as the metaverse or synthetic biology.13 Institutions have to fit their context, and countries with polarized politics or strong religious institutions have to handle science in very different ways to ones that are consensual and predominantly secular.
We live in a very uneven world in other respects too, which limits the scope for standardized solutions. The US contributes roughly six times as much to greenhouse gases as the whole of Africa, whose population is four times greater. Countries differ by a factor of 42 in their neo-natal mortality rate; a factor of ten in the share of population with access to electricity; a factor of 50 in the share of population with access to the Internet; a factor of 2,500 in the number of scientific and technical journal articles per 1,000 population; and a factor of more than 1,000 in per capita energy related CO2 emissions.14
The world is also uneven in its experiences of science, and far away from the standard story of a linear progression. Instead, eras are jumbled; old technologies reappear alongside new ones. Wood, once seen as primitive, is now a material of choice, for example for twenty-storey buildings in Scandinavia.15 Most contemporary wars are fairly low-tech: the US was driven out of Afghanistan by Kalashnikovs and makeshift bombs, not missiles. Bicycles are the transport technology of choice in the world’s richest cities, while the poorest ones improvise with homes made of corrugated iron, DIY water and electricity and mobile phones. Our aircraft were typically designed sixty years ago; our cars predominantly use internal combustion engines whose main designs date back 150 years. It’s not surprising that perspectives vary and that generalizations mislead.
But, even in countries that are much more takers than shapers of science, there is no avoiding the need for politics to make decisions. The Internet is a good example. In its first decades it was largely seen as an opportunity (for prosperity or new ways of running public services): for most countries the only question was how to achieve more access to it and more services on it. Then, belatedly, it came to be seen as a threat (to childhood, morality and more), and in much of the world the first attempts at more detailed regulation – for example of its effects on children’s lives or privacy, through requiring ‘age-appropriate design’ – were made only thirty years after it became part of everyday life.16
Many believed that this offshoot of science could exist free from politics. Even though it originated in the military of the world’s supreme superpower, and even though its central governing body, ICANN, operated under contract to the US Department of Commerce, it was hoped that this new infrastructure could be a vessel for pure freedom, entirely separate from states, politics or government. In the words of John Perry Barlow in the famous Declaration of Independence of Cyberspace in 1996, ‘on behalf of the future I ask you of the past to leave us alone … you have no moral right to rule us’.
His motives were benign. But as Lawrence Lessig argued, ‘liberty in cyberspace will not come from the absence of the state. Liberty there, as anywhere, will come from a state of a certain kind. We build a world where freedom can flourish not by removing from society any self-conscious control, but by setting it in a place where a particular kind of self-conscious control survives. We build liberty as our founders did, by setting society upon a certain constitution.’17
Many countries designed rules for the Internet that were a very long way from freedom, choosing instead to block, coerce and ban. But Lessig’s fundamental point was right, and the fact that it took so long for politics to wake up says much about the gap between fast-moving technology and often sluggish governance.
The pattern was repeated a generation later with artificial intelligence: in the 2010s a flood of national strategies was published, promising to promote AI and use it for economic ends, alongside another flood of haphazard attempts at self-regulation by the scientists involved. Only very belatedly, at the end of the 2010s, when AI was already built into many of the devices used daily by billions of citizens, did attention turn to the need for new rules and institutions to govern it with the kind of ‘self-conscious control’ that Lawrence Lessig advocated. Once again politics came in late, ambivalent about the facts and uncertain about what, if anything, it should do.