
Description

An authoritative guide to what is needed for AI governance and regulation from expert authors internationally involved in the practical world of AI. This book tackles the question of why AI is a distinct challenge from other technologies and how we should seek to implement innovation-friendly approaches to regulation. It sets out many of the risks to be considered, why regulation is needed, and the form this should take to promote international convergence on AI governance and the responsible deployment of AI. This is a highly readable prescription for AI governance and regulation designed to encourage the technological goals of humanity whilst ensuring that potential risks are mitigated or prevented and, most importantly, that AI remains our servant and does not become our master.


LIVING WITH THE ALGORITHM: SERVANT OR MASTER?

AI GOVERNANCE AND POLICY FOR THE FUTURE

Tim Clement-Jones

With the assistance of Coran Darling


Contents

Title Page
Epigraph
Preface
About the Author
1 Introduction – The AI Narrative
2 AI Risks: What Are They and How Do We Assess Them?
   Risk identification and assessment
   AI and its risks
   The peculiarities of frontier AI
   Next steps
3 Digital Dividend or Deficit? Threats to Democracy and Freedom of Speech
   AI, disinformation and the threat to democracy
   The impact of generative AI
4 Public Sector Adoption: Live Facial Recognition, Lethal Autonomous Weapons, and Ethical Use
   Automated decision-making and frontier AI adoption in the public sector
   Live facial recognition
   Postscript: LFR in schools
   Autonomous weapons systems
5 AI and IP: Rewarding Human Creativity
   IP in training material
   Can AI create a copyrighted work?
   US Copyright and AI
   Patents and AI
   Performing rights and AI
   The future of AI and IP
6 Digital Skills, Digital Literacy, Digital Exclusion, and the Future of Work
   The digital future
   Digital skills for the future
   Digital literacy
   Digital exclusion and data poverty
   New employment rights
7 The Case for Ethics-Oriented Governance and Regulation
   Setting out the principles for AI development and adoption
   Legal AI liability, accountability and redress
   Corporate governance and AI
   The corporate challenge of generative AI
   The opportunity for change
   Next steps
8 The Role of Regulation: Patchwork Quilt or Fishing Net?
   Embedding ethical principles through regulation
   The EU approach
   AI governance in the US
   EU and US common ground?
   AI governance in the UK
   Automated decision-making and the GDPR
   The role of international standards
   Tackling Goliath: AI and competition
9 AI the Global Opportunity: Race or Regulation?
10 Concluding Thoughts for a Technological Tomorrow
   Postscript
Endnotes
Index
Copyright


‘We should regulate AI before it regulates us’

Yuval Noah Harari1

Preface

Over the past few years, we have begun to see an emerging divergence across the world in how countries, governments, and organisations are approaching the development and deployment of artificial intelligence, often fuelled by strongly held views on whether the technology poses a systemic risk to humanity as a whole or not. A cursory glance at public initiatives such as the development of targeted regulation, and international events and campaigns such as the open letter from AI experts calling for a pause on development of AI in March 2023,2 demonstrates that consensus on approach is far from being achieved and governance of the technology is very much in a fragmented state. One of the most pressing current questions across all sectors and industries is whether society waits until the existential risks of AI become more apparent before implementing tailored measures, or whether we approach AI with more immediate intervention as existing and developing risks are identified.

The intention of this book is to bring together a practical framework which distils many of the key insights from these approaches. I have set out both where I believe AI continues to pose substantial – and in some cases existential – risks to individuals and organisations, and the form of regulatory intervention which I believe will foster the creativity of developers and innovators while mitigating many of the current known and future unknown risks posed by AI and AI-leveraged technology.

As someone heavily involved in AI policy for many years, I continue to believe in the importance of law and regulation and effective policy initiatives for ensuring that AI is developed and deployed in ways that offer greater societal benefit and less potential harm. Given the relative novelty of the issue of AI regulation, domestic and international evidence of successful approaches is still far from complete. However, from lessons learned through international collaboration and through the regulation of other technologies I am optimistic about the options available to governments and organisations across the world.

I am increasingly convinced that the international ethical and safe development and adoption of AI systems can be secured. It is my hope that governments, regulators, civil servants, and practitioners, whatever the jurisdiction, will accept the challenges and put these constructive proposals into practice.

With that hope comes my great thanks to my wife Jean for putting up with my AI preoccupation over the years, and to colleagues in politics, technology, professional life, and academia who have travelled on this journey with me over the past few years. Writing about AI has always involved the risk of aiming for a moving target and penning this book has been no exception. Particular thanks are due to Coran Darling who has contributed greatly by providing his own expertise and that of his professional colleagues. Any errors or oversimplifications, however, are entirely my own!

 

Tim Clement-Jones January 2024


About the Author

Tim Clement-Jones

Tim Clement-Jones was Chair of the House of Lords Select Committee on Artificial Intelligence (2017–18) and is Co-Founder and Co-Chair of the All-Party Parliamentary Group on Artificial Intelligence. He was the initiator and a member of the recent House of Lords Select Committee inquiry into Autonomous Weapon Systems. He was made CBE for political services in 1988 and a life peer in 1998. He is now the Liberal Democrat spokesperson for Science, Innovation and Technology in the House of Lords.

Until 2018 Tim was a Partner at the global law firm DLA Piper, where he served as London Managing Partner. He is now a consultant to the firm on AI policy and regulation. He is Chair of Trust Alliance Group (formerly Ombudsman Services Limited), the not-for-profit, independent ombudsman service that provides dispute resolution for communications and energy utilities, and Chair of the Council of Queen Mary University of London. He is a former consultant to the Council of Europe’s AI Working Group (CAHAI) and is a member of the OECD’s ONE AI Expert Group.

Assisted by Coran Darling

Coran Darling is an international technology practitioner in law, AI, and data analytics. His primary work revolves around helping organisations to navigate the challenges of technology, data, and life sciences. He is a member of the OECD’s ONE AI Expert Groups on AI risks and AI incidents, a member of the Alan Turing Institute’s Data Ethics Group, a founding committee member of the AI Group of the Society for Computers and Law, and a non-parliamentary member of the UK’s All-Party Parliamentary Groups for AI and Data Analytics. He is a Fellow of the Responsible AI Institute, a member of the European Commission’s AI Alliance, a member of the US National Institute of Standards and Technology’s working group on generative AI, a contributor to the US Department of Commerce’s AI Safety Consortium, and an advisor to the British Standards Institution on matters of artificial intelligence and data.


1 Introduction – The AI Narrative

Inescapably, for better or worse, as a society we are becoming increasingly conscious of the impact of artificial intelligence (AI) in its many forms. Barely a day goes by now without some reference to AI in the news, whether positive, relating to a new technology capable of making everyone’s lives easier, or negative, warning of the systematic reduction of employment opportunities as humans are replaced by automation. With the wide-scale adoption of digital and technological solutions over the past few years, especially as we attempted to minimise the impact of the COVID-19 pandemic, we have all become more aware of the importance of digital media and the impact that AI and algorithms have on our lives.

In December 2022, the United Kingdom’s National AI Strategy3 rightly identified AI as the ‘fastest growing deep technology in the world, with huge potential to rewrite the rules of entire industries, drive substantial economic growth and transform all areas of life’. Wide-scale changes of this nature, brought about by the development of innovative technologies, are, however, by no means a new experience. We need only look to previous industrial revolutions, where major societal shifts occurred through the implementation of mechanical, electrical, and computing/automation-assisted innovations. Benz began the first commercial production of motor vehicles with an internal combustion engine in 1886. By 1912, the number of vehicles in London exceeded the number of horses. What appears to have caught the world by surprise in the case of AI, with the potential it brings, is the speed and complexity with which it has arrived, forcing us to address many concerns that were previously concepts described in science fiction.

This rapid plunging of the world into a new technological frontier can be likened to the 1970s American television series Soap. At the beginning of each episode, viewers would be given a recap of the previous episode, which would finish by exclaiming: ‘Confused? You won’t be, after this week’s episode.’ Shortly after, the plotline would continue to spiral into new unknowns and even more confusing stories.

This is certainly how it sometimes feels when tackling the narrative around AI, as it swings back and forth between the extremes of a societal good with the potential to solve humanity’s problems, such as climate change, and the opposite view, in which AI is an existential threat to humanity and we should expect an imminent rise of the machines. This is unquestionably made worse by a general lack of public understanding of the technology and an increase in dramatic AI-related media headlines. An early and notable example was the lurid headline that greeted the report of the UK’s House of Lords Select Committee on AI, which considered the economic, ethical, and social implications of advances in artificial intelligence. In response to our 2018 report, AI in the UK: Ready, willing and able?, we were alarmingly warned:

‘Killer Robots could become a reality unless a moral code is created for AI, peers warn.’4

Famously the late Professor Stephen Hawking warned that the creation of powerful artificial intelligence will be ‘either the best, or the worst thing, ever to happen to humanity’.5

AI is not, however, despite what many headlines would lead us to believe, all doom and gloom. In reality, AI presents opportunities worldwide across a variety of sectors, such as healthcare, education, financial services, marketing, retail, agriculture, energy conservation, public services, smart or connected cities, and regulatory technology itself. The predictive, analytical, and problem-solving nature of AI, and in particular generative AI systems, has the potential to drastically improve performance, research outcomes, productivity, and customer experience.

A notable example of this is the marrying of biotechnology and AI-enabled data analytics in tackling the development of bespoke or ‘precision’ medicines. It has opened up the potential to synthesise, understand, and make use of far greater quantities of health information in the pursuit of treating diseases by creating novel therapies through newly identified compounds and precision medicines.

Regardless of which side of the fence one sits on with respect to AI and its potential for benefit or harm, it is increasingly apparent that AI has already – and will to an even greater extent in future – become an integral part of everyday life. It brings many opportunities to overcome the challenges of the past, increasing diversity and access to employment for those who are presently unable to work owing to location or physical disability, and streamlining many administrative processes in business that are both costly and time-consuming.

We already have examples of its use in the detection of financial crimes, including fraudulent behaviour and anti-competitive practices, the delivery of personalised education and tutoring, energy conservation, medical care and treatment, and the delivery of large-scale government and non-governmental initiatives, including the United Nations’ pursuit of its Sustainable Development Goals, such as combating climate change, hunger, and poverty.

It is therefore no surprise that many, including over a thousand technologists from the UK’s Chartered Institute for Information Technology (BCS), asserted in an open letter in 2023 that AI will be a transformative force for good if the right critical decisions about its development and use are made.6

It is equally apparent, however, that AI has the potential to cause a great many harms to individuals, their rights, and society as a whole. This was recognised directly in March 2023 in a letter signed by several thousand technologists from academia, government, and technology companies themselves, which identified ‘profound risks to society and humanity’ posed by AI and systems with human-competitive intelligence and called for a temporary halt on technological developments while risks were assessed.7

Later, in May of the same year, another group of technologists led by the Center for AI Safety – including Dr Geoff Hinton, one of the godfathers of deep neural networks, and several senior leaders behind many of the AI technologies on the market today – asserted in a short, concerned statement that ‘mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war’. Unsurprisingly, given such existential concerns, Dr Hinton resigned from his role at Google to ‘speak freely about the dangers of AI’.8

Many of those seeking to draw attention to the potential risks of AI do not, however, accept that moratoriums or bans should be put in place as if AI – in particular generative AI – were a form of inhumane technology. Instead, many, including a number of prominent tech executives, believe that a controlled approach should be taken that involves comprehensive regulation with a specific international agency created for the oversight and monitoring of AI developments.

Sam Altman, the CEO of OpenAI, for example, in giving evidence to the US Congress, rejected the idea of a temporary moratorium on AI development but asked for AI to be regulated. He cited existential risk and espoused the creation of an international agency, along the lines of the International Atomic Energy Agency (IAEA), to oversee AI development and its risks.9

As a cautious optimist, the author believes that new technology has the potential to offer a great many benefits, including greater productivity and more efficient use of resources. But as highlighted in the title of Stephanie Hare’s book, Technology is Not Neutral,10 we should be clear about the purpose of new technology when we adopt it and about the way in which we intend to adopt it. We need to ask a number of questions: Even if AI can do something, should it? Does it better connect and empower our citizens and improve working life? Does it create a more sustainable society?

A cardinal principle in the development of effective governance of AI should be the requirement that some sort of societal (or organisational) good must come from the implementation of technology. In short, deployment of AI should be guided in such a way that its central purpose is to promote individual or societal benefit, rather than be implemented in a push for automation as an end in itself.

The author’s view is that, as part of the process of adoption, a governance framework should be developed and implemented in a way that encourages transparency and is designed to gain and develop stakeholder trust. The author also believes that we must seek to actively shape AI’s development and utilisation across all stages of its lifecycle – including decommissioning – or risk passively acquiescing to its many predictable consequences.

Even where a clear purpose and benefit are identified, ineffective governance has the potential to cause further concerns. Anyone who has read Weapons of Math Destruction by Cathy O’Neil or Hannah Fry’s Hello World: How to be Human in the Age of the Machine11,12 will be only too aware of the impact that algorithms already have on our lives and of their implications for vulnerable and disadvantaged individuals and communities.

Ensuring freedom from unintended bias in AI systems and avoiding discriminatory decisions and outputs in relation to particular genders, ages, and ethnicities is essential. Failure to do so risks embedding discriminatory practices in the deployment of an algorithm and exacerbating many of the issues it was designed to resolve. For example, concern has arisen in the United States over bias displayed in algorithms responsible for predictive policing and the administration of criminal justice, such as COMPAS,13 a tool used by US courts to assess the likelihood of a defendant becoming a recidivist.

As is explored later, progress (and indeed attitudes towards the desired shape of governance) still varies considerably between governments across the world. The UK government, unlike the EU, is unconvinced of the need to regulate at this juncture and has focused mainly on existential risk, and its proposed approach to regulatory intervention reflects this.

At the time of writing, the EU has elected to take a different route and in December 2023 agreed a position on its proposed Regulation – the ‘AI Act’ – mandating a comprehensive risk-based framework for regulating AI placed on the market.14 The US has, to date, opted for a hybrid of the two approaches: an Executive Order on the Safe, Secure, and Trustworthy use of AI15 that (among other requirements) imposes obligations on existing federal departments, agencies, and regulators to take further action, combined with the bipartisan introduction of several AI-specific bills into Congress, such as the Artificial Intelligence Research, Innovation, and Accountability Act of 2023, which could well create a similar set of risk-based protocols for the regulation of AI systems.

In spite of the many differing approaches, it does appear that an element of convergence is emerging on international goals for AI governance. In October 2023, shortly after a meeting of digital and tech ministers,16 G7 governments issued a statement on what is called ‘the Hiroshima AI process’,17 declaring both:

We, the Leaders of the Group of Seven (G7), stress the innovative opportunities and transformative potential of advanced Artificial Intelligence (AI) systems, in particular, foundation models and generative AI.

And

We also recognize the need to manage risks and to protect individuals, society, and our shared principles including the rule of law and democratic values, keeping humankind at the center.

It proceeded to endorse a set of Hiroshima Process International Guiding Principles and a Code of Conduct for Organizations Developing Advanced AI Systems, and ‘instructed acceleration’ of cooperation with and between the Global Partnership on Artificial Intelligence (GPAI) and the Organisation for Economic Co-operation and Development (OECD),18 with the Guiding Principles aiming to function as a non-exhaustive set of principles that organisations and governments should consider in the promotion of safe, secure, and trustworthy AI. We outline these later.

The following month the UK held an International AI Safety Summit at Bletchley Park, itself closely connected with one of the founders of AI, Alan Turing, which at its conclusion delivered the so-called Bletchley Declaration, signed by 28 countries plus the EU.19

In essence, the Declaration set out an agenda for future cooperation between countries in the international governance of AI, which included identifying and understanding AI safety risks of shared concern and building risk-based policies to ensure international safety in light of these risks.

Whatever one’s views about the effectiveness of statements such as this there is no doubt that increased international collaboration in tackling AI is needed if an effective means of governing the technology at an organisational, national, and international level needs to be developed.

While the phrase ‘existential risk’ is in our view overly dramatic, my motive for writing this book is nevertheless a shared sense of urgency. AI – and indeed technology as a whole – brings with it challenges and risks that have the potential to impact the rights and safety of individuals and 16organisations across the world. Failure to recognise them poses a threat to the retention of public trust in AI and will undermine much of the work of innovators in demonstrating the many potential benefits of new developments in technology.

As governments and legislators face the challenge of regulating new and developing technologies, some comfort can be taken from the fact that myths, parables, and fiction have prepared us for the impact of the interaction of humans and new technology. We have the example of King Midas of the ancient Greek myth, who, much like a naive programmer of today, was too literal in his request, so that everything he touched, his daughter included, turned to gold.

Perhaps we have been prepared by another ancient story, that of Talos, the bronze humanoid colossus reputedly forged by Hephaestus, the god of invention and blacksmithing, to protect the island of Crete against invaders. He was encountered by Jason and the Argonauts on their return from stealing the Golden Fleece.20

The story is a classic demonstration that what technology produces, even in complete compliance with its commands, may be completely contrary to our actual intentions. More recently, in the 20th century, Isaac Asimov’s I, Robot stories21 showed us that even when we think we have prepared for multiple outcomes and set rules, technology can deliver unintended consequences.

Despite these narratives, it remains difficult for us to easily frame or fully understand the extent of the threats and opportunities presented by new technologies, such as foundation AI models, general-purpose AI, and biometric data recognition, particularly where they differ from their less sophisticated predecessors. For example, should we make a distinction between AI used as an initial customer service chatbot and its use in a complex large language model or generative AI program such as those powered by GPT models? If so, how do we actually go about doing that?

The challenges posed by the sheer volume of emerging technologies and novel applications of AI are only matched by their complexity in design and function. Where once we dealt with the regulation of rudimentary computational devices and their software, we now have to consider the implications of quantum computers, able to vastly outperform their predecessors and perform previously inconceivable tasks. Although we have not yet reached the stage of Artificial General Intelligence (AGI), the potential and perceived creativity of AI continues to grow.

In October 2022, for example, legislators – and the rest of the world – watched as, while sitting beside its human creator, the robot artist Ai-Da appeared before the UK House of Lords to give evidence on the subject of AI, robotics, and the creative arts.22 More recently, the large language models GPT-4, Claude, and Bard have demonstrated their abilities as authors. Multimodal AI systems that combine a range of capabilities – language, image recognition, and data analysis – and that give every appearance of AGI are in active development.

The emergence of these technologies and intelligent systems, and the challenges they bring, means that, much like Sisyphus eternally pushing his unrelenting boulder uphill, legislators and regulators have the unenviable job of responding to the continuous pressure of rapidly evolving technology.

Owing to this newfound complexity and superfast evolution, legislators and regulators are finding it increasingly difficult to catch up with defining the AI technologies they seek to regulate – as well as responding to the risks they pose and the opportunities they offer to society. Even more difficult, once they have these parameters pinned down, is the decision as to what extent and through what means we actually go about regulation.

Technology is now at the point where a ‘deploy-and-forget’ approach is no longer viable. As it learns in operation and becomes increasingly autonomous, it opens its own technological Pandora’s box. This is well illustrated by Brian Christian in The Alignment Problem and Professor Stuart Russell in Human Compatible.23 Each, in his own expert way, warns of the risks of treating AI in the same way as other forms of software and computer programs. Professor Russell, in particular, prescribes building uncertainty into the delivery of objectives of AI systems, so that having a human in the loop is not just desirable but necessary as part of effective governance.

Both advocate a form of governance and regulation of AI systems that ensures that potential risks to humanity are mitigated by embedding specific standards which mean that the AI needs meaningful human input to fully define and accomplish its objectives.

In order to build the foundation for a successful method of regulation, many questions need to be answered along the way, including:

Is substitution or augmentation of human potential by machines always ethically or societally appropriate? Should there be an obligation to reserve certain roles specifically for humans or keep a human in the loop, even where technology can provide faster, higher-quality, and more cost-efficient results?
How should we (indeed, can we) regulate AI systems that actively impersonate human characteristics and replicate individual human identities?
What is the most appropriate way of regulating a moving target like AI? Are there risks that are uniform throughout the various classes of AI and technology that can be anticipated, and can they be mitigated at their source by, for instance, common standards of risk assessment?
Do international standards offer a solution for areas where domestic regulation falls short? Is reaching agreement on common standards practically possible?

It is the intention of this book to seek to address these questions, and others, in the chapters that follow.

It should be stated at the outset that talk of innovation-friendly regulation is not always helpful and often has the potential to direct regulators down a path that, while well intended, does not achieve the kind of effective governance that they are seeking to implement. Effective and well-tailored regulation, in our view, is about assessing and calibrating risk and providing the necessary guardrails for high-impact outcomes. Innovation is only one of these many outcomes. It can be desirable or undesirable, is not always an unqualified benefit, and certainly should not be the only focus for the legislator or regulator.

Our central theme throughout is that we must find ways, ahead of the development of AGI, of ensuring that AI in its current and future form, for the sake of the future of humanity, is our servant not our master. That, here and now, is the challenging and urgent task for policymakers that forms the essence of this book.

The author intends this book to have relevance and applicability across many jurisdictions. While its focus lies primarily within the United Kingdom, insights and developments are drawn from other jurisdictions, including the United States, the European Union, and beyond.

It initially sets out the principal risks encountered during the implementation of AI and AI-powered technology. The remaining chapters set out the various challenges that arise from interaction between human and machine in different common contexts, the ways in which they can be tackled, and the approaches currently taken by the jurisdictions that are, at the time of writing, leading the charge in the governance and regulation of AI.

The author has some faith that by regulating for the risks, developing the necessary skills, and cooperating internationally we can succeed in harnessing these technologies for optimum human benefit, but it is by no means a foregone conclusion.


2 AI Risks: What Are They and How Do We Assess Them?

A common goal in the early development of AI governance and regulatory frameworks has been the encouragement of innovation while identifying and mitigating areas of risk.

This chapter seeks to set the scene for this goal by identifying AI risks, how they can be classified and, subsequently, how they can appropriately be addressed through well-considered governance frameworks.

Risk identification and assessment

The manner in which risks are identified and planned for is, it seems, heavily influenced by the political benefits that governments and organisations seek to derive from addressing them. Governments, for example, are likely to be restricted by the timing of election cycles and therefore seek to address risks that can be seen within a typical election cycle – often four or five years. In democratic countries, because they typically have short electoral cycles, there are both cultural and institutional flaws in planning, assessment, mitigation, and proactive prevention. The time is never ripe for expenditure on risk prevention and mitigation.

Those with longer tenure, or a longer outlook, such as those running regulators or agencies or sitting on the boards of organisations, may have longer timeframes to work with and may therefore look further ahead in their approach to identifying and planning for risks.

Institutions seeking to prepare for those risks that are identified (however distant or unlikely) must also consider the overall cost–benefit of acting on potential risks. For example, a government in a country where water is a scarce resource is far more likely to benefit from early resource planning and the development of regulatory powers that allow for tighter controls on the use of water than one governing a country with a much lower chance of suffering from droughts and extreme weather events. In similar fashion, it is much more prudent and beneficial for a government with advanced deployment of technology to implement measures to ensure its effective and safe use than for one with less prolific use of technology.

The governance of technology presents particular challenges in terms of the approach to identification of risks and subsequent planning. Professor Lord Martin Rees, Co-Founder of the Cambridge Centre for the Study of Existential Risk, identifies, in his book On the Future: Prospects for Humanity, the balancing act that many political institutions and organisations are required to perform with this type of risk identification and planning.

Politicians have incentives to prepare for localized floods, terrorist acts, and other hazards that are more likely to materialise within a given political cycle. But they have less incentive to prepare for events that are unfamiliar and global—even for high consequence/low probability events that are so devastating that one occurrence is too many.24

Whatever justification is given for shorter-term planning across recent government interventions, it is clear that the approach of governments to the identification of and planning for critical risks has rarely prioritised problems beyond high-visibility short-term risks. In the case of the US, priorities are often associated with what can be easily shown within a presidential or congressional term, which can then be used in a campaign for re-election or to directly attack the policies of an opponent.

Similar short-term risk identification and planning is present in the UK. The 2021 House of Lords Special Inquiry into Risk Assessment and Risk Planning25 concluded that the UK national risk assessment system is heavily deficient in assessing and planning for chronic or long-term risks, and has a bias against low-likelihood, high-impact risks. Perhaps more concerning was the UK government’s inability to address harms caused by rapidly changing and developing technologies: it was discovered that even medium-term risks were often ill accounted for, without even considering more generational changes such as climate change, pandemics, and large-scale changes to the economy as a result of automation. This view was bluntly summarised by Sir Patrick Vallance, the UK government’s former Chief Scientific Adviser, in his evidence to the Committee: ‘If you take a two-year outlook, you get the wrong answer.’26

Paradoxically, at the opposite end of the spectrum, there have been advocates justifying short-term thinking. During oral evidence to the Select Committee, the then Director of the Cabinet Office’s Civil Contingencies Secretariat (CCS) – which supports the Civil Contingencies Committee, known as COBRA – claimed that:

the shorter the timeframe, the more nuanced a story we can construct about the risk. On longer timeframes, we have a greater degree of uncertainty about the direction the risk takes. This is an important factor, because ultimately the purpose of this is not to make the best possible articulation of what the risk might be; the purpose is to aid planning. Therefore, that greater specificity has benefits for organisations as they are choosing what to focus their planning on.27

The shortcomings of this approach, however, became apparent in the international response by governments to the COVID-19 pandemic, in which failure to adequately plan for a major pandemic led to a scrambled, ill-fated attempt to prevent the spread of the virus. This was well illustrated in the Institute for Government’s 2022 report Managing Extreme Risks,28 which dispelled any idea that the UK government was well prepared and able to identify and mitigate existential risks beyond those that could be actively addressed in the short term.

There is no doubt, however, that the risks we face are changing, and new risks continue to emerge in ways that were previously relegated to the realms of science fiction. Technological advances have raised the threat posed by the malicious deployment of technologies that could be used for good or ill, and the possibility that control of essential elements of infrastructure could be compromised by malicious action.

At the beginning of his book on the government approach to risk planning, Apocalypse How?,29 former UK Cabinet Minister Sir Oliver Letwin posits a national emergency in which the internet goes down, electricity supply fails across the country without any contingency put in place, and there is no analogue telephone backup available. The failure of critical infrastructure is not a fanciful scenario, as the experience of Ukraine in 2023 demonstrates: it was alleged that satellite communications infrastructure was taken offline, rendering drones deployed in the course of battle, which relied on it for navigation, completely ineffective.30

While it is perhaps a less likely consequence of the deployment of AI, failure to identify risks and put appropriate contingency measures in place at an early stage could undoubtedly lead to a major future societal crisis.

Former political journalist, now author, Robert Harris, giving evidence during the UK House of Lords Inquiry into Risk Assessment and Risk Planning, asserted: ‘Sophisticated societies do collapse. Every civilisation collapses. You cannot think of one that did not face some terrible crisis, partly because they became so sophisticated.’31

In response to many of the lessons learned over the past few years, however, governments have been taking positive, albeit slow, steps towards better planning for risks, technological or otherwise. The UK, for example, has since amended its policy approach to risk to account for risks that fall within a five-year timeframe. It remains unclear, though, how these medium-term approaches can address the risks associated with technology and the ongoing development of AI.

Chronic risks such as large-scale unemployment, chronologically unpredictable risks such as the impacts of artificial general intelligence, low-likelihood risks such as a complete shutdown of digital infrastructure, and the most significant (albeit minimally likely) risks such as direct conflict between humans and machines need to be accompanied by a long-term assessment, which should, in the author’s view, be of the order of at least 15 years. After all, that is the minimum period expected even for local development plans.32 There is a danger that the current timeframe adopted by most Western democracies for assessing likelihood and impact will lead to misplaced confidence, particularly in terms of the risks posed by new technology – similar to the confidence these democracies had going into the initial throes of the COVID-19 pandemic.

When it comes to the identification of risks posed by AI and related technologies, several options are available to those charged with the task. There are, for example, many attractions to the classification system of Professor Ortwin Renn of Stuttgart University,33 through which risks are classified based on how they need to be managed. He uses a range of indicators to provide a more in-depth representation of the risk, including: extent of damage, probability of occurrence, uncertainty, ubiquity, persistence, reversibility, delayed effect between event and impact, what he calls ‘violation of equity’, and potential for social disorder.

Professor Renn’s system suggests that these risks should be treated differently, both in evaluation of their impact and in management strategies. He distils these criteria into six risk classes, and assigns them names from Greek mythology:

Damocles’ sword: high-impact, low-probability risks such as technological risks from nuclear energy and large-scale chemical facilities.Cyclops: high-impact risks with significant uncertainty in the likelihood assessment, natural events, such as floods and earthquakes.Pythia: risks where the extent of impact, the size of impact, and the likelihood are highly uncertain, e.g. human interventions in ecosystems.Pandora’s box: risks where there is uncertainty in both impact and likelihood, and the damage would be irreversible, persistent, and wide-ranging, e.g. the use of organic pollutants and the impact of some AI systems.Cassandra: risks where the likelihood and impact are both high and relatively well-known but there is delay between the triggering event and the occurrence of damage, leading to low societal concern, e.g. anthropogenic climate change and, it could be argued, artificial general intelligence.Medusa: low probability and low damage risks where there is a large gap between public risk perception and expert risk analysis, e.g. mobile phone usage and electromagnetic fields. 25

Whatever system of identification and assessment is employed, it must clearly be implemented in a way that allows governments and organisations to account not only for risks that are easy to resolve within a short period of time, but also for those that may pose greater, albeit more remote, risks to society.

AI and its risks

In order to adequately plan and prepare for the risks of AI, it is necessary to understand exactly what is meant by the term. On this there has been no consensus. Definitions range from sets of techniques aimed at approximating aspects of human or animal cognition using machines (Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap34) to the ability of computer systems to solve problems and to perform tasks that typically require human intelligence (The Final Report of the National Security Commission on AI in the US35).

A common definition, which has been recommended by the International Organization for Standardization (ISO) and others, is:

[An i]nterdisciplinary field, usually regarded as a branch of computer science, dealing with models and systems for the performance of functions generally associated with human intelligence, such as reasoning and learning.36

Then we have the practical definition of AI developed by Brad Smith and Carol Ann Browne in their book Tools and Weapons: The Promise and the Peril of the Digital Age: ‘software that learns from experience’37 – a definition wide enough to capture current incarnations of the technology, while accounting for future developments that cannot currently be foreseen.

While helpful in understanding the concept and practice behind AI, the proliferation of definitions has undoubtedly created some confusion. Recent government proposals, including the European Union’s (EU) AI regulatory framework and the US Executive Order on Safe, Secure, and Trustworthy AI, have, however, now adopted the OECD’s revised definition from November 2023, which provides some international consistency:

a machine-based system that, for explicit or implicit objectives, infers,