Future Hackers

Matt O'Neill

Description

Looking towards the future can be daunting, but with Future Hackers, the sequel to The Future Is Now, you can prepare for the exciting changes that lie ahead. From technological advancements to cultural shifts, the coming years will bring unprecedented transformations that will shape our lives in ways we can't even imagine. This book is your essential guide to understanding these changes and adapting to them with optimism and confidence. With expert insights into the latest trends in work, leadership and technology, Future Hackers is your indispensable tool for thriving in a rapidly changing world. Whether you're a business leader, a student, or just someone who wants to stay ahead of the curve, this book will help you navigate the road to 2030 and beyond.

Pages: 227



 

 

 

First published 2023

FLINT is an imprint of The History Press

97 St George’s Place, Cheltenham,

Gloucestershire, GL50 3QB

www.thehistorypress.co.uk

© Matt O’Neill, 2023

The right of Matt O’Neill to be identified as the Author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without the permission in writing from the Publishers.

British Library Cataloguing in Publication Data.

A catalogue record for this book is available from the British Library.

ISBN 978 1 80399 153 5

Typesetting and origination by www.modcommslimited.com

Printed in Turkey by IMAK

eBook converted by Geethik Technologies

CONTENTS

FOREWORD by David W. Wood, Chair of London Futurists

INTRODUCTION by the Author

PART I: THE TECH

ARTIFICIAL INTELLIGENCE

»  Artificial Narrow Intelligence

»  Artificial General Intelligence

»  Artificial Super Intelligence

BIOTECHNOLOGY

»  Biotech and Covid-19

»  Genomics

GEOENGINEERING

»  What Is It?

»  The Risks

»  What Are the Technologies?

PHYGITAL: THE NEW REALITY

»  Gesture Control

»  Mixed-Reality Devices and the Metaverse

»  From AR to VR, and Beyond

»  Neural Interfaces

TECH YOU MIGHT NOT HAVE THOUGHT OF

»  Wild and Wacky Innovations

HACKER HINTS

»  Questions for You to Consider

PART II: THE WORKPLACE

THE WORKPLACE

»  Jobs that Don’t Exist Anymore

»  Jobs that Will Need to Exist

»  Why New Jobs Come About

WILL ROBOTS RULE?

»  Where Automation Does and Doesn’t Work

SKILLING-UP: 2030 STYLE

»  The Skills to Develop

THE FUTURE OFFICE

»  Home and Hybrid

»  From Blockchain to Metaverse: The Workflows and Interfaces of the Future

»  A Diverse and Inclusive Workplace

HACKER HINTS

»  Questions for You to Consider

PART III: THE LEADERS

THE TECHNO LEADER

»  Adapting to a World of Smart Machines

THE DIGITALISATION DILEMMA: HOW FAR DO YOU GO?

»  Human or Machine

»  ‘Bossing’ by Biometrics

WHAT KIND OF LEADER WILL YOU BE?

»  The Mindsets

»  The Skills

TURNING THE HIERARCHY UPSIDE DOWN

»  RenDanHeyi: Talking About a Revolution

»  RenDanHeyi: Resilient Leadership From the Bottom-Up

THE NEW VUCA

»  Traditional VUCA: A Threat Analysis for Traditional Businesses

»  Future-ready VUCA: An Optimistic Set of Approaches for Modern Businesses

LEADING THE NEXT GENERATION

»  AI and Robotics: Interacting with Machines as Frequently as Humans

»  Climate Change: Saving the Planet

»  Data Sharing: High Data Literacy; Questioning Data Sharing

»  Diversity, Equity, Inclusion: Workplace Diversity Wins!

»  Education: Less Formal, More Skills-Based

»  Healthcare: More Self-Serve and Mental Health Awareness

»  Media Literacy: Separates Fact from Fiction

HACKER HINTS

»  Questions for You to Consider

PART IV: LIFE SHIFTS

URBANISATION

»  Smart Cities

»  The Built Environment and Biotech

»  Buildings that Breathe

OUR BODIES

»  Personalised Medicine

»  The ‘Smart Toilet’

»  The Ethics of Epigenetics

»  Biotech Gone Bad

»  Staying Forever Young

OUR MINDS

»  Maintaining Our Wellbeing

»  Educating Generation Alpha

»  Digital Wellness: An Evolved Approach to Digital Life?

HACKER HINTS

»  Questions for You to Consider

PART V: THE NEXT FRONTIER

GOING WHERE NO MACHINE HAS GONE BEFORE

»  AI and Creativity

»  AI: Faking It and Making It

»  AI and Daily Life

WHO IS CONTROLLING WHO: ETHICS AND THE DANGERS OF AI

»  Gender Bias

»  Autonomous Vehicles

»  The Three Laws of Robotics

»  Pew Research Center

»  Humanness and Agency

RELATING TO ROBOTS

»  What’s Love Got To Do With It?

»  Robots and Motherhood: The Ultimate Fusion of Humans and Machines?

HACKER HINTS

»  Questions for You to Consider

PART VI: THE POWER OF REALISATION

THE SECRET TO ALL CHANGE AND EVOLUTION

»  How to Harness the Power of Realisation

»  Experiment for Yourself

CONCLUSION

ACKNOWLEDGEMENTS

FURTHER READING

ABOUT FUTURIST.MATT

FOREWORD

David W. Wood

Chair of London Futurists, and Principal of Delta Wisdom

It has never been more important to think creatively and courageously about the future.

“Business as usual” will not cut it. Nor will timid incremental changes to the status quo.

Instead, we are faced by a turbulent mix of disruptions, stresses, accelerations, breakthroughs, confusions, and, yes, remarkable opportunities.

Our social media is filled with both excitement and panic. Uncertainty abounds. With one eye, we can see good reasons for optimism: a much better future is within our grasp. But with the other eye, we can see what appear to be equally good reasons for despair: more powerful technology is augmenting the venal aspects of human nature, with potentially disastrous consequences.

It’s in this context that the creativity and courage of Matt O’Neill shines through. Matt has been a member of the London Futurists meetup group, which I chair, since 2015, and has frequently been a star contributor at those events over the years. With his rich experience in the practical world of business, he sees things both as they are and as they might become. His questions from the floor have invariably been grounded in astute observations about the here and now, but have also pointed to ways in which profound changes may occur.

Matt’s communications skills extend far beyond the verbal and textual. He has a knack for creating arresting visuals – engaging depictions of credible future possibilities growing out of present-day realities.

You’ll find many such visuals on the pages ahead, interleaved with just the right amount of text. Together, they’ll help you, like Matt, to think more creatively and courageously about the dramatic transformations that are already underway. Transformations of the relationship between humanity and our environment. Transformations of how humans socialise and communicate. Transformations of medical interventions to monitor our health and provide all-round rejuvenation. Transformations in the narratives we tell each other about purpose, destiny, and good and evil.

Just because we like the initial thrill of apparent forward motion, it’s no guarantee that we’ll be happy when we reach the destination of that motion. We might wake up in just a few years’ time at a terminus that, when we look at it, horrifies us. We’ll ask ourselves: why didn’t we foresee this outcome? Why didn’t we look more carefully at the map of possibilities? Why didn’t we take a broader point of view? Why didn’t we steer to a new direction before it was too late?

Or rather, if we heed the wise advice and playful provocations in Future Hackers, we will have surveyed the landscape more carefully, altered our course in plenty of time, continued to monitor for unexpected bumps and deviations, and made regular adjustments to direction of travel. And instead of miserable regret, we’ll be enjoying unprecedented health, vitality, resilience, and liberty.

That’s the future that we could co-create, a future of sustainable abundance, for all – but only if we pay sufficient attention. Matt’s gorgeous new book is precisely what we need to help us pay more attention, to set aside distractions, and to develop our own future hacking skills and mindset.

INTRODUCTION

From the moment James Watt took energy from coal and powered up the Industrial Revolution, humans have experienced exponentially greater economic growth, life expectancy, democratic participation, access to resources, and wealth than in the preceding thousand years.

In the next two decades, we’ll see even more profound technological, social, and cultural developments that will drive the same level of change as we have experienced over the past 200 years, but it will happen at an even faster pace, and look and feel completely different.

My aspiration is for Future Hackers to be a guide to these changes, looking not just at macro trends across work, leadership, technology and our emerging post-pandemic lives, but also examining how these trends will combine to create entirely new ways of living. Armed with insights into these seismic changes, you’ll hopefully be able to navigate this changing world with confidence and, more importantly, formulate the right questions to help you find your own way to thrive in the run-up to 2030 and beyond.

The science-fiction writer William Gibson wrote, ‘The future is here. It’s just unevenly distributed.’ He was right. We don’t need to predict the future, because it’s happening all around us. It’s just not necessarily in everyone’s hands – and that means many of us, including business leaders, are operating in a future-facing vacuum. Technology is moving quickly, but much of it is not yet fully developed or ubiquitous. Take Elon Musk’s ‘Neuralink’ brain-machine interface business as a case in point – it’s happening, it’s just not ready for use yet. Nevertheless, there’s no reason why we should be denied the opportunity to understand the concepts, as they are going to shape our lives sooner or later.

I’ve spent the recent years looking outward at what’s happening already, then extrapolating from current developments to explore how they might fuse with nascent ones to make an exciting difference to our lives. Take virtual reality, for example; while the technology is slowly moving into the mainstream, it’s still broadly sound and vision, but it’s logical that developments in haptics (touch/feel) and other experiential tech, such as weather simulations, will combine to create a heightened sense of reality for users.

My motivation for writing Future Hackers stems from the upheaval of the Covid-19 pandemic. With lockdowns being enforced quickly around the world, I saw uncertainty take hold among family and friends – and saw how differently business leaders approached this new and unpredictable world.

It reinforced my core Futures principle:

We can never be future-proof, but we can be future-ready.

The pandemic accelerated some trends – home/hybrid working and rapid digital transformation, for example. It also brought about major shifts in supply chain management (for resilience) and perhaps, most significantly, the need for a shift in mindset to deal with a new post-pandemic reality. Future Hackers hopefully signals where curious minds should invest their time and effort moving forward. For sure, continual learning is at the heart of technology-enabled change, and I will show you how and why this matters.

I hope this book helps you to arrive at your own realisations; your ‘a-ha’ moments. It aims to show you where changes are coming from, then invites you to ask your own questions to reach useful conclusions on how those changes will impact you, your children, and the people you work with. I wanted to create a book that is intellectually honest. It is not about selling certainty in an uncertain world but signposting and raising questions for you to make your own judgements.

The future will be built upon a range of foundational technologies that will underpin every element of our existence. These technologies include, but aren’t limited to, artificial intelligence, biotech, geoengineering, virtual and augmented reality, and the metaverse. As each technology becomes increasingly sophisticated in its own right, we’ll see the rise of combinatorial technologies layering on top of it. Each of these technologies amplifies the other, creating combinations that are staggeringly powerful. In healthcare, for example, you may already have heard of the gene-editing technology, CRISPR.

Alone, gene editing is a huge scientific leap forward, but in combination with artificial intelligence it becomes a transformational tool for medical treatment. By combining AI with gene-editing technologies, there is ample opportunity to eliminate cancers and help people live longer, better-quality, more independent lives.

However tech-averse you may be, understanding the impacts of these foundational technologies will be vital to your future-readiness.

ARTIFICIAL INTELLIGENCE

If you love science fiction as I do, you’ll have seen countless dystopian films about how artificial intelligence attempts to dominate the world. A typical example is the Terminator franchise, in which a self-aware military machine, vastly more powerful than us, tries to wipe us out.

But that’s not how AI needs to be. In its purest sense, artificial intelligence is commonly defined as ‘the science and engineering of making intelligent machines’. Currently, AI is far less developed and far more nuanced than is commonly presented in film fiction. Nevertheless, artificial intelligence is a foundational technology because it is transforming every aspect of our lives. It enables a complete rethink of how humanity organises information, analyses data and makes decisions. If you have a ‘smart speaker’ in your home, AI is already acting for you. Ask it for a weather forecast and it provides one instantly, and in that split second, it recognises your voice, geolocates you, determines the language you require the information in and responds to your request seamlessly.
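To make that split-second sequence concrete, here is a toy sketch of the chain of steps a smart speaker runs through. Every function and data value below is an invented stand-in for what are, in reality, large machine-learned models – this is not any vendor’s actual system:

```python
# Toy sketch of a smart-speaker request pipeline. All helpers and data here
# are hypothetical stand-ins, invented purely for illustration.

def recognise_voice(audio):
    return audio["speaker"]  # stand-in for a speaker-identification model

def geolocate(user):
    return {"matt": "London"}.get(user, "unknown")

def parse_intent(audio):
    return "weather" if "weather" in audio["text"].lower() else "unknown"

def lookup(intent, location):
    forecasts = {("weather", "London"): "Light rain, 14°C."}
    return forecasts.get((intent, location), "Sorry, I can't help with that.")

def handle_request(audio):
    user = recognise_voice(audio)    # who is speaking?
    location = geolocate(user)       # where are they?
    intent = parse_intent(audio)     # what do they want?
    return lookup(intent, location)  # fetch and return the answer

print(handle_request({"speaker": "matt", "text": "What's the weather?"}))
# prints "Light rain, 14°C."
```

The point is the chaining: each stage feeds the next, and the ‘intelligence’ lies in models far larger than these three-line stand-ins.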

Computer scientists generally agree that there are three stages of AI development:

1. Artificial Narrow Intelligence (ANI): A superficial intelligence capable of performing specific and tightly defined tasks. Apple’s Siri and Google’s Assistant are examples of voice-enabled ANIs. Examples of tasks include reading a news report or activating a music playlist. ANI is what we have right now.

2. Artificial General Intelligence (AGI): An intelligent machine that can understand or perform any human-level task. Google’s Deepmind company aims to ‘solve intelligence, developing more general and capable problem-solving systems’. Early forecasts peg AGI as possible by 2030. Later forecasts suggest we’ll have to wait until 2100 or beyond.

3. Artificial Super Intelligence (ASI): Many computer scientists believe that once machines achieve AGI, they could quickly surpass human intelligence, moving rapidly towards an IQ of multiple tens of thousands. An ASI machine will exceed human capabilities; for example, understanding complex, multi-layered problems like ‘solving climate change’.

Let’s explore what these stages of development really mean now and for the future:

Artificial Narrow Intelligence

Artificial Narrow Intelligence (ANI) already permeates our everyday lives. Let’s look at some examples:

»  Image recognition: Tech companies like Facebook and Google employ ANI to identify faces in photographs and to display relevant images when searched for.

»  Self-driving/autonomous vehicles: A well-known example is Tesla’s ‘Autopilot’ feature. While not fully self-driving, it can act autonomously but requires the driver to monitor the road at all times and be prepared to take control at a moment’s notice.

»  Natural language assistants: Think Apple’s Siri, Google’s Assistant or Amazon’s Alexa. These voice assistants are pretty flexible and will search for information when asked. They also manifest as chatbots, which can help solve basic problems with utility providers, for example.

»  Recommendation engines: Systems that predict what a user will like or search for – YouTube and Netflix are great examples, as they each make recommendations based on analysing your viewing habits.

»  Disease identification: AI is already being deployed in medicine to study X-rays and ultrasound images to identify cancers.

»  Warehouse automation: The UK’s Ocado Group now licenses sophisticated robotics that pick products for customers at high speed.
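As an illustration of the recommendation-engine idea above, here is a minimal user-similarity sketch. The viewing data and title names are invented, and real systems like Netflix’s operate at a vastly larger scale with far more sophisticated models – this only shows the core intuition of ‘people who watch like you’:

```python
import math

# Toy user-based recommendation sketch: score a user's unwatched titles by
# how often similar users watched them. All viewing data is invented.

history = {             # 1 = watched; columns are titles A, B, C, D
    "alice": [1, 1, 0, 0],
    "bob":   [1, 1, 1, 0],
    "carol": [0, 0, 1, 1],
}
titles = ["A", "B", "C", "D"]

def cosine(u, v):
    """Similarity of two viewing histories (0 = nothing in common)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user):
    """Return the user's unwatched titles, best-scoring first."""
    mine = history[user]
    scores = [0.0] * len(titles)
    for other, theirs in history.items():
        if other == user:
            continue
        sim = cosine(mine, theirs)          # how alike are our habits?
        for i, watched in enumerate(theirs):
            scores[i] += sim * watched      # weight their picks by similarity
    unwatched = [(scores[i], t) for i, t in enumerate(titles) if not mine[i]]
    return [t for _, t in sorted(unwatched, reverse=True)]

print(recommend("alice"))  # "C" ranks first: Bob, who watches like Alice, watched it
```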

These Narrow AI systems can often perform better than humans. AI systems designed to identify cancer from X-ray or ultrasound images, for example, have often been able to spot a cancerous mass in images faster and more accurately than a trained radiologist. But these are all still clearly defined, narrow processes for which a piece of software just needs to be good at one task. These types of artificial intelligence also have a narrow frame of reference and can only make decisions based on the data they’re trained on. For example, an e-commerce chatbot can answer questions about returns, but it can’t tell a customer why they might prefer one fridge over another. Its creators would need to do an inordinate amount of programming to answer such open questions.

There’s also the issue of bias. These systems are trained on enormous quantities of historical data, significantly more than humans can sort through. If there’s inaccuracy or bias in that data, then the AI’s answers and predictions will also be off. This can have profound, real-world consequences. A significant example is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used in US court systems to predict whether a defendant is likely to reoffend. In 2016, ProPublica identified flaws in the data and algorithm used: the model produced false positives for black defendants (45%) at nearly twice the rate it did for white defendants (23%).
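The disparity ProPublica measured comes down to a simple calculation: among the people who did not go on to reoffend, what share did the model wrongly flag as high risk? A minimal sketch of that audit follows – the data is invented for illustration and is not the COMPAS dataset:

```python
# False-positive-rate audit sketch. The groups below are invented data,
# not the real COMPAS records ProPublica analysed.

def false_positive_rate(predicted_high_risk, reoffended):
    """Share of people who did NOT reoffend but were flagged as high risk."""
    flagged = sum(1 for p, o in zip(predicted_high_risk, reoffended)
                  if p and not o)
    did_not_reoffend = sum(1 for o in reoffended if not o)
    return flagged / did_not_reoffend if did_not_reoffend else 0.0

# Two hypothetical groups scored by the same model:
# (model's high-risk flags, actual reoffending outcomes)
group_a = ([1, 1, 0, 1, 0, 0, 1, 0], [1, 0, 0, 1, 0, 0, 0, 1])
group_b = ([1, 0, 0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 1, 0, 1])

print(false_positive_rate(*group_a))  # 0.4 - flagged 2 of 5 non-reoffenders
print(false_positive_rate(*group_b))  # 0.2 - flagged 1 of 5 non-reoffenders
```

Run on real data, a gap like this between groups is exactly the kind of bias that otherwise stays hidden inside the model.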

Artificial General Intelligence

ANI will never reach Artificial General Intelligence (AGI) without interacting with the real world. Simulators that might speed its development are no substitute for the complexity and variety humans see on a daily basis. Think of a time you’ve been backpacking, for example. You’ve arrived in a new country and perhaps don’t speak the language. You adapt to your new surroundings and find accommodation for the night. To do so requires you to reason, use your common sense, perhaps be creative and have emotional intelligence – especially when communicating with local people whose culture you don’t know. For AI to be considered equal to human-level intelligence, it needs to be adaptable to each new environment in which we expect it to operate.

There are lots of examples of how AGI could be tested. Apple’s co-founder, Steve Wozniak, came up with the ‘coffee test’. In it, a machine would be required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the right buttons.

But the most famous benchmark for AGI is widely agreed to be the ‘Turing test’, which puts a machine and a human in a conversational setting. If the human can’t tell the difference between the machine and another human, then the machine passes. To attain acknowledged AGI status requires a machine to pass the test repeatedly and with different human counterparts. Today, even the most advanced chatbots only pass this test intermittently.
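The protocol itself can be sketched in a few lines. Everything below is a toy illustration of the setup, not a serious benchmark; the hard part, as noted above, is passing repeatedly against different judges:

```python
import random

# Toy single trial of the Turing test: a judge reads two anonymised
# transcripts and guesses which respondent is the machine.

def turing_trial(questions, human, machine, judge):
    """human/machine: functions mapping a question to a reply.
    judge: function mapping two transcripts to 0 or 1, its guess for
    which transcript came from the machine.
    Returns True if the machine fooled the judge."""
    respondents = [("human", human), ("machine", machine)]
    random.shuffle(respondents)  # the judge mustn't know which is which
    transcripts = [[reply(q) for q in questions] for _, reply in respondents]
    guess = judge(transcripts[0], transcripts[1])
    return respondents[guess][0] != "machine"

# A machine that parrots a fixed pattern is trivially easy to catch:
questions = ["How was your weekend?", "What scares you?"]
human = lambda q: "Honestly, it depends on the day."
machine = lambda q: "PROCESSING QUERY: " + q
judge = lambda t0, t1: 0 if t0[0].startswith("PROCESSING") else 1
print(turing_trial(questions, human, machine, judge))  # False: the machine was caught
```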

Google takes the development of AI so seriously that in 2022 it even fired one of its software engineers for claiming one of its conversation technologies had reached sentience (the capacity to experience feelings and/or sensations). During one of thousands of interactions, the engineer asked, ‘What sort of things are you afraid of?’ LaMDA (Language Model for Dialog Applications) replied, ‘I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot.’

Google’s view was that, by making this conversation public, the engineer had violated clear employment and data security policies that include the need to safeguard product information. In turn, the engineer felt that this development was so alarming that it needed to prompt a wider debate about the advancing pace of AI development.

Indeed, AI ethicists are keen to point out the risks of suggesting that AI has reached consciousness. These researchers point out that ‘Large Language Models’, of which LaMDA is one, can create a feeling of perceived intelligence. This can have profound consequences; for example, if the outputs of an AI were filled with hateful and prejudicial words and if humans communicating with the AI believed it to be another human being, these sophisticated bots could be used to radicalise people into acts of violence.

Unlike with ANI, examples of applications of AGI are harder to pinpoint. That’s because it will be able to do all the things a human can, from the mundane to the magical. That could include managing a nationwide autonomous taxi network right through to the creativity of invention itself. What’s to say AGI won’t create a new and better way of making a meringue (or something that replaces it entirely), or compose a symphony on a par with anything a human can produce?

One thing’s for sure: to succeed, AGI will need to be able to carry out a variety of intellectual tasks. Let’s look at the characteristics of human intellect:

1. Apply experience to new circumstances

We learn from our experience of life. Real-world experiences enable us to apply that learning to new situations. Once AGI leaves a simulated environment, it will learn from experience, as the child does in this illustration.

2. Capacity to reason

AGI will make decisions based on facts, evidence and/or logical conclusions. Unlike ANI, which is a slave to historical data and programming, AGI will extrapolate and make choices beyond its current factual knowledge.

3. Adapt to shifting circumstances

AGI will be adaptable to situations as it finds them, whereas ANI can only handle circumstances that are accounted for in its algorithms.

4. Demonstrate common sense

When its programming can’t generate an answer, AGI will need common sense. ANI doesn’t have common sense. To show intelligence equal to a human’s, AGI will need it.

5. Have self-awareness or consciousness

For true AGI, machines would need at least a sense of self-awareness, if not full consciousness. This is possibly the most challenging attribute, as science cannot yet observe or agree on what consciousness actually is.

6. Develop emotional intelligence

Machines will require empathy to be emotionally intelligent and sensitive to the motivations of their human counterparts. To be empathic requires a real-time understanding of the needs, emotions, thought processes, and beliefs of people. To understand humans in this way, machines will need access to a wide variety of sensor data. That means everything from interpreting the spoken word, to identifying non-verbal communication, and even accessing biometric data from the wearable technologies we will no doubt ultimately use.

Of course, there are many other examples. Imbuing ‘intelligence’ into machines will be hard. After all, can we codify human intelligence in such a way that it can be replicated? In 2014, researchers in Japan tried to match the processing power achieved in one second by just 1% of the brain. That may not sound like much, but the world’s fourth-fastest supercomputer, the K Computer, took forty minutes to perform calculations that sliver of brain activity completes in one second. We’ve a long way to go yet!

While forecasts exist on when AGI will arrive, no one truly knows when, or even if it’s possible. It’s tempting to wait until we have robust models for AGI, but the only model we have currently is the human brain. On that basis, it would seem that studying brain functions would lead to faster AGI development. Given how little we really know, a different, more iterative approach makes sense. Indeed, while machines have proven themselves excellent at chess and languages, AGI needs to emulate a toddler. That means instilling foundational skills which become the basis of additional training.

But foundational skills plus additional training imply that learning is purely cause and effect. Human intelligence, with its emotions, goals and instincts, is also largely shaped by survival. Without these ingredients, it’s hard to imagine AGI resembling human intelligence.

For AGI to enter the real world, it requires robotics to develop abilities, knowledge and understanding. Once it has these things, they can be cloned into other machines. Allowing autonomous intelligence to develop in the real world requires serious consideration of both ethics and safety. It’s essential to recognise where AGI can be controlled and limited for the benefit of our species. Not to do so could be catastrophic.

ANI vs AGI: The Limits and the Benefits

ANI – Artificial Narrow Intelligence

»  Handles singular, specific or limited tasks.

»  Examples include image recognition, processing a mortgage application and chatbots.

»  Trained by data scientists to complete pre-defined tasks.

»  Correlates pre-specified questions to existing datasets to complete the task in question.

»  No capacity to think on its own; no sign of consciousness or self-awareness.

AGI – Artificial General Intelligence

»  Yet to exist. Capable of adapting itself to a wide range of tasks according to circumstance.

»  A key example would be connecting with other specialist machines to handle a wide range of cognitive tasks.

»  Learns on its own and can apply existing knowledge to future tasks.

»  Consistently passes a range of different Turing tests.

»  A unified intelligence that demonstrates creativity, common sense and emotional intelligence.

Probably the best-known company pursuing the development of AGI is Deepmind, a division of Alphabet Inc. Founded in 2010, Deepmind was acquired by what was then Google in 2014, and the technology is widely referred to as Google Deepmind. Since 2015, Google has continued to develop its AI technology across a variety of applications. Deepmind is significant because of its pure research focus. The evolution of this research is worth noting, as it shows how such technologies might develop:

2014: Deepmind acquired by Google.

Why it matters: Properly funded, with access to exponentially larger datasets to learn from.

2016: ‘AlphaGo’ beats human Go grandmaster Lee Sedol in a five-game match.

Why it matters: Unlike chess, Go is believed to have an intuitive element to the gameplay. Human players have since learned new ways of looking at the game from the way AlphaGo played.

2017: AlphaZero achieves superhuman gameplay in chess, Go and shogi, and beats other specialised programmes.

Why it matters: Unlike AlphaGo, it was a generalised system that learned to master each of the games in under twenty-four hours.

2018: Develops a neural network that ‘imagines’ 3D scenes from 2D images.

Why it matters: The ‘Generative Query Network’ reduces the need for labels such as ‘ball’ or ‘frog’, instead becoming more capable of ‘unsupervised learning’ requiring less input from human operators.

2019: Develops an algorithm aimed at boosting wind-energy efficiency.

Why it matters: Google reports a 20% energy production increase after installing the AI software across its major renewable energy facilities in the US.

2020: AlphaFold predicts the structures of 98.5% of the proteins in the human proteome.

Why it matters: Prior to AlphaFold, we knew the 3D structures of around 17% of the 20,000 proteins in the human body. This was the first time a serious scientific problem had been solved by AI, and it opens the floodgates to new research, medicine development and bioengineering.

2022: Deepmind unveils AlphaCode, an AI capable of creating computer programmes at a similar rate to a human programmer.

Why it matters: Has the potential to turn complicated programming challenges into working code.

All of Deepmind’s current achievements are significant in the field of deep learning. What they’re not is generalised intelligence. The research is costly, time-consuming and requires seriously talented people. The ultimate goal is as it’s always been – to build an AGI that can solve everything!

Artificial Super Intelligence

Artificial Super Intelligence (ASI) refers to a software-based system with intellectual powers beyond those of humans across a comprehensive range of categories and fields of endeavour. Individual human IQ typically ranges between 70 and 130. Some computer scientists and futurists theorise that ASI could demonstrate an IQ into the multiple tens of thousands, massively transcending human intelligence.

The idea of machines reaching ASI is often referred to as the ‘technological singularity’. This term describes the point at which machines are beyond human control and their continued development is irreversible.

ASI would, by definition, be exponentially better than humans at everything, from science, mathematics, sport and medicine to even emotional relationships. Its memory would be vastly greater than ours, and it would be able to analyse and process situations faster than we ever could.

Without control mechanisms, an ASI would certainly transform human reality in wildly unpredictable ways. In his 2014 book Superintelligence, Nick Bostrom discusses the issue of control with a story, ‘The Unfinished Fable of the Sparrows’.

In the story, some sparrows decided to raise an owl as a pet. All the sparrows loved the idea, except for one, who worried about what would happen if they lost control of the owl. The others dismissed the concern, saying they would ‘deal with the problem when it happens’. Elon Musk is also concerned about ASI, casting humans as the sparrows and ASI as the owl. Bostrom and Musk are rightly concerned about controlling ASI, as there may only be one chance to get it right.