Description

Ensuring that AI empowers educators and learners, rather than overpowering them, and that future developments and practices are truly for the common good.

Artificial intelligence (AI) is increasingly having an impact on education, bringing opportunities as well as numerous challenges. These observations were noted by the Council of Europe’s Committee of Ministers in 2019 and led to the commissioning of this report, which sets out to examine the connections between AI and education (AI & ED). In particular, the report presents an overview of AI & ED seen through the lens of the Council of Europe values of human rights, democracy and the rule of law; and it provides a critical analysis of the academic evidence and the myths and hype.

The Covid-19 pandemic school shutdowns triggered a rushed adoption of educational technology, which increasingly includes AI-assisted classroom tools (AIED). This AIED, which by definition is designed to influence child development, also impacts on critical issues such as privacy, agency and human dignity – all of which are yet to be fully explored and addressed. However, AI & ED is not only about teaching and learning with AI; it is also about teaching and learning about AI (AI literacy), addressing both the technological dimension and the often-forgotten human dimension of AI.

The report concludes with a provisional needs analysis – the aim being to stimulate further critical debate by the Council of Europe’s member states and other stakeholders and to ensure that education systems respond both proactively and effectively to the numerous opportunities and challenges introduced by AI&ED.


ARTIFICIAL INTELLIGENCE

AND EDUCATION

 

A critical view through the lens

of human rights, democracy

and the rule of law

 

 

Wayne Holmes,

Jen Persson,

Irene-Angelica Chounta,

Barbara Wasson

and Vania Dimitrova

 

 


Definitions

Adaptive tutoring systems, intelligent tutoring systems (ITS), intelligent interactive learning environments or personalised learning systems (NB some of these terms are contested): AI-driven tools that might provide step-by-step tutorials, practice exercises, scaffolding mechanisms (e.g. recommendations, feedback, suggestions and prompts) and assessments, individualised for each learner, usually through topics in well-defined structured subjects such as mathematics or physics.
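To make the general mechanism concrete, the sketch below (in Python) shows one highly simplified way such an adaptive loop can be built: a learner model holds per-topic mastery estimates that are updated after each answer and used to select the next exercise. The topic names, update rule and mastery threshold are illustrative assumptions only and do not describe any particular product.

```python
# Minimal illustrative sketch of an adaptive tutoring loop (not any specific
# product): a simple learner model tracks estimated mastery per topic, and the
# system picks the next exercise accordingly. All topics, thresholds and the
# update rule are hypothetical.

from dataclasses import dataclass, field


@dataclass
class LearnerModel:
    # Estimated mastery per topic, between 0.0 (novice) and 1.0 (mastered).
    mastery: dict = field(default_factory=lambda: {"fractions": 0.3, "decimals": 0.7})

    def update(self, topic: str, correct: bool, step: float = 0.1) -> None:
        # Naive update rule: nudge the estimate up or down after each answer.
        current = self.mastery.get(topic, 0.5)
        delta = step if correct else -step
        self.mastery[topic] = min(1.0, max(0.0, current + delta))


def next_exercise(model: LearnerModel, threshold: float = 0.8) -> str:
    # Recommend the weakest topic that is not yet above the mastery threshold.
    open_topics = {t: m for t, m in model.mastery.items() if m < threshold}
    if not open_topics:
        return "all topics mastered - offer extension material"
    return min(open_topics, key=open_topics.get)


model = LearnerModel()
print(next_exercise(model))                      # -> "fractions" (lowest estimate)
model.update("fractions", correct=True)
print(round(model.mastery["fractions"], 2))      # -> 0.4
```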

AI literacy: Having competencies in both the human and technological dimensions of artificial intelligence, at a level appropriate for the individual (i.e. according to their age and interests).

AI systems: Shorthand term encompassing AI-driven tools, applications, software, networks, etc.

Artificial intelligence (AI): Artificial intelligence is notoriously challenging to define and understand. Accordingly, we offer two complementary definitions:

A set of sciences, theories and techniques whose purpose is to reproduce by a machine the cognitive abilities of a human being. Current developments aim, for instance, to be able to entrust a machine with complex tasks previously delegated to a human. (Council of Europe 2021)1

Machine-based systems that can, given a set of human-defined objectives, make predictions, recommendations or decisions that influence real or virtual environments. AI systems interact with us and act on our environment, either directly or indirectly. Often, they appear to operate autonomously, and can adapt their behaviour by learning about the context. (UNICEF 2021: 16)2

To further illustrate the range of definitions of artificial intelligence, some alternatives are given in Appendix I.

Artificial intelligence and education (AI & ED): The various connections between AI and education that include what might be called “learning with AI”, “learning about AI” and “preparing for AI”. Learning with AI has also been called “artificial intelligence for education”.3

Artificial intelligence in education (AIED): An academic field of enquiry, established in the 1980s, that primarily researches AI tools to support learning (i.e. learning with AI).

Automatic writing evaluation: AI-driven tools that use natural language and semantic processing to provide automated feedback on writing submitted to the system.

Big data: Large heterogeneous and volatile data sets, generated rapidly from different sources, that are cross-referenced, combined and mined to find patterns and correlations, and to make novel inferences.4 The analysis of big data is too complex for humans to undertake without machine algorithms.

Chatbots: Systems designed to respond automatically to messages through the interpretation of natural language. Typically, these are used to provide support in response to queries (e.g. “Where is my next class?”, “Where can I find information about my assessment?”).
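As a minimal sketch of the idea, the following Python fragment answers the two example queries above by simple keyword matching; real chatbots rely on far richer natural language interpretation, and the intents and answers here are invented for illustration.

```python
# Minimal sketch of a keyword-based support chatbot of the kind described above.
# The intents and canned answers are hypothetical.

INTENTS = {
    "next class": "Your next class is shown in the timetable section of the portal.",
    "assessment": "Assessment information is available under 'My courses' > 'Assessment'.",
}

FALLBACK = "Sorry, I did not understand that. Please contact the student office."


def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in INTENTS.items():
        if keyword in text:
            return answer
    return FALLBACK


print(reply("Where is my next class?"))
print(reply("Where can I find information about my assessment?"))
```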

Dialogue-based tutoring systems: AI-driven tools that engage learners in a conversation, typed or spoken, about the topic to be learned.

e-proctoring: The use of AI-driven systems to monitor learners taking examinations with the purpose of detecting fraud and cheating.

Educational data mining: See Learning analytics.

Educators: Shorthand term encompassing teachers and other professionals in formal education and early childhood care, including school psychologists, pedagogues, librarians, teaching assistants and tutors.

Embodied AI and Robotics: Movable machines that perform tasks either automatically or with a degree of autonomy.

Exploratory learning environments: AI-supported tools in which learners are encouraged to actively construct their own knowledge by exploring and manipulating elements of the learning environment. Typically, these systems use AI to provide feedback to support what otherwise can be a challenging approach to learning.

GOFAI: “Good old-fashioned artificial intelligence”, a type of AI more properly known as “symbolic AI” and sometimes “rule-based AI”, which was the dominant paradigm before machine learning (ML) came to prominence.

Intelligent interactive learning environments: See Adaptive tutoring systems.

Intelligent tutoring systems (ITS): See Adaptive tutoring systems.

K12: Children in primary and secondary education (i.e. from kindergarten to the end of secondary schooling).

Learners: Shorthand term to encompass children and young people in formal education (i.e. pupils and students) and people of all ages engaged in formal, informal or non-formal education (in accordance with the principle of lifelong learning).

Learning analytics and Educational data mining: Gathering, analysing and visualising big data, especially as generated by digital devices, about learners and learning processes, with the aim of supporting or enhancing teaching and learning.
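The sketch below illustrates, in deliberately simplified form, the gathering-and-analysing step: raw activity events are grouped per learner and summarised into basic indicators. The event records and indicator names are hypothetical.

```python
# Minimal sketch of the "gathering and analysing" step of learning analytics:
# aggregating raw activity events per learner into simple engagement indicators.

from collections import defaultdict
from statistics import mean

events = [
    {"learner": "A", "type": "video_watched", "minutes": 12},
    {"learner": "A", "type": "quiz_attempt", "score": 0.6},
    {"learner": "B", "type": "quiz_attempt", "score": 0.9},
    {"learner": "B", "type": "forum_post"},
]

per_learner = defaultdict(list)
for event in events:
    per_learner[event["learner"]].append(event)

for learner, items in per_learner.items():
    scores = [e["score"] for e in items if "score" in e]
    summary = {
        "events": len(items),
        "mean_quiz_score": round(mean(scores), 2) if scores else None,
    }
    print(learner, summary)
```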

Learning network orchestrators: AI-driven tools that enable and support networks of people (e.g. learners and their peers, or learners and teachers, or learners and people from industry) engaged in learning.

Machine learning (ML): A type of AI, the type that is currently dominant, which uses algorithms and statistical models to analyse big data, identify data patterns, draw inferences and adapt, without specific step-by-step instructions.
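The following minimal example illustrates the contrast with step-by-step instructions: instead of hand-coding rules, a model infers patterns from labelled examples and is then evaluated on unseen data. It assumes the scikit-learn library is installed and uses its bundled Iris dataset purely for illustration.

```python
# Minimal sketch of supervised machine learning: the model infers a mapping
# from examples rather than being explicitly programmed.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)            # patterns are learned from labelled data...
print(model.score(X_test, y_test))     # ...and evaluated on unseen examples
```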

Natural language processing (NLP) or Speech to text and Natural language generation: Systems that use AI to transcribe, interpret, translate and create text and spoken language.

Personalised learning systems: See Adaptive tutoring systems.

Plagiarism checking: AI-driven content scanning tool that helps identify the level of plagiarism in documents such as assignments, reports and articles by comparing a submitted text with existing texts.
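A minimal sketch of the underlying comparison step is given below: it measures word-level overlap (Jaccard similarity) between a submission and a known text. Production tools compare against very large corpora and use far more sophisticated matching; the example texts here are invented.

```python
# Minimal sketch of the comparison step in plagiarism checking: word-level
# overlap (Jaccard similarity) between a submission and an existing text.

def jaccard(a: str, b: str) -> float:
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)


submission = "artificial intelligence is increasingly having an impact on education"
known_text = "artificial intelligence is having a growing impact on education systems"

print(round(jaccard(submission, known_text), 2))  # higher values suggest more overlap
```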

Profiling: The automated processing of personal data to analyse or predict aspects of a person’s performance, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.

Robotics: See Embodied AI.

Smart curation of learning materials: The use of AI techniques to automatically identify learning materials (such as open educational resources) and sections of those materials that might be useful for a teacher or learner.

Speech to text: See Natural language processing.

1 www.coe.int/en/web/artificial-intelligence/glossary.

2 www.unicef.org/globalinsight/reports/policy-guidance-ai-children.

3 Recommendation CM/Rec (2019) 10 of the Committee of Ministers to member States on developing and promoting digital citizenship education.

4 www.coe.int/en/web/artificial-intelligence/glossary.

Executive summary

As noted by the Council of Europe’s Committee of Ministers in 2019, artificial intelligence (AI) is increasingly having an impact on education, bringing opportunities as well as numerous threats. It was these observations that led to the commissioning of this report, which sets out to examine the connections between AI and education.

In fact, AI in education (AIED) has already been the subject of numerous international reports (see Appendix III) – so what differentiates this one? There are three unique characteristics. First, in this report, we explore both the application and the teaching of AI in education, which we refer to collectively as “AI and education” (AI & ED). Second, we approach AI & ED through the lens of the Council of Europe’s core values: human rights, democracy and the rule of law. And third, rather than assuming the benefits of AI for education, we take a deliberately critical approach to AI & ED, considering both the opportunities and the challenges. Throughout, the aim is to provide a holistic view to help ensure that AI empowers rather than overpowers educators and learners, and that future developments and practices are genuinely for the common good.

The report begins with an introduction to AI (what it is and how it works) and to the connections between AI and education: “learning with AI” (learner-supporting, teacher-supporting and system-supporting AI), using AI to “learn about learning” (sometimes known as learning analytics) and “learning about AI” (repositioned as the human and technological dimensions of AI literacy). In Part II, we examine some key challenges for AI & ED. These include the choice of pedagogy adopted by typical AIED applications, the impact of AIED applications on the developing brain and learner agency, the use of emotion detection and other techniques that might constitute surveillance, digital safeguarding, the ethics of AI & ED, the political and economic drivers of the uptake of AI in educational contexts and AIED colonialism.

We continue, in Part III, by exploring AI & ED through the lens of the Council of Europe’s core values – human rights, democracy and the rule of law – noting that currently there is little substantive relevant literature. Accordingly, we start with the Turing Institute’s report, commissioned by the Council of Europe, “Artificial intelligence, human rights, democracy, and the rule of law: a primer” (Leslie et al. 2021), identifying and cross-checking the pertinent issues for education.

With regard to human rights, we examine the impact of AI & ED on a child’s rights to education, to human dignity, to autonomy, to be heard, to not suffer from discrimination, to privacy and data protection, to transparency and explainability, to be protected from economic exploitation and to withhold or withdraw consent for their involvement with any technology. With regard to democracy, we consider how AI & ED might both support and undermine democratic values, how democratic education, which depends on open access and equity, may be compromised by the dominance of commercial AIED applications, how certain tools promote individualism at the expense of the collaborative and social aspects of teaching and learning, and the impact of AI models representing the world as a function of the past. With regard to the rule of law, we identify and examine several cases in which the use of AI algorithms in education has been subject to legal challenge – the use of historical school-level data to grade individual learners, learning data traces and biometric data. We then ask three key questions: Can children be required to use any particular AI system? Can AI ever meet the test of necessity and proportionality and be lawful at all? Must schools respect parents’ or children’s wishes or can they make the use of certain AI systems compulsory?

We end the report, in Part IV, with a conclusion and provisional needs analysis of open challenges, opportunities and implications of AI & ED, designed to stimulate and inform further critical discussion. Anticipated needs include: the need to identify and act upon linkages across the Council of Europe’s work; the need for more evidence of the impact of AI on education, learners and teachers; the need to avoid perpetuating poor pedagogic practices; the need for robust regulation, addressing human rights, before AI tools are used in education; the need for parents to be able to exercise their democratic rights; the need for curricula that address both the human and technological dimensions of AI literacy; the need for ethics by design in the development and deployment of AI tools in educational contexts; the need to ensure that data rights and intellectual property rights remain explicitly with the learners; and the need for the application and teaching of AI in education to prioritise and facilitate human rights, democracy and the rule of law.

Introduction

In 2019, the Council of Europe’s Committee of Ministers adopted a recommendation on digital citizenship education in which a key focus was the application of artificial intelligence (AI) in educational contexts:

AI, like any other tool, offers many opportunities but also carries with it many threats, which make it necessary to take human rights principles into account in the early design of its application. Educators must be aware of the strengths and weaknesses of AI in learning, so as to be empowered – not overpowered – by technology in their digital citizenship education practices. AI, via machine learning and deep learning, can enrich education… By the same token, developments in the AI field can deeply impact interactions between educators and learners and among citizens at large, which may undermine the very core of education, that is, the fostering of free will and independent and critical thinking via learning opportunities… Although it seems premature to make wider use of AI in learning environments, professionals in education and school staff should be made aware of AI and the ethical challenges it poses in the context of schools. (Council of Europe 2019)1

This report builds on these prescient observations and concerns to explore in detail the connections between AI and education through the lens of the Council of Europe’s mandate to protect human rights, to support democracy and to promote the rule of law.2 Accordingly, this is not a review of the more than 40 years of academic research into the application of AI in education (see Appendix IV for reviews of academic research of AI in education). Instead, it is a critical analysis of what is happening now, with AI tools developed by multi-million-dollar-funded commercial players increasingly being implemented in classrooms, in parallel with a growing demand from policy makers for AI curricula designed for school students. Globally, AI in education is often welcomed with enthusiasm – with many international reports and recommendations painting unquestioned glowing pictures (see Appendices II and III for lists of related reports). Here, to help rebalance the discussion, we take a more realistic perspective, specifically focusing on the many complex challenges raised by the connections between AI and education (AI & ED), to provide a holistic view in order to ensure that future developments and practices are genuinely for the common good.

The work was carried out in the context of the Digital Citizenship Education Project (DCE), which aims to empower children through education and active participation in the increasingly digital society.3 AI is fast becoming a cross-cutting issue that draws on, and relates to, other work undertaken by the Council of Europe’s Education Department, especially with respect to literacy and life skills. In addition, AI cuts across the Council of Europe’s directorates’ focus on data protection, children’s rights and competences for democratic culture.4

The Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI)5 was tasked with examining, on the basis of broad multi-stakeholder consultations, the feasibility and potential elements of a legal framework for the development, design and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law. To this end, CAHAI focused its work on mapping relevant international and national legal frameworks and ethical guidelines, while analysing the risks and opportunities arising from AI. However, although otherwise comprehensive, the current work by CAHAI has not included education as one of its AI domains. CAHAI has now been superseded by the Committee on Artificial Intelligence (CAI).6

Accordingly, our motivation was to address this core gap, with a report that focuses on education as a key AI domain, and that is written for the Council of Europe’s core audience. The aim was to develop a high-level mapping of key topics and issues identified in the field, in order to complement CAHAI’s work, to enhance what is known more widely about the connections between AI and education and their impact on human values, and to provide a foundation for future related work.

The scope of the material reviewed for this report includes:

academic and peer-reviewed publications;

open access policy guidelines and frameworks including those developed by international, national and intergovernmental agencies; and

other relevant literature produced by civil society, regulators and protection agencies, and third sector organisations.

The report was guided by the following questions (all through the lens of the Council of Europe’s core values):

What is meant by AI and education, what does it involve, and what are its potential benefits?

What key issues and potential risks may arise in this context, and what are the possible mitigations?

What are the gaps in what is known, documented and reported, and what questions still need to be asked?

The review is organised into four main parts. In Part I, we map the connections between AI and education. In Part II, we identify and explore some potential challenges of AI and education. In Part III, we explore AI and education through the lens of the Council of Europe’s core values (human rights, democracy and the rule of law) and critically reflect on our findings. In Part IV, we conclude with a discussion and needs analysis of open challenges, opportunities and implications of AI and education. Our analysis includes the need to identify and act upon linkages across the Council of Europe’s work, and to increase understanding among policy makers of the challenges that AI poses across the directorates and member states where children’s lives are affected, in and beyond the context of education.

In addition, this report also includes a list of alternative definitions of AI (see Appendix I), a list of related reports in this area (see Appendices II and III), a list of articles that review academic research in AI in education (see Appendix IV) and a list of examples of commercial learning with AI tools (see Appendix V).

Finally, in parallel with this report, the Council of Europe’s Digital Citizenship Education Unit is carrying out a survey of member states to better understand national initiatives linked to AI and education, and is holding a multi-stakeholder conference (September 2022). The survey and conference, together with this report, are all designed to help establish a foundation for the Council of Europe’s future work in AI & ED.

1 Recommendation CM/Rec (2019) 10 of the Committee of Ministers to member States on developing and promoting digital citizenship education, https://search.coe.int/cm/Pages/result_details.aspx?ObjectID=090000168098de08.

2 The Council of Europe, Values, www.coe.int/en/web/about-us/values.

3 Council of Europe, “Digital Citizenship and education”, www.coe.int/en/web/digital-citizenship-education.

4 Council of Europe, Reference Framework of Competences for Democratic Culture (RFCDC), www.coe.int/en/web/reference-framework-of-competences-for-democratic-culture.

5 Ad hoc Committee on Artificial Intelligence, www.coe.int/en/web/artificial-intelligence/cahai.

6 www.coe.int/en/web/artificial-intelligence/cai.

PART I
The connections between AI and education

Following the societal changes brought about by the Covid-19 pandemic and its impact on the educational landscape and the use of digital technologies (Council of Europe 2021),1 exploring the link between the technologies of AI and education is timely:

Technology and innovation matter… but the picture is much more complex, much more non-linear, much more dynamic than simple plug-and-play metaphors. There can be dangerous unintended consequences from any single seemingly promising solution. We must reorient our approach from solving discrete siloed problems to navigating multidimensional, interconnected and increasingly universal predicaments. (UNDP 2020: 5)

It is precisely this complexity that we aim to address in this exploration of the connections between AI and education.

1 Higher education’s response to the Covid-19 pandemic: building a more sustainable and democratic future, https://rm.coe.int/prems-006821-eng-2508-higher-education-series-no-25/1680a19fe2.

1.1. Defining AI

In order to explore the multiple connections between AI and education, we first have to define AI. This is, however, immediately challenging. In fact, the description and boundaries of AI are contested, without a universally accepted single definition (see Appendix I for some examples of the different ways in which AI has been defined), and are constantly shifting:

[A] lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it is not labelled AI anymore. (Bostrom n.d.)1

Artificial intelligence, human rights, democracy, and the rule of law: a primer, prepared by the UK’s Alan Turing Institute (Leslie et al. 2021), draws on the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) Feasibility Study, and defines AI systems as follows:

AI systems are algorithmic models that carry out cognitive or perceptual functions in the world that were previously reserved for thinking, judging, and reasoning human beings. (Leslie et al. 2021: 8)2

Given that this definition itself contains words and concepts that are not immediately transparent for a general audience (e.g. algorithmic), we prefer a complementary definition that is provided by UNICEF (which, in turn, is derived from a definition agreed by the Organisation for Economic Co-operation and Development (OECD) member states):

AI refers to machine-based systems that can, given a set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments. AI systems interact with us and act on our environment, either directly or indirectly. Often, they appear to operate autonomously, and can adapt their behaviour by learning about the context. (UNICEF 2021: 16)

We prefer this definition for several reasons. First, it does not depend on data, although it does accommodate data-driven AI techniques such as artificial neural networks and deep learning; second, it therefore also includes rule-based or symbolic AI and any new paradigm of AI that might emerge in future years; and third, it highlights that AI systems necessarily depend on human objectives and sometimes “appear to operate autonomously”, rather than assuming that they do operate autonomously, which is key given the critical role of humans at all stages of the AI development pipeline (Holmes and Porayska-Pomsta 2022). None of the multiple other definitions given in Appendix I has all these features. However, inevitably, the UNICEF definition is not perfect. An element that we find less helpful is the notion of an AI system “learning” – something that, it might be argued, requires the consciousness or agency that, now and for the foreseeable future, machine-based systems entirely lack (Rehak 2021). However, anthropomorphic terms used to describe these machine-based systems (including “intelligence”, “learning” and “recognition”, as in “facial recognition”) are so much a part of the AI narrative that, although distracting and unhelpful, they are unlikely to change anytime soon.

The term artificial intelligence itself was coined at a workshop at Dartmouth College in 1956. Since then, AI has experienced periods of huge interest and grand predictions, punctuated by periods known as AI winters, when the grand predictions failed to materialise and funding all but dried up. From its earliest days, AI researchers have pursued two parallel approaches. The first is the “symbolic” AI approach, which focused on encoding principles of human reasoning and on knowledge engineering (encoding the knowledge of experts), and which led to “expert systems”. This approach is often referred to as “rule-based” or “good old-fashioned AI” (GOFAI). The second, which began at around the same time, is AI inspired by how the human brain is structured (its neurons), which draws inferences from usually large amounts of data. This artificial neural network (ANN) approach is one of several data-based approaches (which also include support vector machines (SVM), Bayesian networks and decision trees), which are collectively known as machine learning (ML).
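The toy example below contrasts the two approaches on a deliberately trivial task (classifying exam scores as pass or fail): in the symbolic, rule-based approach the knowledge is hand-coded, whereas in the machine learning approach the decision boundary is inferred from labelled examples. The rule, the data and the use of scikit-learn’s LogisticRegression are illustrative assumptions, not a description of any system discussed in this report.

```python
# Minimal sketch contrasting symbolic (rule-based) AI with machine learning on
# a toy pass/fail task. The rule and the data are invented for illustration.

from sklearn.linear_model import LogisticRegression

# Symbolic / rule-based ("GOFAI") approach: the knowledge is hand-coded.
def rule_based_pass(score: float) -> bool:
    return score >= 50  # explicit, human-authored rule

# Machine learning approach: the decision boundary is inferred from examples.
scores = [[20], [35], [45], [55], [70], [90]]
labels = [0, 0, 0, 1, 1, 1]  # 0 = fail, 1 = pass, as labelled by humans
model = LogisticRegression().fit(scores, labels)

print(rule_based_pass(62))       # True, because the rule says so
print(model.predict([[62]])[0])  # most likely 1, inferred from the labelled examples
```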

In the late 20th century, most of the progress made in AI involved symbolic AI, but progress was stalled by multiple roadblocks, leading to the AI winters. In the early 21st century, thanks to much faster processors and the availability of huge amounts of data (mainly derived from the internet), ML became dominant – and it is ML that has led to most of the dramatic achievements of AI in recent years (such as automatic translation between languages3 and figuring out what shapes proteins fold into4). Interestingly, some researchers now argue that ML is soon to hit its own development ceiling, such that significant further progress will only happen if there is a new paradigm (which might involve bringing together GOFAI and ML) (Marcus 2020).

Despite some impressive achievements and its broad presence in everyday life, AI often suffers from overselling and hyperbole,5 which raises multiple issues:

The hype around AI can result in unrealistic expectations, unnecessary barriers and a focus on AI as a panacea rather than as a tool that can support positive impacts. (Berryhill et al. 2019: 27)

For example, AI systems can be brittle: a small change to a road sign can prevent an AI image-recognition system recognising it (Heaven 2019). They can also be biased, because the data on which they are trained is biased (Access Now 2018; Ledford 2019). AI language models such as GPT-3 (Romero 2021), again while