Everything is digital – whether it concerns the private sphere, work or public life. Technological progress involves both enormous opportunities and great risks. What social challenges do we face? What role does ethics play? Will the digital revolution necessarily serve the common good? Experts from various fields, among them computer science, economics, sociology and philosophy, address these questions and contribute to a necessary critical dialogue.
Number of pages: 464
Year of publication: 2020
Edited by Markus Hengstschläger and the Austrian Council for Research and Technology Development
Preface
Part I: Science, Technology and Society
Stefan Strauß / Alexander Bogner
Challenges to Democracy in the Age of the Digital Transformation
Peter Reichl / Harald Welzer
Achilles and the Digital Tortoise: Theses for a Digital Ecology
Sabine Theresia Koeszegi
The Autonomous Human in the Age of Digital Transformation
Part II: Artificial Intelligence
Sarah Spiekermann
On the Difference between Artificial and Human Intelligence and the Ethical Implications of Their Confusion
Anne Siegetsleitner
Who Bears Moral Responsibility in the Case of Autonomous Artificial Intelligence?
Anna Jobin
Ethical Artificial Intelligence – On Principles and Processes
Niina Zuber / Severin Kacianka / Alexander Pretschner / Julian Nida-Rümelin
Ethical Deliberation for Agile Software Processes: The EDAP Manual
Anita Klingel / Tobias D. Krafft / Katharina A. Zweig
Potential Best Practice Approaches in the Use of an Algorithmic Decision-Making System with the Example of the AMS Algorithm
Michael Mayrhofer / Gerold Rachbauer
Regulatory Aspects of Artificial Intelligence
Part III: Digital Transformation in the Health Care Sector
Charlotte Stix
Technological Enhancement as Seen through the Lens of Extended Cognition
Markus Frischhut
EU Values and Ethical Principles for AI and Robotics with Special Consideration of the Health Sector
Part IV: Shaping the Future
Christopher Frauenberger
The Negotiation of Technological Futures
Markus Scholz / Maria Riegler
Responsible Innovation: Corporate Responsibility and Collective Action
Elisabeth Stampfl-Blaha
Balance between Social, Economic and Technological Interests through Participatory Processes
Hannes Werthner
The Vienna Manifesto on Digital Humanism
Sepp Hochreiter
“The algorithm can learn anything – good things as well as bad things.”
Markus Hengstschläger
Our world is undergoing a digital transformation. This transformation is affecting and changing more and more areas of life and work: from our shopping, recreation and communication behavior all the way to the design of our cities (e.g. Smart Cities) and the reorganization of work processes through advances in robotics and Industry 4.0. Almost every aspect of our lives now has a digital component. We expect this digital element to facilitate the transfer of inconvenient, monotonous or risky work to machines, to make decision-making processes faster and more efficient, to render more “objective” results and to make our lives smarter.
We sometimes use the data generated by and about us to improve our knowledge and, as a consequence, to control our behavior. Mostly, however, it is companies, public institutions and increasingly governments that are interested in data about us. Strategies to combat the Covid-19 pandemic, which held the world in its grip in the spring of 2020, are also being formulated based on data and calculations regarding the rate at which the virus is spreading, the risk of infection, etc. Thus, the digital transformation can already be considered one of the three great disruptive innovations of mankind, alongside the Neolithic Revolution, in which humanity became sedentary, and the Industrial Revolution. Indeed, the society in which we live now is being driven less and less by knowledge and more and more by data.
But what does a digitally controlled world mean? As with most new technologies, the current discussions alternate at best between hope and skepticism, at worst between uncritical belief in progress and alarmist apocalyptic scenarios. The discussion therefore needs to be made more objective, based on specific questions: What challenges are already posed today, what challenges may be in store for us, and what can guide us in mastering them? How can we ensure that the digital transformation is used for the benefit of mankind without violating the rights and dignity of the individual? What moral rules are required in dealing with the available technologies?
As a geneticist, I see enormous opportunities in the digital revolution, provided that fundamental aspects are taken into account. We therefore need an ethical discussion involving a broad public, with the aim of using the latest innovations for the benefit of mankind. After all, technology is almost always used as a means to achieve goals or ends, which ultimately always leads to the question of the values that underlie these goals.
Significantly more research and development is needed in order to realize the major potential of the Digital Transformation. The aim must be to continuously optimize the corresponding legal framework so that questions such as who bears ultimate liability for algorithm-based decisions are clarified. In addition, clear and transparent regulations are needed on who may use which data in the future and for what purpose. This applies to data generated in the publicly funded academic sector, but should also be considered for data owned by companies or large platforms. These data are of fundamental significance to science and research and should therefore be used in appropriately anonymized form, provided that data protection and privacy are ensured at the highest level. The focus must always be on people, both as individuals and collectively. Privacy by Design should be applied so that systems, once created, allow use of the data they contain exclusively for the originally agreed purpose. Furthermore, a widely diverse range of digital communication infrastructures has to be established, a landscape in which private, government and public-sector providers will have to coexist. Private monopolies, for example on search engines or social media platforms, must be questioned morally and ethically, as must state digital surveillance concepts such as those being developed (not only) in China. Another important factor is digital education, i.e. the ability to use new digital possibilities in a critical and reflective manner. This includes certain basic knowledge, for example the careful handling of information or the understanding of research processes necessary to avoid the dangers of fake news and filter bubbles. Knowledge of risks on the one hand and of the proper use of digital communication media on the other promotes evaluation and decision-making abilities and ultimately increases the autonomy of the human being.
We also need digital education so that no one is left behind by the transformation process. Of course an appropriate education, including for example basic computer programming skills, continues to grow in importance, but the development of innovative applications requires the widest possible range of skills, abilities, approaches and background knowledge. For example, bias awareness is just as important as diversity training when it comes to the detection and prevention of hidden discrimination in algorithms. And the promotion and support of the widest possible range of creativity are necessary in order to generate the most varied and diverse range of innovations possible using these new digital approaches. This requires close collaboration between information and communications technologies and not only the Natural Sciences, but in particular the Humanities, Social Sciences and Cultural Sciences.
The present publication came into being primarily during the “Corona Crisis” (the Covid-19 pandemic of spring 2020) and thus bears the indelible mark of the difficulties associated with combatting the pandemic, in particular restrictions on movement. It is intended as a contribution by the Austrian Council for Research and Technology Development to the necessary discussion on the societal and ethical challenges accompanying the Digital Transformation. The essays submitted by 25 renowned authors address a broad spectrum of topics but, given the large number of questions raised by the Digital Transformation, can cover only a small part of this spectrum; they are thus to be regarded as an initial approach and as a stimulus for further discussions.
Part I features three essays addressing several fundamental societal challenges and ethical questions which we face as a result of scientific-technological developments, specifically those of the Digital Transformation.
Stefan Strauß and Alexander Bogner start off with the political implications of the digital transformation and specifically address those challenges that are relevant to democracy and democratic policy. Their central thesis is that the process of digitalization puts pressure on two fundamental principles of liberal democracy in particular: the institutional principle of intermediation and the ideological principle of reflective relativism. They focus in particular on the significance of social media and illustrate the extent to which the digitalization of political systems supports anti-democratic and/or populist tendencies.
Peter Reichl and Harald Welzer then locate their contribution in the gap between the manifold promises of digitization on the one hand and its increasingly frequent negative impacts on the other. Working from the combined perspective of two separate disciplines (Computer Science and Social Psychology), the authors examine the question of why an ethical discourse on the consequences of the digital transformation is not sufficient and how a more comprehensive approach in the sense of a digital ecology might look. According to Reichl and Welzer, only a perspective that overcomes the segmented approach to digitalization prevalent to date can pave the way to the realization that the Digital Transformation is not only a social fact, but rather a force that penetrates all human relationships, including our individual relationships with ourselves and our environment.
Then Sabine Köszegi takes a critical look at the impacts of automated decision-making on human autonomy, bearing in mind that AI technologies are not – or at least must not be – an end in themselves. Based on this perspective, she defines the essential requirements that must be met by Artificial Intelligence (AI) systems in order to protect the right of self-determination for the individual and for society as a whole. She emphasizes the fact that we are currently in a significant phase: since technologies in many areas are still not very advanced, we can still influence their ultimate design. However, this window of opportunity could soon close, making it all the more urgent to design technologies to be “human-centered”.
Part II takes a closer look at so-called “Artificial Intelligence” and the potentials and hopes based on its rapidly growing capability to process enormous amounts of data; in particular this section also looks at criticism warning of the possibly irreversible consequences AI could have for society.
Sarah Spiekermann examines current trends to describe AI systems as “human-like” and even to consider rights for such systems. In her opinion these tendencies mean a massive encroachment on the freedom and dignity of the human individual, with as yet unforeseeable social consequences. Spiekermann calls for an impartial comparison between the uncontested abilities of AI systems and the characteristic properties of human intelligence, posing the question of whether AI systems process information, think and react in a human-like manner and whether they possess motivation and autonomy.
Anne Siegetsleitner addresses in more detail the concept of autonomy, which is central not only to our legal system but also to our everyday routine, posing the question of who ultimately bears moral responsibility in the case of what is referred to as autonomous Artificial Intelligence. She points to the fact that the technical term, “autonomous systems”, for example in vehicles, healthcare or combat robots, denotes only the “lack of dependence on humans” in the execution of tasks, but is not synonymous with the concept of autonomy used in ethical and moral philosophy. Indeed, AI systems do not truly act, but only behave as they have been programmed to, even if these systems are then said to “learn” from additional data. Autonomous actions in the moral sense, however, only exist when there are justifiable underlying intentions.
Anna Jobin looks at the question of why the call for “ethical” Artificial Intelligence has grown ever louder in recent years and what principles are contained in the proposals and guidelines formulated to date. She concludes that serious divergences do exist despite the similarities apparent at first glance. Accordingly she points out the limitations of a principle-based ethical system for Artificial Intelligence. Such a system should therefore be regarded only as a basis for discussion, which must then be translated and weighted according to the specific application. Jobin sees this as a task for society as a whole.
The contribution from authors Niina Zuber, Severin Kacianka, Alexander Pretschner and Julian Nida-Rümelin examines the procedures necessary in order to develop “ethical” machines and software systems that comply with our legal, cultural and moral standards. As Anne Siegetsleitner has already pointed out in her contribution, so-called “autonomous systems” can in fact never act autonomously in a moral sense. For example the desire for a software system that independently delivers an ethically sound result by attempting to imitate the human capacity for deliberation exceeds by far the scientific possibilities of today. Nevertheless the authors use the EDAP scheme (“Ethical Deliberation for Agile Processes”) to provide a practically viable approach to developing systems while taking ethical considerations into account.
Subsequently authors Anita Klingel, Tobias Krafft and Katharina Zweig investigate the algorithm currently being used by the Arbeitsmarktservice (AMS)1 to determine the eligibility of unemployed individuals for support, a very concrete example of an AI system in action. They illustrate the requirements which should be placed on Artificial Intelligence-based systems used by the state, ranging from transparent conceptualization and operationalizing of the algorithm to the determination of how to deal with the results of this digitalized process. They also shed light on what was done right in the actual process of developing and implementing the AMS algorithm, where there is still potential for improvement and what other state stakeholders can learn from this example.
Finally, the article concluding this section of the publication takes a look at the challenges faced by the legislative sector and legal science due to the rapid pace of technological development in the field of Artificial Intelligence. Using selected examples, the authors Michael Mayrhofer and Gerold Rachbauer examine the existing legal framework for the development and use of Artificial Intelligence and discuss possible future regulatory opportunities and their limitations.
The essays in Part III look at several questions raised by digitalization in the health care sector, examining both topics relating to the individual and societal and political topics. Current discussions focus primarily on concepts such as Precision Medicine, Predictive Analytics, etc. In the future it should become possible to combine many clinical findings, for example from Genetics, Radiology and laboratory medicine, with other clinical findings and to interpret them using Artificial Intelligence. This would mean fundamental changes in the medical field.
With regard to the social impacts of digitalization in medicine, Charlotte Stix looks at potential new forms of Human Enhancement, i.e. the technological improvement, augmentation or optimization of human performance, and the resulting ethical questions. She points to the difficulty of drawing clear boundaries and contrasts the line of argumentation of “dehumanization”, as put forward by the opponents and critics of this type of “technical improvement”, with the arguments of the supporters of Transhumanism and the “Extended Mind Hypothesis”.
Markus Frischhut offers an overview of those values and ethical principles for Artificial Intelligence and robotics which can be found with increasing frequency in the legal documents of the European Union. Beginning with developments in the area of Biotechnology in the 1990s, references to ethics have become increasingly frequent, although in terms of content these cannot be attributed exclusively to a single normative theory, such as Deontology, Consequentialism or Virtue Ethics. There are also many gaps, which can be filled by applying shared values, in particular those of human rights. Frischhut argues that these values should be supplemented by specific principles oriented to the concept of “trust” as a higher-level objective.
Based on the fact that scientific-technological developments must be fundamentally understood as political developments, the fourth and final section of the present publication addresses specific approaches to shaping our future in the context of the Digital Transformation.
For Christopher Frauenberger the central ethical question is who we want to be as human beings in the future and whether digitization in its current form will actually make us into such humans. He calls for new public and democratic forums in which these questions and possible technological futures can be negotiated, and discusses what a re-politicization of the design and development of digital technologies might look like. Frauenberger offers two perspectives as possible approaches: the participatory design of innovation processes and the concept of Agonism based on the ideas of Chantal Mouffe.
For the field of economics Markus Scholz and Maria Riegler then describe the existing regulatory gaps which characterize the Digital Transformation and from which companies have numerous opportunities for short-term economic optimization – with in some cases dire consequences for society and the environment. The authors therefore advocate not only for a corporate ethical perspective, which considers companies as Corporate Citizens and thus having rights and obligations, but rather primarily for “Collective Action”, understood as a form of collaboration among companies with the objective of jointly solving problems of social relevance.
The frequently demanded “participation” in the development of ethical approaches is Elisabeth Stampfl-Blaha’s topic. She regards “standardization” as the essential instrument for creating and/or maintaining a balance between legitimate social, economic and technological interests. After surveying how standardization takes place, she develops ideas on how this process can make progress on the ethical challenges posed by digitalization.
In his contribution Hannes Werthner offers an overview of the methodological roots of Computer Science and its role as the leading science of the Information Society. He examines the “essence”, but above all the “mischief”, of Computer Science and discusses the monopolization tendencies which have led to the dominance and control of a small number of stakeholders at the technical, economic and political levels. Particular critical attention is paid to the role of digital platforms in this process. As a countermeasure Werthner calls for the orientation of Computer Science towards a “Digital Humanism” and presents the “Vienna Manifesto for Digital Humanism” formulated by Werthner and his colleagues. The Manifesto is intended to serve as a blueprint and point of departure for the development and formulation of basic principles for the future development of the Information Sciences.
The book concludes with an interview with Sepp Hochreiter, who addresses the ethical challenges of Artificial Intelligence from the perspective of a computer scientist, as well as the potentials – and dangers – of training an algorithm with the corresponding data. He also describes the challenges of “explainability”, i.e. the attempt to understand what an AI system actually does when it “makes decisions”, and the associated question of responsibility. Hochreiter’s summary links the enormous hopes and expectations which can be placed in future possibilities based on Artificial Intelligence with the warning that this technology can also be used by large corporations, criminal organizations and governments to mislead and manipulate the population.
Given the difficulties mentioned earlier in the creation of the present publication resulting from the Covid-19 pandemic, which posed substantial challenges for the authors, I would like to thank all the authors for their willingness to participate and for their contributions, which will serve as the basis for valuable discussions in future societal discourse.
1The Austrian State-supervised public corporation which functions as a national employment agency.
Stefan Strauß & Alexander Bogner
Not very long ago digitalization was seen almost exclusively as a great blessing for democracy. The Arab Spring was celebrated as a “Smartphone Revolution”, with the internet appearing to be something like a digital sounding board for the democratic transition. In the 2012 US presidential election campaign the technology-friendly incumbent Barack Obama kept his Republican challenger at a healthy distance in the race for the White House. “Big Data Will Save Politics”, proclaimed the cover of the MIT Technology Review in early 2013. Only five years later, as the Cambridge Analytica scandal cast a long shadow over the fall of 2018, with fake news and unabashed hate communication everywhere on the net, the cover of the same journal asked, almost with a sense of futility: “Technology is threatening our democracy. How do we save it?”
Today the fact that massive “ethical challenges” are arising in the course of the Digital Transformation, as this publication’s title suggests, is beyond question. The challenges for the economy and society associated with the Digital Age are just as clearly evident. Some cite the development of new monopoly structures resulting from the immense market power of the “Big Five” US technology companies (Staab, 2019). Others point to the threatening power of the algorithms that so opaquely control and influence our perception of the world and our political points of view (Pasquale, 2015). Another frequent topic is the threat to civil liberties resulting from the perfection of “surveillance capitalism” (Zuboff, 2018) and the threat to personal autonomy posed by digital identification technologies (Strauß, 2019). The latter threat highlights the tense relationship that has in the meantime developed between digitalization and democracy: If privacy is one of the cornerstones of democracy, democracy is in danger when privacy is threatened.
In the present article we take a closer look at those challenges democratic politics face as they arise in the Digital Transformation. We pay particular attention to the significance of platform audiences and social media for the intermediation, representation and staging of politics. We use several familiar examples to illustrate the extent to which digitalization of political activities supports anti-democratic or populist tendencies. Our central thesis is that two fundamental principles of liberal democracy in particular are under attack in the course of digitalization today: The institutional principle of intermediation as well as the ideological principle of a reflective relativism. The next section will illustrate what each of these two principles stands for.
In order to identify the potential threats to democratic politics resulting from digitalization, a more precise definition of the central cornerstones of liberal democracy is necessary. Immediate candidates would be: separation of powers, universal suffrage, and civil liberties. These are fundamentally correct ideas, but for the present context the reference to the central mechanisms of the formation of political will in democracy is decisive. This is necessary because the formation of will is not a direct act by the people, but rather is intermediated through certain entities, primarily through political parties and parliament and of course through the media. This means that modern democracy cannot technically be realized as direct democracy, but only as representative democracy, i.e. in the form of parliamentarianism (even if parliamentary democracy can be supplemented with direct-democratic elements). But more than technical reasons favor a parliamentary or party-based political system. The Founding Fathers of the USA already feared the direct formation of will based on unfiltered emotions, and as a result decided in favor of independent parliamentarians and of judges not elected directly by the people.
Thus the purpose of the intermediary institutions in democracy (parties, the diversity of parties, parliament, the media) is to establish a distance between the direct will, eruptive movements, and political decision-making. This is achieved by opening up spaces that allow a free exchange of opinions and points of view, enabling something like a deceleration of political decision-making while rendering the debate more rational. John Stuart Mill, in his famous treatise on liberty, argued that only the open reception of freely expressed opposition legitimizes the claim to superior knowledge (Mill, 1974, 29). Those who blithely brush alternative opinions off the table lose the opportunity to better justify their own positions in open and unprejudiced encounters with the opposing side. At the same time, Mill of course knew that not every type of dissent is always productive. Liberal democracies therefore require institutional provisions intended to ensure that a proliferating diversity of opinions actually has a positive effect: citizens must make their own opinions relevant via the indirect path of the party organizations, for example, or electoral thresholds are defined for elections to the national parliament. In this way every political initiative passes through a multi-stage intermediation process.
The intermediary entities essential to liberal democracy ultimately symbolize the fundamental “educational claim” of every functioning democracy: Public formation of opinion and political decision-making involve advocating one’s own convictions without regarding the political opponent as an enemy; they involve regarding one’s own position as indeed superior, yet without casting doubt on the principled legitimacy of the opposing position. In other words: The actual values of liberal democracy are not particular political objectives (e.g. “justice” in the Socialist model) but rather the norms of discourse, which are not explicitly defined anywhere. The erosion of such norms is a difficult problem, as has been evident for quite some time in the USA, where extreme political polarization has in the meantime thwarted almost any attempt at reaching reasonable compromises. Accordingly, “Partyism” (Sunstein, 2015) – the unconditional hatred of persons solely on the basis of their membership in a given party – is an ideology almost as destructive as racism or sexism.
In other words: The prerequisite for a functioning democracy is the ability to relativize one’s own position. Those who assume that they possess a privileged view of matters, with no need for open debate, in reality foster dogmatism, authoritarianism and intolerance. The will to engage in dialog arises only from the capacity for self-reflection, for calling one’s own position into question and thus relativizing it. The essential elements of democracy – changing majorities and protected minorities – can only be established on the basis of this fundamental openness. The Austrian constitutional jurist and sociologist Hans Kelsen described this close connection between liberal democracy and the worldview of relativism as early as the 1920s: “Those who hold absolute truth and absolute values to be closed to human knowledge must consider not only their own opinion, but also alien, opposing opinions, to be at least possible. This is why the worldview of relativism is the prerequisite of the idea of democracy.” (Kelsen, 2018, 132) If there were such a thing as the absolutely superior opinion (“Truth”) or, to put it differently, if a generally shared belief in the unrivalled superiority of a certain political measure existed, then all political debate would become obsolete. The intermediary institutions fundamental to liberal democracy would then be nothing more than superfluous ornamentation, at best suitable for impeding the work of the executive.
The principle of intermediation and the ideology of relativism are thus central prerequisites for liberal democracy. Both principles ultimately serve the purpose of facilitating “reasonable” politics by opening discourse. The following section will now look at how these fundamentals are currently under attack today in the course of digitalization.
As early as the 1990s, the early days of the internet and the WWW, there was general euphoria about a wave of democratization through the new media. McLuhan’s “Global Village” metaphor finally seemed to have become reality. Indeed, society has since changed drastically in political and social terms due to new forms of interaction and unmediated communication. In the initial phase this took place largely bottom-up, through so-called grassroots organizations and individual communities of interest in specific niche areas. What is referred to as “Web 2.0” became established in the 2000s, and social media (initially Facebook and later Twitter) became global mass phenomena which the political mainstream could no longer afford to ignore. The result was a renewed structural transformation of the public sphere, in which the new media played a central role (Lüter/Karin, 2010; Schrape, 2015; Barberi/Swertz, 2017). This role, however, is highly ambivalent: There are a variety of major sub-audiences whose formation and spread have been considerably favored by social media, for example the current Fridays-for-Future movement, #Metoo and many others. To a certain degree, social media strengthen civil society, which now has access to a low-threshold instrument for injecting topics neglected by the political mainstream into political discourse through the sheer generation of critical mass.
This impression of comprehensive democratization, however, stands in contrast to the fact that stakeholders in the party-political system now make highly intentional use of social media: on the one hand for the widespread dissemination of political messages in a low-threshold and low-cost manner, on the other hand to constantly monitor activities and topics in civil society. This strong ambivalence is well illustrated by the example of the brief Arab Spring: Activists critical of the government used Facebook and other channels for mobilization. This heightened the euphoria of several commentators with regard to the democratic potential of social media. But it was overlooked that the critical civil community had not spontaneously developed and organized itself on Facebook, but rather primarily “on the street”, months and even years earlier. Ultimately social media turned out to be only one communication channel among many, although a channel with an enormously broad reach. But much more importantly: The protest movement was monitored by those in power – precisely because the activists were present online – for the purpose of effectively combatting them (Benkirane, 2012; Aichholzer/Strauß, 2016, 92 sq.). Today only very little of this brief wave of democratization survives. This example also points to the fact that the unequal distribution of power – in the “digital” world just as in the “analog” world – has a central impact on the structure of political discourse and political communication. Here the type and nature of the media are enormously significant.
For Habermas (1990) the (classic) mass media were an obstacle to political discourse above all because of their asymmetrical communication structure, since these media do not enable deliberation and negotiation of political content on equal terms between citizens and political players. At first glance social media appear to have strongly relativized this asymmetry through many-to-many communication and multimodal forms of interaction. After all, today any one of us can disseminate any content we like as broadly as desired, reaching a wide variety of sub-audiences. This remains true, although only under the conditions dictated by commercial platform providers. Depending on the business model, content is specifically evaluated and commercialized. The potential consequences were starkly demonstrated by the scandal involving Cambridge Analytica and Facebook (cf. Ienca, 2018; HOC, 2019). Hopes for broad democratization have therefore meanwhile given way to an increasing sense of worry about attempts to influence opinion by established political players who use their power to usurp interpretive authority over digitally disseminated political content.
Thus we can observe a structural change in politics and in political mass communication. Social media enable greater reach in disseminating content, and also greater personalization. Social media and their digital platforms can therefore be understood as quasi-personalized mass communication media (Strauß, 2019, 99). Today politics primarily uses them to disseminate political messages (or advertising) cheaply and effectively. Former US president Barack Obama provided an example of how this can be done with a certain degree of objectivity. His successor Donald Trump, however, demonstrates the dark side of unfiltered political communication on an almost daily basis.
Trump’s offensive, emotionally charged style of Twitter politics expresses nothing less than a fundamental mistrust in the intermediary entities of liberal democracy. An authentic, undisguised relationship between the ruler and the people is feigned. The populist greets the institutions of quality media and the parliament with fundamental suspicion, believing they have been “hijacked” by the “elite”. The populist wants to use social media to let “the soul of the people” be heard. This appears to work very well, in the USA just as in Italy and India, where Prime Minister Modi proudly points to his 40 million followers on Facebook.
Political content is pushed past the classic media, i.e. this content is subject neither to any kind of journalistic scrutiny nor to the need for rationalization which goes along with high-quality media discourse. This undermines the oversight function of what is often referred to as the “fourth pillar of democracy”. This trend towards directness, towards for the most part unmoderated participation, is not good for political discourse. Brexit has shown us what happens when democracy is lived out in the absence of intermediation, without being accompanied by reasonably balanced reporting and tempering discussion forums. Here too targeted attempts at influencing opinion have taken place using social media (in part with reference to Cambridge Analytica) whose impact has as yet not been completely investigated (HOC, 2019). But even without targeted attempts at manipulation, the same applies: When critical media and the public, with their function as objectivizing filters, are bypassed, liberal democracy quickly becomes a playing field for demagogues. Democracy ultimately lives on the moderating effect of its intermediary elements. Parliaments, parties and the established media ensure that the common will develops towards the “middle”, i.e. towards a stance capable of admitting compromise. It is the task of classic media and journalism to use meticulous criticism to objectivize content to the extent that objectively oriented politics between political players and the general population is possible based on a framework of reason and rationality (logos). In recent years digitalization seems to have contributed to the destabilization of this basic framework, playing precisely into the hands of populism.
Sociology was quick to recognize the dangers connected with a politics of directness, of direct authenticity. Thus Sennett (1983) argued that political action in our society is increasingly being evaluated according to whether or not it is authentic and whether or not it offers an emotional added value to the individual, but not in terms of the results of such action for society as a whole. Instead of political programs, interest is limited to the personality of the politician and how he or she “performs”. This results in a “politics of big emotions”: Instead of rational deliberation and verifiable argumentation, the individual avowal, the emotional appeal and expressive actions are the dominant idiom. We recall: When the first cases of BSE became known in the United States in 2003, President George W. Bush dramatically ate a piece of domestic beef in the public eye. Other politicians have gone on camera to eat genetically modified foods or drink water from polluted sources. Pathos replaces logos.
This shows that political communication according to the logic of pathos is not really new. However, in the Digital Age the display of allegedly authentic feelings and emotional substance has become more prominent in the political sector, a tendency reinforced by social media. This is also in part due to the platform architectures and the logic of digital business models dominated by economies of attention and behavior. The degree to which content is networked, determined by the number of followers, clicks, shares, likes and so on, has become a kind of currency and defines what is considered relevant. This in turn determines the market value of content. Social and political content is thus decontextualized and commercialized. Content itself is pushed to the background. Sheer mass trumps substance, and this logic is visibly affecting the discourse of realpolitik: Parties have in part either developed or regressed to become political movements, since they can use their digital marketing strategies to attract large numbers of followers and in doing so generate mass. The question of whether or not these masses really exist is seldom addressed.
Furthermore, the classic media also frequently report on the social media activities of politicians and replicate – sometimes involuntarily – the political spins behind these activities. As a result the boundaries between digital political PR and political content are blurred. One fundamental reason is the fact that although digital platforms were never created for use as political communication media, today they indeed function as such. It would be unacceptable to hold new media alone responsible for the current problems faced by democracy. But it would be even more myopic to expect a renewal of democracy from digital technologies simply because they are more interactive than their technical predecessors. Use of technology is ultimately always connected with the interests of power. It therefore appears to be all the more important to take a more precise look at those strategic players who increasingly attempt to instrumentalize digital media in the interest of obtaining political power instead of simply polemicizing against digital platforms.
As our analysis shows, the Digital Transformation is closely connected to a structural transformation in politics and forms of strategic political communication. Fundamental components in this structural transformation include lack of intermediation, (pseudo-)authenticity and a strong emotionalization of political messages that obstructs the willingness to reach compromise with political opponents and increases political polarization. Social media and related digitalization phenomena are in no way the responsible sources of this development, but their underlying logic and dynamics are an essential factor in reinforcing these trends.
This reinforcing effect is due above all to a logic of reduction: Digital platforms evaluate user content using predefined and purely quantitative relevance criteria (e.g. based on likes) which automatically determine what is to be considered relevant on the platform (e.g. with Facebook’s EdgeRank algorithm2). The higher the rating, the more likely it is that the content in question will be recommended to others (sometimes automatically). In other words: Content does not necessarily go viral because it has substantive, objective or political relevance, but rather because it achieves the highest degree of networking and thus creates the impression of mass. This reduces the number of actual messages. A simple formula often applies here: The shorter and more polarizing the message, the easier it is to disseminate.
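This reductive ranking logic can be made concrete with a small sketch. The scoring function, its weights and the field names below are purely hypothetical illustrations of a quantitative, engagement-only relevance criterion; they are not Facebook’s actual EdgeRank, whose exact form is proprietary.

```python
# Illustrative sketch of a purely quantitative relevance score.
# All weights and names here are hypothetical, loosely modeled on the
# affinity/weight/time-decay structure publicly attributed to EdgeRank.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    hours_old: float

def relevance(post: Post) -> float:
    """Score a post by engagement counts and recency only --
    the substance of the text never enters the calculation."""
    engagement = post.likes + 2 * post.comments + 3 * post.shares
    decay = 1.0 / (1.0 + post.hours_old)  # newer posts score higher
    return engagement * decay

posts = [
    Post("nuanced policy analysis", likes=40, shares=2, comments=5, hours_old=10.0),
    Post("short polarizing slogan", likes=400, shares=90, comments=120, hours_old=2.0),
]
# Sort the "feed" by the quantitative score alone.
feed = sorted(posts, key=relevance, reverse=True)
```

On any such metric, the shorter, more polarizing post wins purely on engagement counts and recency; nothing in the score inspects what the text actually says.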
Of course civil-societal players benefit from the opportunity to spread content with a low threshold too. However: The civil society also includes the Pegida and Identitarian movements and the Ku Klux Klan. And the oversimplified and extremely reductive content from these and similar movements is rarely suited to initiate a differentiated debate. Thus to a certain extent the medium controls the content of the message. This favors more populistic and anti-democratic forces over democratic politics which, as described, thrive on level-headed reasoning, moral reconciliation, persistent self-challenge, the shared volition to differentiate and on deceleration and rationalization.
As we have said: Social media are not the source of the described problems of reasonable politics. One could even posit that these media only became plausible carriers of political communication once a certain political style (pathos, directness, emotions) had proven successful on them. These media certainly amplify the problem described. Thus it is in any case worthwhile to look more closely at the connection between a structural transformation of the political sector and the Digital Transformation. It goes without saying that this is not intended to discredit digitalization or social media: The continuing contributions of social media to the mobilization of public spirit and civil-societal commitment are indisputable.
This has become particularly evident during the Corona crisis in the spring of 2020. The political emergency measures taken this spring in many European countries, including strict border checks and restrictions on movement, created a paradoxical demand: Citizens could and should show their solidarity by avoiding social contact in particular. In the midst of this widespread social isolation, social media became the focal point of an initiative, started in Vienna, to organize neighborhood support groups. Campaigns were launched to network freelancers who were unavoidably out of work, to create information exchanges on home office operations and to provide advice to those impacted by quarantine: An international exchange using social networks. Twitter, otherwise known for the hate and malice often found in its content, became a “solidarity exchange” (Weiß, 2020).
Social media do however create problems for democracy and political discourse whenever digital marketing and political PR can be spread on digital platforms without any limitations whatsoever. Digital platforms in general and social media in particular threaten to monopolize power as they ultimately undermine the controlling function of quality journalism and the culture of deliberation maintained by classic media. This is made possible in no small part by regulatory gaps: A regulatory vacuum is particularly obvious when it comes to advertising by political parties and empowered political stakeholders. Politics can currently disseminate advertising in digital channels by and large without regulation, spreading targeted individual messages, enlisting new followers, etc. in order to generate mass and political momentum. Stronger regulation of political advertising online should be considered in order to prevent an increase in unfiltered political PR spread by digital media. There is good reason for the strict regulation of political advertising on radio, TV and in print media. Politics and media must be called on to jointly find a way to comply with basic ethical and journalistic principles in order to keep the cornerstones of democracy stable.
2https://en.wikipedia.org/wiki/EdgeRank.
Aichholzer, G. / Strauß, S. (2016): Electronic Participation in Europe. In: R. Lindner, G. Aichholzer und L. Hennen (Ed.): Electronic Democracy in Europe. Prospects and Challenges of E-Publics, E-Participation and E-Voting; Cham et al.: Springer, pp. 55–132.
Barberi, A. / Swertz, S. (2017): Strukturwandel der Öffentlichkeit 3.0 mit allen Updates? In: U. Binder und J. Oelkers (Ed.): Der neue Strukturwandel von Öffentlichkeit. Reflexionen in pädagogischer Perspektive. Weinheim: Beltz Juventa, pp. 151–179.
Benkirane, R. (2012): The Alchemy of Revolution: The role of social networks and new media in the Arab Spring. GCSP Policy Paper, No. 2012/7, Geneva Center for Security Policy.
Habermas, J. (1990): Strukturwandel der Öffentlichkeit. Untersuchungen zu einer Kategorie der bürgerlichen Gesellschaft. Frankfurt am Main: Suhrkamp (Orig. 1962).
HOC – House of Commons (2019): Disinformation and ›fake news‹: Final Report. Eighth Report of Session 2017–19, HC 1791, 18 February, https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf.
Ienca, M. (2018): Cambridge Analytica and Online Manipulation. Scientific American, March 30, https://blogs.scientificamerican.com/observations/cambridge-analytica-and-online-manipulation/.
Kelsen, H. (2018): Vom Wesen und Wert der Demokratie. Stuttgart: Reclam (Orig. 1929).
Lüter, A. / Urich, K. (2010): Editorial. Strukturwandel der Öffentlichkeit 2.0. Forschungsjournal Soziale Bewegungen (23)3, pp. 3–7.
Mill, J. S. (1974): Über die Freiheit. Stuttgart: Reclam (engl. Orig. 1859).
Pasquale, F. (2015): The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.
Schrape, J.-F. (2015): Social Media, Massenmedien und Öffentlichkeit – eine soziologische Einordnung. In: K. Imhof, R. Blum, H. Bonfadelli, O. Jarren und V. Wyss (Ed.): Demokratisierung durch Social Media? Mediensymposium 2012. Wiesbaden: Springer VS, pp. 199–212.
Sennett, R. (1983): Verfall und Ende des öffentlichen Lebens. Die Tyrannei der Intimität. Frankfurt am Main: Fischer (amerik. Orig. 1977).
Staab, P. (2019): Digitaler Kapitalismus: Markt und Herrschaft in der Ökonomie der Unknappheit. Berlin: Suhrkamp.
Strauß, S. (2019): Privacy and Identity in a Networked Society. Refining Privacy Impact Assessment. Abingdon, Oxon: Routledge.
Sunstein, C. R. (2015): Partyism. In: University of Chicago Legal Forum, Vol. 2015, Article 2. https://chicagounbound.uchicago.edu/uclf/vol2015/iss1/2.
Weiß, A. (2020): Hilfe statt Hass und Hetze. In: Frankfurter Allgemeine Zeitung, 16. März 2020, p. 7.
Zuboff, S. (2018): Das Zeitalter des Überwachungskapitalismus. Frankfurt/New York: Campus.
Peter Reichl & Harald Welzer
“Listen and I will tell you of a wondrous land, a land many would abandon home for, if they only knew where it lay. The houses are roofed with pancakes and the fish swim on the surface of the water, already baked or poached. And anyone who is too comfortable needs only call out: Alexa! – then the fish come up onto the dry land and jump right into your hand. And you can believe me: The birds fly about in the air already deep-fried! And those overtaxed by the effort of reaching up to grab one in flight need only bid Uber Eats to guide the bird straight into the waiting mouth. And the money grows on trees like Bitcoins. All one needs to do is shake the tree, glean out the best of what falls and leave the rest where it lies. In this country there are also great forests where the most beautiful of garments grow on the trees, in all colors, black, green, yellow, blue, red. Should you need new garb, simply stroll into the forest and eBay will pluck it for you, or Amazon will knock it off the branch. Should your wife no longer be young and pretty enough, simply exchange her on Tinder for a younger, more splendid spouse; the older and nastier ones are sent to the Photoshop. The citizens who can best bait and spoof people on YouTube receive a like, and the greatest of fabricators is certain to be rewarded with followers. In our land here some lie, prevaricate and fib without end, with nothing to show for their toil; but lies on Twitter and Facebook are seen as the highest form of art. And fun and games abound there; he whose fortune has abandoned him may try his luck there on win2day. Those of unsteady aim, who miss the mark here, will be crack shots there in Doom and Counter-Strike. The benighted souls who can do nothing but eat, sleep, drink, dance and play will there be anointed as influencers. And he who dubs the general right to vote the laziest, foulest and most good-for-nothing idea there is… he is made king of the entire realm and enjoys great wealth. 
Now you know the ways and wonders of Cockaigne. And those who would journey there and know not the way, they need only consult the wise Google Maps …”3
The year is 1845, a mere ten years after the inauguration of the first railroad segment in Germany, between Nuremberg and the town of Fürth, as Ludwig Bechstein in his “German Fairy Tales” renders such an illustrative portrait of the technological future that it is enough – as we have just done – to simply replace a few words with modern vocabulary, and we feel immediately transported to the year 2020. And this is just a tiny fraction of what the Digital Transformation promises us, day in, day out: That society is becoming more democratic, access to knowledge and education is available more comprehensively and at lower thresholds, social networks provide platforms for transparent and open discourse, inclusion and mobility are being promoted, scarce resources can finally be used efficiently and on top of that freedom and perpetual economic growth are manifesting – and those are only some of the blessings and promises associated with the establishment of the internet as a new fundamental economic and social infrastructure of the 21st century. Put concisely, the Digital Transformation tells us we appear to have made it – the ancient dream of mankind has come true, welcome to the digital land of plenty.
But somehow almost every glance at the newspaper paints an entirely different picture. We see a heated discussion unfolding about what we actually find in this supposed Cockaigne: authoritarian regimes and Twitter democracies, fake news and hate speech, one hacker attack after the other, a growing digital divide and an increasingly torn society, not to mention issues like the NSA affair4 and surveillance capitalism. And behind all these phenomena we find even more fundamental questions: Is it really true that the Digital Transformation is simply occurring, or is it much more the case that we are causing it? Are there values behind all these developments, and if so, what are they? Do we have the technologies we need, and do we need the technologies we have? Has anyone ever even asked us? And who will ultimately take on responsibility for what is already here and for what we’ll be facing in the future?
The present essay addresses several thesis-like considerations from the combined perspectives of two disciplines which take on central significance when it comes to outlining the political and societal impacts of the Digital Transformation. On the one hand we have Computer Science, the origin and fundamental mold of all things digital, which is however conspicuous by its frequent absence from public discourse. After all, the Digital Transformation is not simply there: We have created it, and computer scientists in particular are shaping this transformation. Computer scientists will therefore have to become more involved in the societal discussion, as they bear rather more responsibility than they may like to admit.
On the other hand we have the socio-psychological perspective, a scientific view of how people attempt to make sense of what is happening in their world, to draw conclusions from it and to make decisions based on it. All technological developments with economic significance and widespread implementation change human perceptions and self-images, and frequently alter human opinions about who the others are and how to live together with them. Although the social sciences have not yet focused their attention on digitalization as an “invasive technology” (Zuboff, 2018), they eventually will – as usual with some delay: The impacts on labor and labor markets, business models, infrastructures and the fabric of society are on the whole too deep to be permanently ignored. The first contributions to theory formation (Nassehi, 2019; Baecker, 2018, among others) have already appeared.
Uniting these two perspectives will provide us with a fresh, unadulterated view of how an honest dialog between Computer Science and society can be made possible when Computer Science accepts its role as an increasingly political discipline, and takes on the associated responsibility. We will see that ethical discourse, as promising as the first occasional contributions may be, is not fully adequate. Indeed we have to focus on the entire human being, even more: We need a comprehensive approach, a digital ecology.
First let us look back on the quarter century which has passed since the arrival of the internet and the world wide web. For quite a long time the brave new world of “Dig-Italy” was associated with something light, playful, amorphous. We were happy about what was apparently entering our lives so smoothly and perfectly, while complacently accepting the fact that not everything always worked right from the start – it’s easy to forgive things represented by images as heavenly as “Clouds”. The internet had so many useful things to offer that we completely overlooked the fact that much of it might turn out to be only an interim “collateral benefit”, valid only until the machines reconquered “their” networks (Hofstetter, 2014). It was not until Cambridge Analytica5 turned its digital attention to what had until then been a rather democratically organized election process, not until China began implementing its Social Scoring6 plans, not until the West began to notice the potential impacts of a ubiquitous “Internet of Things” (IoT) on the future working world and not until the development of “Artificial Intelligence” (AI) towards a possibly apocalyptic singularity turned out to be at least fundamentally conceivable did it become clear that the “kid-glove treatment” which Information and Communications Technologies (ICT) had enjoyed until then would soon be a thing of the past.
And today we stand at a crossroads: The Digital Transformation of business, politics and society has serious consequences. And on top of that comes something which first emerged in an apparently very different context, the current climate discussion: The “feeling of shock when everything suddenly becomes very simple,” as Maximilian Probst put it when he realized that climate change is actually an intellectual insult to modern people (Probst, 2019). In reality the issue is simple: By damaging the climate system with its emissions, the modern economic metabolism of growth will render itself untenable in the medium term. But ever since Kant and Hegel we have been trained in the art of deliberative dialectics and finding compromise between one side and the other, the power of democratic practice. This makes it difficult for us to accept the fact that the world could still contain such simple truths. Of course simplicity vanishes in the very moment that we leave the realm of measurement and calculation, of scenarios and models, and move on to the world of politics and society. But this of course alienates the representatives of the exact sciences. And the case of the Digital Transformation is the same as that of climate change: The phenomenon is clearly evident; what remains unclear is what it means for it to be clearly evident.
An example: In the well-established technology magazine Wired David Baker formulated his theory in late 2017 that the internet was broken and urgently in need of repair (Baker, 2018). Those who took action and participated in the ensuing discussion included Tim Berners-Lee, Jaron Lanier and Vint Cerf, among others, celebrated names from the days of the internet’s founding. This was probably the first time the technology community had so clearly enumerated the many hopes and promises from the early days of the WWW that simply never came to pass.
But, as we argue, this is not due to a broken internet. Instead, the internet is simply what it is: It was not made for secure communication and was hardly conceived as a fundamental economic infrastructure, yet it nevertheless offers wonderful opportunities that we naturally should use and enjoy. Maybe we should compare it as a medium to the air: We don’t complain that air is not a “secure” medium when we talk to our seatmate in a train and realize that a third passenger is listening along attentively. Instead, in this context we know about the limitations of interception-proof communication and behave accordingly. Only in the case of the net do we seem to believe in an unlimited level of security, instead of simply accepting that with appropriate measures the net could at best be made a little bit more secure, but that we can never be really sure – and that in some applications there are worlds of difference between security and “security”.
Thus it is important to become aware of digital technology’s limitations: Even with all the contextual complexity of the interrelationships of today’s society, sometimes these relationships are indeed very simple. Digitalization in particular provides a large number of other examples, if we stop to think about its dependence on an uninterrupted supply of electricity (Welzer, 2017). There can be nothing less useful during a blackout than an iPad or an Alexa with empty batteries.
And we’re not talking about hostility towards technology or even Luddism. The point is simply that we should no longer close our eyes benevolently (or in vicarious embarrassment) when yet another gigantic data security breach is announced, when yet another video conference system collapses, when yet another attempt to use a video projector in the lecture hall fails. We are much more in need of a new and clear point of view when it comes to the uses and limitations of digital technologies, so that we can emancipate ourselves from the currently prevalent narrative of being overwhelmed (Pörksen, 2019).
We therefore call urgently for more “discourse hygiene”, to borrow an expression coined by Maja Göpel, so that we can reorient our imaginative, legitimative and communicative spaces (cf. Göpel, 2020). We are interested neither in alarmism nor in the adoration of digital technologies; we are interested in their societal utilization, which should be defined according to criteria essential to a liberal constitutional democracy: ensuring privacy as an incontrovertible condition for democracy, preventing harm to its citizens, contributing to services in the public interest and guaranteeing egalitarian access. Even these few criteria outline a task which has as yet hardly been systematically debated. Instead, digitalization appears to be implemented without regard for interests, simply to “happen” – a perception which simply tunes out the eminently political character of its implementation.
Let us take for example the expression, heard in the meantime almost every day, that proclaims “Data is the new oil!”, this epitome of supple, smooth and yet concentrated energy that appears to inexhaustibly (even if we of course know better) flow up from the ground to drive and lubricate our economic lives with unprecedented force.
But comparing data to oil is nonsense. Shoshana Zuboff has demonstrated in an impressive manner that data are not a natural
