Algorithms are not to be regarded as a merely technical structure but as a social phenomenon – they embed themselves, currently still very subtly, into our political and social system. Algorithms shape human behavior on various levels: they influence not only the aesthetic reception of the world but also the well-being and social interaction of their users. They act and intervene in a political and social context. As algorithms influence individual behavior in these social and political situations, their power should be the subject of critical discourse – or even provoke active disobedience and the need for appropriate tools and methods with which algorithmic power can be broken.
Page count: 611
Year of publication: 2022
Sven Quadflieg Klaus Neuburg Simon Nestler (eds.)
(Dis)Obedience in Digital Societies
Perspectives on the Power of Algorithms and Data
This publication was made available via Open Access within the framework of the funding project 16TOA002 with funds from the German Federal Ministry of
Education and Research.
Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche
Nationalbibliografie; detailed bibliographic data are available on the Internet at
http://dnb.d-nb.de
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 (BY-NC-ND) license, which means that the text may be used for non-commercial purposes, provided credit is given to the author. For details go to http://creativecommons.org/licenses/by-nc-nd/4.0/. To create an adaptation, translation, or derivative of the original work and for commercial use, further permission is required and can be obtained by contacting [email protected]. Creative Commons license terms for re-use do not apply to any content (such as graphs, figures, photos, excerpts, etc.) not original to the Open Access publication, and further permission may be required from the rights holder. The obligation to research and clear permission lies solely with the party re-using the material.
© 2022 transcript Verlag, Bielefeld
Cover layout: Klaus Neuburg, Sven Quadflieg
Cover illustration: Klaus Neuburg, Sven Quadflieg
Typeset: Klaus Neuburg, Sven Quadflieg
Printed by Majuskel Medienproduktion GmbH, Wetzlar
Print-ISBN 978-3-8376-5763-0
PDF-ISBN 978-3-8394-5763-4
EPUB-ISBN 978-3-7328-5763-0
https://doi.org/10.14361/9783839457634
ISSN of series: 2702-8852
eISSN of series: 2702-8860
Printed on permanent acid-free text paper.
Sven Quadflieg / Klaus Neuburg / Simon Nestler: (Dis)obeying Algorithms? Introductory Thoughts on the Power of Algorithms and the Possible Necessity of Resisting it
Florian Arnold: The Dialectics of Dis-Obedience. Notes from the Crystal Palace
Johanna Mellentin / Francesca Schmidt: Surveillance, Artificial Intelligence and Power
Fabian Weiss: Embodied Algorithmic Optimization. How Our Bodies are Becoming a Product of Code
Carolin Höfler: The Lock Down City and the Utopian Program of Open Interfaces
Moritz Ahlert: Hacking Google Maps
Harald Trapp / Robert Thum: The Algorithmic Construction of Space
Christina Hecht: Torn Between Autonomy and Algorithmic Management. (Dis)Obedience of Solo Self-Employed Working via Digital Platforms
Victoria Guijarro Santos: A Crack in the Algorithm’s Facade. A Fundamental Rights Perspective on “Efficiency” and “Neutrality” Narratives of Algorithms
Katja Dill: When Search Engines Discriminate. The Posthuman Mimesis of Gender Bias
Fabian Lütz: Discrimination by Correlation. Towards Eliminating Algorithmic Biases and Achieving Gender Equality
Matthias Pfeffer: The Power of Algorithms and the Structural Transformation of the Digital Public
Lotte Houwing: Reclaim your Face and the Streets. Why Facial Recognition, and Other Biometric Surveillance Technology in Public Spaces, Should be Banned
Bernd Friedrich Schon: Identity 5.0: How to Fight Algorithms Online (Fast). Heuristic Compressions of Personality Concepts (Dis)Obedient to Algorithmic Power, from Film, Television and a Cult Classic Novel
About the Authors
We live in a time when terms such as digital transformation are constantly being used and in which—as the series in which this book is published also indicates—people often talk about our digital society. Data and algorithms, it seems, have a great influence on social developments and they are often given so much importance that they are described as shaping an era. This is reflected in the title of this book, which also addresses the concept of a digital society and refers to algorithms and data. At the same time—it should be noted at the outset—the technical aspect of this phenomenon is not of significant importance in the following pages. Of course, for reading the subsequent contributions it would be useful to understand how algorithms work and why data has such a great influence on their functionality—but it will not be explained in depth at any point in the book. This is because, instead of technical details, this book only considers the impact of this technical structure on those who consciously or unconsciously interact with it: The humans and the society they form.
In the context of digital transformation, people and computers are entering into a fascinating symbiosis, something which Joseph Carl Robnett Licklider predicted as early as the 1960s, when he said that growing computing capacities would lead to new ways of using computers. The very notion of computer use is exciting here, as we probably associate it with false images; after all, computers are ubiquitously integrated into our physical environment and not even always recognizable as computers. This new kind of computer use is all-encompassing: It affects our private lives as well as major societal developments—computers and the algorithms that work within them have an impact on our lives whose dimension is almost impossible to grasp.
This influence naturally takes up a considerable amount of space in this book—however, the title already indicates that we are most interested in strategies by which individuals, groups, or even civil society in general can break this influence through forms of disobedience. In the title of the book and its implied dichotomy, we want to illustrate the complexity of the topic: Accepting algorithms is automatically a form of obedience—their influence can only be actively resisted—and every form of obedience should always provoke a debate about disobedience and about mustering the power to question it. After all, many of the achievements of today’s society are based on active disobedience or resistance.1 The problem, as Howard Zinn provocatively put it, is not disobedience, but obedience.2
Disobedience, or civil disobedience, often has a moral level: For the most part, acts of civil disobedience are called for to remedy an existing injustice or to remove oneself as an active person from a political or social process and thus reduce one’s own guilt. For example, Henry David Thoreau coined the term civil disobedience by arguing in his 1849 essay that one should bow to the law of the state only if it is consistent with one’s own moral values. Thoreau argued that as a citizen he did not want to be complicit in the war against Mexico and the slavery that followed3, mitigating his guilt by refusing to pay taxes. Thoreau thus not only shaped the concept of civil disobedience, but also inspired many subsequent civil rights movements and civil rights activists (including Martin Luther King4)—despite his libertarian stance. Thoreau’s theoretical statements are based on his practical actions and experiences—at one point he did in fact refuse a tax payment and was imprisoned for a short period of time. Thus, Thoreau can already be counted as one of numerous examples of applied disobedience (whether his act was actually a justified example of civil disobedience is still a matter of debate5). Prominent events followed on different scales: Numerous major political transformations of the 20th and 21st centuries were based on strategies of civil disobedience—for example, the Indian independence movement—and various resistance and protest movements of different kinds—from Solidarność to Otpor!—adopted strategies of civil disobedience to protest against governments (at this point we should also refer to Gene Sharp, whose guidance on nonviolent resistance inspired protest movements all over the world6).
The moral level of civil disobedience is an important point here that is always a focus of discussion—for example, in John Rawls7 or Hannah Arendt8—and certainly evokes intriguing questions in the context of this book: What legitimizes a form of resistance?9 What can be described as moral justification or even as duty?10 And what are the limits of this justification?11
The questions about legitimizing possible acts of resistance (the dialectic of [dis]obedience is discussed in Florian Arnold’s contribution right at the beginning of this book) take on a new quality in times of the digital transformation mentioned at the beginning. This is due to the lack of transparency (which will be discussed later) and requires its own nuanced consideration. In this book, very different perspectives are brought together in order to open up a broad discursive field. This approach is illustrated in the subtitle: Perspectives on the Power of Algorithms and Data. The very first word is itself up for debate, as it is not altogether precise: What appears in the following pages represents only an academic, Eurocentric subset of possible perspectives on a topic that is relevant almost everywhere, but is of course evaluated differently by individuals in other cultural, geographical, or social contexts. Algorithmic power is a global phenomenon and, depending on context and perspective, a global problem, and this book can, of course, only examine a small corner of it. Algorithms are often discriminatory—particularly through the data with which they are trained—and our view of the phenomenon comes from a privileged position in that we are not usually victims of that discrimination. While some contributions at least address discrimination in the context of sexism and perhaps classism (though here not from the perspective of the victim), there is a lack of consideration of discrimination, at least in the context of racism12, so the perspectives mentioned here are fairly limited. This is particularly relevant because the book aims to demonstrate that algorithmic power does not, of course, act uniformly across society. Rather, it acts as an amplifier of the existing power structure—it repeatedly displays aspects of group-focused enmity when it makes sexist or racist decisions, for example.
But what exactly does power structure mean in the context of this book? As already indicated, algorithms shape human behavior on an individual and societal scale: They influence not only our perception of the world, but also the well-being and social interaction of us as their users. They act in the political context, create social realities, and intervene in concrete social situations—for example, in public spaces. Algorithms are thus not to be regarded as a purely technical structure, but as a social phenomenon. They embed themselves—currently still very subtly and with little transparency—in our political and social system: In digitized contexts, they determine what we read, what we consume, whom we perceive, whom we ignore, what we like, what we hate, whom we love: in short, how we live (and perhaps even how we die—after all, weapons also exist that autonomously kill people based on algorithms13). These effects of algorithms on our lives have been the subject of much discussion: Lev Manovich, for example, outlines the impact of algorithms on our cultural development and their influence on our aesthetic perception by suggesting what we should consume.14 And without wanting to judge, we obey already because we let them recommend things to us by giving away our data and accepting suggestions based on it.15 It is difficult to attribute agency to algorithms, since they are first and foremost merely mathematical representations of facts and processes.16 Nevertheless, they can be seen as a power structure, even if, strictly speaking, it is not algorithms that exercise power—rather, power is exercised with them.17 We believe, however, a legitimate form of simplification is valid here: Since social actions, following Bruno Latour, are ultimately always embedded in a network of things, and artifacts are always embedded in a network of actions18, by power structure in this context we mean the influence that algorithms exert on individual and social life19. 
This immediately leads to a major problem: algorithms often evade this discussion. It is precisely in the context of the algorithmic black box that the question of power urgently needs to be discussed—or, to be more precise, the question of responsibility. Andreas Matthias describes this as a “responsibility gap”: We find ourselves in a situation in which autonomous machines based on algorithms make decisions whose decision-making path (also for future decisions) disappears into the deep black of the black box. This raises not only the question of who exercises power at this point, but also the moral question of who bears the responsibility for this power.20
A list of examples of this algorithmic power could almost be endless—our private moments in particular are shaped by it. In a digital and datafied world, algorithms have an influence on our well-being—for example, through social media—and on the way we perceive the world; and it quickly becomes clear that algorithms also have a political dimension. The influence of algorithms on a large scale has been detailed in the context of the 2016 US election campaign21, but the phenomenon can be described much more broadly. For example, Ivana Bartoletti suggests that the rise of populism can also be seen as a partial victory of algorithms over our information society—she links the electoral successes of Viktor Orbán, Jair Bolsonaro, or Rodrigo Duterte to the influence of algorithms.22 It can thus be seen as an irony of history that the advancing intelligence of our technology, of all things, also seems to contribute to the political regression prevailing in some countries. Bartoletti concludes that AI has the potential to fundamentally transform our society23 and links this to oppression: “And if we can understand how AI is related to oppression, then we can understand how to resist it.”24
While discussing algorithms, AI is a fascinating term, but it is deliberately omitted from the title of the book, because we include in our considerations both algorithms developed by humans and algorithms developed by computers. Computers can—without going into the many differentiations of this type of learning—develop algorithms, for example, by the computer itself learning25 with a kind of neural network and a large26 amount of data. In a very simplified way, the human programs the computer’s brain and the computer programs—based on data—a suitable algorithm. In the first case, people determine the behavior of the computer by programming the algorithm; in the second case, they determine this behavior primarily by selecting the data.27 However, as exciting as the exact functioning of artificial intelligence may be, there are few concrete implications that can be derived from it for the discourse in the context of this book. This book, as mentioned, is concerned neither with how algorithms work nor with how they learn—but with what power structures and dependencies arise through algorithms and data in our everyday lives and in our society. The tension created by algorithmic power arises primarily from its interface with society: What is of interest to us is not the structure of algorithms, but the interaction with them or the problems created by their existence. It is not important how an algorithm is implemented, but how it acts—and above all, what the result of its existence is. Moreover, since users usually have no knowledge at all about which parts of an algorithm come from a programmer and which parts have been developed with the help of a neural network, dwelling on this distinction would only distort our focus: that discussion is on the one hand too technical and on the other too biased towards the perspective of computer science.
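The contrast between the two cases described above can be made concrete with a deliberately simplified sketch. This is our own illustration, not taken from the book; the spam-filter scenario and all names and examples in it are invented. In the first case a human writes the decision rule by hand; in the second, the human merely supplies examples, and the rule itself is derived from the data.

```python
from collections import Counter

# Case 1: a human writes the decision rule directly; the behavior is
# fully determined by the programmed algorithm.
def handwritten_spam_filter(message):
    return "free money" in message.lower()

# Case 2: the human only supplies labeled examples; the rule itself
# (here, naively, the most frequent word among spam examples) is
# derived from the data. The behavior is determined by the data chosen.
def learn_spam_word(examples):
    words = Counter(
        word
        for message, is_spam in examples
        if is_spam
        for word in message.lower().split()
    )
    return words.most_common(1)[0][0]

examples = [
    ("win free money now", True),
    ("free money inside", True),
    ("meeting at noon", False),
]
learned_word = learn_spam_word(examples)  # "free" or "money", depending on ties

def learned_spam_filter(message):
    return learned_word in message.lower()
```

Both filters behave similarly on these toy messages, but responsibility is distributed differently: in the first case a programmer chose the rule, in the second the rule emerged from whatever examples happened to be supplied.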
Since many of the currently available books on the topic already give a lot of space to the computer science perspective, we deliberately decided not to put this perspective at the center of our discourse. The following contributions thus show perspectives from very different disciplines28—and our desire for multi-perspectivity at this point outweighs the technical view of how algorithms work.29
The perception of such algorithms is always also characterized by the lack of transparency, because it is not always communicated when and why algorithms are used. It is also usually impossible to work out how and why an algorithm makes a decision, which reinforces the aforementioned problem of responsibility and the general moral dimension. At the same time, it is interesting to note that there is a relatively high level of trust in algorithms30—which seems absurd in view of their opaque decision paths and the countless reports of algorithm failures. We perceive algorithms as cold, mechanical, logical, deterministic systems that provide the only correct answer after fully considering all possible options. In the end, the decisive thing is not how an algorithm arrives at its results: What is important is that algorithms are quite prone to error and sometimes simply want to make statements about the future based on bad data31 from the past (which makes them sexist or racist, for example) and that humans—as Hannah Fry, among others, explains in detail—often do not question these results32. Indeed, the question of the function and precision of algorithmic decision-making leads to a very fundamental question: Why do we obey algorithms and computers at all?33 Even filter bubbles generated by algorithms are now perceived by certain political movements to be more objective than content curated by journalists.34
In the early days of computer technology, when, for example, Alan Turing defined his abstract Turing machine, the computer behaved like a human, though ideally infallible, mathematician. If the computer was wrong, this could be traced back to a programming error or to a technical defect. But under conditions characterized by uncertain and incomplete information as well as by scarce time resources, two factors are often decisive: On the one hand, a fast approximation to the correct solution is often more important than the time-consuming, exact calculation of a solution; on the other hand, thinking in terms of correct and incorrect solutions is in itself inappropriate. In addition, there are no exact questions in communication with people or in the search for information, so the computer cannot provide exact answers in the mathematical sense either. To conclude this brief discourse on algorithms, the lack of transparency still needs to be addressed, because this aspect is often perceived as a central deficit of algorithms generated by artificial intelligences. But here, too, our look at social implications, our shift from white box to black box, shows that this aspect applies to all algorithms: We do not know the code or the exact workings of the algorithms that dominate our everyday lives.35
This brings us to the last term in the subtitle: Data. A joke that circulated on Twitter a long time ago went like this: “What idiot called it ‘machine learning’ instead of ‘bias automation’?”36 Learning processes such as machine learning are based on data, from which specific behaviors are then extracted—the quality of the learning thus depends crucially on the quality of the data used. Biases in computer systems are a complex issue—and they are not only relevant in the context of AI. Batya Friedman and Helen Nissenbaum, for example, distinguished between different types of bias: First, there are preexisting biases, i.e. existing biases that are adopted consciously or unconsciously.37 Friedman and Nissenbaum also refer to technical biases—biases that arise, for example, from technical constraints or technical considerations38—and emergent biases, which can arise as a “result of changing societal knowledge, population, or cultural values” over time.39 Biases in AI are an important topic to keep in focus: Since the data with which the algorithm is trained is rarely neutral40, its actions afterwards are not either—often they only recreate an unjust past41 or a (discriminatory) status quo, thereby perpetuating inequalities42. Algorithmic biases are thus ultimately a data problem: Large amounts of data often reflect an unjust world by reproducing a white male heteronormativity (described in this book in Katja Dill’s contribution).43 This is another reason why racist behavior, for example, can quickly be attributed to algorithms, as Ruha Benjamin44 and Safiya Umoja Noble45, among others, have impressively demonstrated. In the context of machine learning/deep learning, the biggest problem is likely the one outlined above: it is almost impossible to tell whether an algorithm is currently exhibiting discriminatory behavior, not only for users but increasingly also for creators.46
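The “bias automation” quip can be made tangible with a deliberately naive sketch. This is our own illustration with invented numbers, not taken from any contribution in this book: a toy model that “learns” from biased historical hiring decisions does nothing more than automate the discrimination already encoded in the data.

```python
from collections import defaultdict

# Invented, biased historical records: (group, hired).
# Past decisions strongly favored group "m" over group "f".
history = (
    [("m", True)] * 80 + [("m", False)] * 20
    + [("f", True)] * 30 + [("f", False)] * 70
)

def train(records):
    """'Learn' nothing but the historical hiring rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    return {group: hired / total for group, (hired, total) in counts.items()}

def predict(model, group, threshold=0.5):
    """Recommend hiring whenever the learned group rate exceeds the threshold."""
    return model[group] >= threshold

model = train(history)   # {'m': 0.8, 'f': 0.3}
predict(model, "m")      # True
predict(model, "f")      # False: the past bias is now automated
```

Any real machine-learning pipeline is vastly more complex, but the underlying dynamic is the same: biased data in, biased decisions out, now with the appearance of mechanical objectivity.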
At the same time, however, data naturally also constitute their own (new) form of power structure: The control of large volumes of data, or the ability to evaluate complex and rich data structures and to recognize patterns in them, changes established power relationships and supplements (or partially replaces) the means of production as the basis of power and the economy—here we can refer to Karl Marx.47 In the sense of an intangible economy, companies are taking on an enormous role in the global economy on an intangible basis—primarily due to the large quantities of analyzable data. This data generates insights into individual, societal and political behavior48; it reveals invisible connections, makes accurate forecasts—and it can be used to persuade people, influencing behavior in a targeted and large-scale way. At the same time, we seem to be unwilling to acknowledge this. As Timothy Snyder points out, cognitive dissonance is an important issue here—by insisting on being the authors of our own actions, we provide alibis for digital creatures.49
The current situation is thus that algorithms have a major impact on individual and social life—and their ubiquitous dissemination means that they touch on various disciplines in scientific discourse. These different fields of consideration also open up the field of possible strategies of disobedience and allow for a wide-ranging discussion. If we speak of disobedience in this interdisciplinary field of tension, then the vagueness of the term in our context allows for quite different interpretations. It includes the fact that disobedience also refers to the regulation of algorithms, i.e. by sparking a discourse on the possible legal consequences of an algorithmic power structure—also in order to make clear that this power is something that has to be negotiated socially and politically (which is discussed in this book, for example, in the contribution by Johanna Mellentin and Francesca Schmidt and also in Matthias Pfeffer’s contribution). The legal approach to algorithmic power is a topic of particular importance, since it is necessary, among other things, to shed legal light on the black box of algorithms—not least in order to exclude the potential discriminatory treatment of human beings (in the book, this discussion can be found, for example, in the contributions from Fabian Lütz and Victoria Guijarro Santos). Disobedience also involves a general discussion about the use of algorithms, and of course also the individual strategy for evading algorithmic power, especially since algorithms are also used at different scales as instruments of surveillance. The enormous possibilities that arise, for example, for the surveillance of a society—with algorithms tracking people and their movement patterns in public space or in concrete urban spaces—must at least generate a debate about potential disobedience strategies.
This means both that there should be discussion on a societal level about the use of such strategies (as in Lotte Houwing’s contribution) and that people (as, for example, Fabian Weiss, Moritz Ahlert and Bernd Friedrich Schon suggest in their contributions) develop individual strategies for disobedience. Especially in political activism, disobedience to algorithms can be a necessary strategy that makes protests possible in the first place, both in an urban space (here we refer to Carolin Höfler’s contribution) and in the digital space that serves mobilization and exchange. The importance of the concrete urban space for the formation and execution of a political protest has been explained many times, especially in the post-Tahrir years50, which are characterized by the occupation of public squares. Countless protest groups, from Occupy to Movimiento 15M, have used this method, creating an exciting field of discussion. An algorithmic monitoring of the public space is thus also the potential control of spaces for demonstrations, which in a democratic society should at least provoke a critical discourse.
The contributions in this book oscillate between the poles of obedience and disobedience. Again and again, the aim is to show the manifold, complex dependencies between data and algorithms as part of the reality of life (in the context of work, for example, this is done in Christina Hecht’s contribution), in order to then document strategies of disobedience that can build on this (Harald Trapp and Robert Thum do this in their contributions, in which they lay out the dependencies between digital platforms and their data on the one hand and urban space and the work commissioned there on the other). The consideration of disobedience—we emphasize once again—is not meant to serve a general critique of algorithmic processes (whose comprehensive significance for the contemporary world we are of course aware of). Nor is it meant to be understood as a legitimization of structures (the datafied world of work, for example, can also be viewed critically, even if there are possibilities for “resistance”). Furthermore, our discussions are not to be understood as a direct injunction to commit acts of disobedience (we would prefer if this were not necessary), even if we attach great importance to the strategies for it. Rather, the aim is to focus on very fundamental techno-ethical questions and to initiate a comprehensive discourse, at the end of which concrete strategies of disobedience may indeed be posited. In this sense, the book’s contributions span both very concrete ideas and strategies and much more abstract considerations. The book begins on an epistemological level, where the implied problem of disobedience and obedience in the context of algorithms is laid out in various contexts, before it is discussed more empirically and analytically in the middle section of the book.
From these considerations, the normative thoughts or the concrete descriptions of possible disobedience at the end of the book can then be well framed: Despite our academic context, we allow this book to end almost on an activist note. As will become apparent in the following pages, strategies for disobedience can be found on very different levels: In the individual act of resistance as well as in a broad political debate. It can concern very private areas, but equally important social issues. And it ranges from humorous interventions, which perhaps are rather meant to initiate a discussion, to strategies for ensuring survival or freedom in repressive regimes. What unites these different levels is that it is always about questioning and breaking the dominance of technologies. We hope to contribute to a discourse whose importance for a society in the process of digital transformation can hardly be overestimated.
Arendt, Hannah: Ziviler Ungehorsam. In: Id.: In der Gegenwart. Übungen zum politischen Denken II, Edited by Ursula Ludz, München 2000, p. 283–321.
Bartoletti, Ivana: An Artificial Revolution. On Power, Politics and AI, London 2020.
Benjamin, Ruha: Race After Technology: Abolitionist Tools for the New Jim Code, Hoboken 2019.
Braune, Andreas: Zur Einführung: Definitionen, Rechtfertigungen und Funktionen politischen Ungehorsams. In: Id. (Ed.): Ziviler Ungehorsam. Texte von Thoreau bis Occupy, Stuttgart 2017, p. 9–38.
Criado-Perez, Caroline: Invisible Women: Exposing Data Bias in a World Designed for Men, London 2019.
Coeckelbergh, Mark: AI Ethics, Cambridge/London 2020.
Dastin, Jeffrey: Amazon scraps secret AI recruiting tool that showed bias against women. In: Reuters, October 10, 2018, https://www.reuters.com/article/amazon-com-jobs-automation-idINKCN1MK0AH (June 3, 2021).
Friedman, Batya; Nissenbaum, Helen: Bias in computer systems. In: ACM Trans. Inf. Syst. 14, 3, July 1996, p. 330–347. DOI: https://doi.org/10.1145/230538.230561
Fry, Hannah: Hello World. Was Algorithmen können und wie sie unser Leben verändern, Bonn 2019.
Graeber, David: The Democracy Project. A History. A Crisis. A Moment, London 2014.
Gros, Frédéric: Disobey. The Philosophy of Resistance. London/New York 2020.
Kleger, Heinz; Makswitat, Eric: Digitaler Ungehorsam. Wie das Netz den zivilen Ungehorsam verändert. In: FJ SB 4/2014, p. 8–17, http://forschungsjournal.de/node/2654 (July 1, 2021).
Lanier, Jaron: Who owns the future? New York City 2013.
Latour, Bruno: Reassembling the Social. An Introduction to Actor-Network-Theory, Oxford 2012.
Manovich, Lev: AI Aesthetics, Moscow 2018.
Matthias, Andreas: The responsibility gap: Ascribing responsibility for the actions of learning automata. In: Ethics and Information Technology 6, 2004, p. 175–183.
Merlot, Julia: Autonome Waffe könnte Menschen erstmals eigenständig angegriffen haben. In: Der Spiegel, 2.6.2021, https://www.spiegel.de/wissenschaft/technik/autonome-waffe-koennte-menschen-erstmals-eigenstaendig-angegriffen-haben-a-ad06b93d-191c-4a5f-b4ae-853e7a7a177d (June 2, 2021).
Mohamed, Abdelbaseer A.; van Nes, Akkelies; Salheen, Mohamed A.: Space and protest: A tale of two Egyptian squares. In: SSS10: Proceedings of the 10th International Space Syntax Symposium, London, UK, 13–17 July 2015, p. 110:1–110:18.
Noble, Safiya Umoja: Algorithms of Oppression. How Search Engines Reinforce Racism. New York City 2018.
Popitz, Heinrich: Phänomene der Macht, 2nd edition, Tübingen 1992.
Powell, Brent: Henry David Thoreau, Martin Luther King Jr., and the American Tradition of Protest. In: OAH Magazine of History 9, no. 2, 1995, p. 26–29.
Pranz, Sebastian: Der Berliner Schlüssel. Bruno Latour und die Akteur-Netzwerk-Theorie. In: Döbler, Thomas; Rudeloff, Christian; Spiller, Ralf (Eds.), Schlüsselwerke der Kommunikationswissenschaft, Wiesbaden 2021, in press.
Prates, Marcelo O.R.; Avelar, Pedro H.; Lamb, Luís C.: Assessing gender bias in machine translation: a case study with Google Translate. In: Neural Comput & Applic 32, 2020, p. 6363–6381.
Rawls, John: A Theory of Justice, Cambridge 1999.
Reicherts, Jo: Von Menschen und Dingen. Wer handelt hier eigentlich? In: Poferl, Angelika; Schröer, Norbert (Eds.) Wer oder was handelt? Zum Subjektverständnis der hermeneutischen Wissenssoziologie, Wiesbaden 2014, p. 95–120.
Safransky, Sara: Geographies of Algorithmic Violence: Redlining the Smart City. Int. J. Urban Reg. Res., 44, 2020. p. 200–218. DOI: https://doi.org/10.1111/1468-2427.12833
Seemann, Michael: Eine beunruhigende Frage an den digitalen Kapitalismus. In: APuZ. Aus Politik und Zeitgeschichte 69, 2019, p. 10–15.
Sharp, Gene: From Dictatorship to Democracy, Boston 2010.
Snyder, Timothy: Und wie elektrische Schafe träumen wir Humanität, Sexualität, Digitalität, Wien 2020.
Thoreau, Henry David: Resistance to Civil Government. In: Myerson, Joel (Ed.): Transcendentalism. A Reader. Oxford 2000, p. 546–565.
Thumfart, Johannes: Der Demokrator. In: Die Zeit, Nr. 10/2011, https://www.zeit.de/2011/10/Gene-Sharp (April 11, 2021).
Zinn, Howard: The Zinn Reader: Writings on Disobedience and Democracy, New York City 2011.
Zweig, Katharina; Deussen, Oliver; Krafft, Tobias D.: Algorithmen und Meinungsbildung. In: Informatik Spektrum 40, 2017, p. 318–326.
1David Graeber sets this out impressively: Laws are legitimized—depending on the form of government—by a constitution, which in turn was legitimized by the people. Using the examples of the USA and France, Graeber reminds us that an act of then illegal violence brought the people into the situation of being able to legitimize a constitution in the first place. Laws were formed out of resistance. Cf. Graeber, David: The Democracy Project. A History. A Crisis. A Moment, London 2014, p. 237–239.
2He writes: “As soon as you say the topic is civil disobedience, you are saying our problem is civil disobedience. That is not our problem … Our problem is civil obedience. Our problem is the numbers of people all over the world who have obeyed the dictates of the leaders of their government and have gone to war, and millions have been killed because of this obedience. […] Our problem is that people are obedient all over the world, in the face of poverty and starvation and stupidity, and war and cruelty. Our problem is that people are obedient while the jails are full of petty thieves, and all the while the grand thieves are running the country. That’s our problem.” Zinn, Howard: The Zinn Reader: Writings on Disobedience and Democracy, New York City 2011, p. 405.
3In 1849, he wrote for example: “In other words, when a sixth of the population of a nation which has undertaken to be the refuge of liberty are slaves, and a whole country is unjustly overrun and conquered by a foreign army, and subjected to military law, I think that it is not too soon for honest men to rebel and revolutionize.” Thoreau, Henry David: Resistance to Civil Government. In: Myerson, Joel (Ed.): Transcendentalism. A Reader. Oxford 2000, p. 546–565. Here: p. 550.
4Cf. Powell, Brent: Henry David Thoreau, Martin Luther King Jr., and the American Tradition of Protest. In: OAH Magazine of History 9, no. 2, 1995, p. 26–29.
5Cf. Gros, Frédéric: Disobey. The Philosophy of Resistance. London/New York 2020, p. 126–127.
6Cf. Sharp, Gene: From Dictatorship to Democracy, Boston 2010. Sharp’s influence has already been the subject of extensive media coverage. See, for example: Thumfart, Johannes: Der Demokrator. In: Die Zeit, Nr. 10/2011, https://www.zeit.de/2011/10/Gene-Sharp (April 11, 2021).
7Cf. Rawls, John: A Theory of Justice, Cambridge 1999.
8Cf. Arendt, Hannah: Ziviler Ungehorsam. In: Id.: In der Gegenwart. Übungen zum politischen Denken II, Edited by Ursula Ludz, München 2000, p. 283–321.
9It should be noted that groups outside the democratic spectrum also repeatedly invoke morality to legitimize their acts of civil disobedience. In Germany, the Reichsbürger movement could serve as an example here, as could the right-wing extremist Identitarian movement, which has achieved relatively great media attention in German-speaking countries with acts of civil disobedience. In doing so, it repeatedly presents itself as a morally driven movement on the one hand, and refers to strategies for nonviolent resistance on the other. Martin Sellner, probably the most important leader of the Identitarian movement in German-speaking countries, quotes Gene Sharp or Srdja Popovic, the co-founder of the Serbian protest movement Otpor!, in his writings. This illustrates the complexity of moral justification in this context.
10These questions naturally also apply to protest in the digital society mentioned above. Prominent and controversial examples include the Internet activist Julian Assange, who uses his platform Wikileaks as a (digital) tool to draw attention to grievances, and the whistleblower Edward Snowden, whose revelations have shaken confidence in digital communication forever. In the public debate, the spectrum of opinions about the aforementioned activists ranges from almost cult-like veneration on the one hand to contempt as “lawbreakers” on the other. Cf. Kleger, Heinz; Makswitat, Eric: Digitaler Ungehorsam. Wie das Netz den zivilen Ungehorsam verändert. In: FJ SB 4/2014, p. 8–17, http://forschungsjournal.de/node/2654 (July 1, 2021).
11Defining these limits seems easier when a system of government differs greatly from current values and norms: In totalitarian systems such as National Socialism, resistance becomes, according to contemporary values, a civic duty. In modern societies, on the other hand, democratic mechanisms are generally provided for citizens to engage in critical discourse and civic or political participation, so that resistance to state power is also directed against a democratically legitimized community. The question of justification thus becomes an even more complex one. Cf. Braune, Andreas: Zur Einführung: Definitionen, Rechtfertigungen und Funktionen politischen Ungehorsams. In: Id. (Ed.): Ziviler Ungehorsam. Texte von Thoreau bis Occupy, Stuttgart 2017, p. 9–38.
12There are innumerable forms of hostility against groups, so of course other discriminations could be mentioned as well. But racism in particular is of course an important point of discussion in the context of algorithmic discrimination, which is why we highlight it here and elsewhere in the introduction. But of course, racism is also of particular importance in the context of intersectionality. We cannot duly represent an explicitly intersectional perspective with our book either.
13Only recently a UN report triggered a discussion in the media about whether autonomous weapons had in fact already been deployed. Cf. Merlot, Julia: Autonome Waffe könnte Menschen erstmals eigenständig angegriffen haben. In: Der Spiegel, 2.6.2021, https://www.spiegel.de/wissenschaft/technik/autonome-waffe-koennte-menschen-erstmals-eigenstaendig-angegriffen-haben-a-ad06b93d-191c-4a5f-b4ae-853e7a7a177d (June 26, 2021).
14He writes: “[…] recommendation engines suggesting what we should watch, listen to, read, write, or wear; devices and services that automatically adjust the aesthetic of captured media to fit certain criteria; software that rates the aesthetic quality of our photos, etc.” Manovich, Lev: AI Aesthetics, Moscow 2018, p. 19–20.
15And at the same time, of course, they are changing the nature of cultural creation, because in the context of music, for example, it seems increasingly important that a song be algorithm-compatible.
16Cf. Reichertz, Jo: Von Menschen und Dingen. Wer handelt hier eigentlich? In: Poferl, Angelika; Schröer, Norbert (Eds.): Wer oder was handelt? Zum Subjektverständnis der hermeneutischen Wissenssoziologie, Wiesbaden 2014, p. 95–120. Here: p. 111–113.
17In his observations on the phenomena of power, sociologist Heinrich Popitz describes, among other things, a data-setting power through technical action. Even if Popitz does not explicitly refer to algorithms, these observations can be transferred very well: “In changing the object world, we set ‘data’ to which other people are exposed. We exercise a kind of materialized power, a data-setting power, in which the effect of the powerholder over the power-affected is mediated by objects. This effect can be unintentional, accidental, not predictable, or planned and deliberate.” (translation by the authors) Popitz, Heinrich: Phänomene der Macht, 2. Auflage, Tübingen 1992, p. 167.
18On the role of objects as agents of agency, Bruno Latour has made an interesting contribution with his Actor-Network-Theory: “If action is restricted a priori to what ‘intentional,’ ‘meaningful’ people do, it is hard to see how a hammer, a basket, a door closer, a cat, a rug, a mug, a list, or a pendant could act”. Latour, Bruno: Reassembling the Social. An Introduction to Actor-Network-Theory, Oxford 2012, p. 71. This approach becomes even more relevant the more objects are equipped with “intelligent” capabilities. As a result, every everyday object could in future theoretically be part of a larger network, so that our everyday actions will be co-determined by an unknown number of “actors” who make data available or collect it and reuse it in other contexts. These networks are in principle all-pervasive (“evasive”), untransparent (“opaque”) and exceed in their complexity what can be realized in the situation. Nevertheless, they structure our actions. Cf. Pranz, Sebastian: Der Berliner Schlüssel. Bruno Latour und die Akteur-Netzwerk-Theorie. In: Döbler, Thomas; Rudeloff, Christian; Spiller, Ralf (Eds.): Schlüsselwerke der Kommunikationswissenschaft. Wiesbaden 2021, in press.
19Strictly according to Latour, it would probably have to be argued that neither the algorithms, nor the programmers, nor even the users have the power, but that it is distributed dynamically in the network.
20Cf. Matthias, Andreas: The responsibility gap: Ascribing responsibility for the actions of learning automata. In: Ethics and Information Technology 6, 2004, p. 175–183.
21Cf. for example: Coeckelbergh, Mark: AI Ethics, Cambridge/London 2020, p. 100.
22Cf. Bartoletti, Ivana: An Artificial Revolution. On Power, Politics and AI, London 2020, p. 14.
23At this point, we must define our criticism clearly: The basic criticism that could be formulated here quickly reaches a limit. An algorithm—regardless of whether it is defined as AI or not—is first simply the subdivision of an action into a sequence of steps with the goal of solving a problem. An algorithm is therefore not per se something that should be viewed critically—the criticism always refers to the actions themselves.
24Cf. Bartoletti 2020, p. 123.
25There are various ways in which exactly this learning process takes place, but they are not important in the context of this book.
26Of course, large is relative and also depends on the complexity of the task. An algorithm will probably be able to learn simple image recognition—i.e. recognizing a cat in an image—after a low four-digit number of good images.
27Since different brains, i.e. different neural networks, are better or worse suited depending on the problem, the structure of the network also plays an important role. Since corresponding frameworks, for example TensorFlow (https://www.tensorflow.org), are now available, the human activity required here cannot be referred to as “programming” in the true sense—it is actually more like “configuring”. This configuration influences the structure of the computer brain, but does not directly influence the action.
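The difference between “programming” and “configuring” noted here can be illustrated with a short, framework-free sketch (our own illustration, not taken from the book; in practice one would use a framework such as TensorFlow): the human merely declares the structure of the network as data, and generic code builds it.

```python
import random

def build_network(layer_sizes):
    """Turn a structural description such as [784, 128, 2] into randomly
    initialized weight matrices, roughly as a framework would."""
    random.seed(0)  # reproducible initialization for the example
    return [
        [[random.uniform(-1.0, 1.0) for _ in range(n_in)] for _ in range(n_out)]
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
    ]

# The human "configuration": 784 inputs, one hidden layer of 128 neurons,
# two output classes (e.g. "cat" / "no cat"). Nothing here specifies *how*
# to recognize anything; that behavior would emerge only from training.
network = build_network([784, 128, 2])
print(len(network))     # 2 weight matrices
print(len(network[0]))  # 128 neurons in the hidden layer
```

The point of the sketch is that the human choice is reduced to the list `[784, 128, 2]`; everything else is generic machinery.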
28However, the limits of a complete mapping of this range are then revealed by our reflection on the “perspectives”.
29Thus, there are formulations in the book that deviate from our understanding of an algorithmic mode of operation, and perhaps can also be criticized in the strict sense of computer science. More important to us, however, seems to be the implications for the matter at hand. The tension between “disobey” and “obey” is not only about how algorithms act, but, if we follow the principles of human-computer interaction and interaction design, about how people perceive these algorithms; thus, different perceptions are explicitly part of the overall perspective.
30For example, in a recent poll 51% of respondents answered that they “support reducing the number of national parliamentarians and giving those seats to an algorithm,” no doubt in part because the general population has little understanding of how poorly such systems actually perform. See: https://docs.ie.edu/cgc/IE-CGC-European-Tech-Insights-2021-%28Part-II%29.pdf (June 15, 2021).
31Bad is of course a complex word in this context: What is bad data? It can simply mean data that is not suitable for training an algorithm because it is incomplete or imprecise. Of course, the word bad also implies that the data does not reflect the complexity of reality for various reasons, or that these data represent an unfair status quo.
32Cf. Fry, Hannah: Hello World. Was Algorithmen können und wie sie unser Leben verändern, Bonn 2019, p. 184–201.
33In fact, the interesting question is: When exactly does acceptance tip over? In which context do users accept obvious errors? From the world of literature, we know the powerful description of Winston Smith’s horror in 1984 as he had to accept an obviously wrong result as the new truth: That 2 plus 2 equals 5. But is that so outlandish? Why do people drive their cars into rivers just because their navigation system says so?
34Cf. for example: Zweig, Katharina; Deussen, Oliver; Krafft, Tobias D.: Algorithmen und Meinungsbildung. In: Informatik Spektrum 40, 2017, p. 318–326.
35For example, the cheat devices uncovered during the so-called Dieselgate scandal in Germany clearly show: from the outside view of the driver, it is not apparent how the exhaust gas purification algorithm works in detail. But even the authorities responsible for testing did not notice the exact way the algorithms worked for a long time. The question of the extent to which those responsible in the companies were familiar with how the algorithms worked has not yet been conclusively clarified.
36Tracing authorship in such jokes is difficult. We refer to the tweet of the Twitter user @fasterthanlime from 2017. See: https://twitter.com/fasterthanlime/status/868840530813353985?s=20.
37Here, we should also briefly refer to the programmers, whose role in the process should certainly be discussed. The reference to problematic data is not sufficient to clarify the responsibility for non-biased systems.
38They use the example of a limited screen size that lets the algorithm make a selection.
39Cf. Friedman, Batya; Nissenbaum, Helen: Bias in computer systems. ACM Trans. Inf. Syst. 14, 3 (July 1996). p. 330–347. DOI: https://doi.org/10.1145/230538.230561.
40The exciting question is: Is there such a thing as neutral data? It is very likely that large amounts of data can at least be assumed not to represent a balanced picture of the world. Above all, however, algorithmic decisions should be based exclusively on aspects that are thematically relevant.
41The reference to discriminatory “redlining” is certainly relevant here. Especially in the context of the city and algorithms, new forms of this practice can of course emerge here. Cf. for example: Safransky, Sara: Geographies of Algorithmic Violence: Redlining the Smart City. Int. J. Urban Reg. Res., 44, 2020. p. 200–218. DOI: https://doi.org/10.1111/1468-2427.12833.
42When algorithms are trained with data, they naturally also adopt the status quo of this data. For example, if images, texts, or the like show primarily white men in certain positions of power, an algorithm can understand this as an unchanging status quo and try to reproduce it. Prominent examples of this kind are the much-discussed sexist recruiting AI of Amazon but also the sexism of Google Translate. Cf. Dastin, Jeffrey: Amazon scraps secret AI recruiting tool that showed bias against women. In: Reuters, October 10, 2018, https://www.reuters.com/article/amazon-com-jobs-automation-idINKCN1MK0AH (June 3, 2021) and Prates, Marcelo O.R.; Avelar, Pedro H.; Lamb, Luís C.: Assessing gender bias in machine translation: a case study with Google Translate. In: Neural Comput & Applic 32, 2020, p. 6363–6381.
43Of course, unjust data is not only a problem in algorithms: There are countless examples of how data reinforces existing privileges or disadvantages marginalized people. Cf. for example Criado Perez, Caroline: Invisible Women: Exposing Data Bias in a World Designed for Men, London 2019.
44Cf. Benjamin, Ruha: Race After Technology: Abolitionist Tools for the New Jim Code, Hoboken 2019.
45Noble, Safiya Umoja: Algorithms of Oppression. How Search Engines Reinforce Racism. New York City 2018.
46Depending on the context, checking the training data or controlling the algorithm can already be important steps. This can be explained with a simple example: If parameters in the usage of a system are changed (for example, the name, the place of residence or the gender) and the algorithm then changes its behavior although it should not, problematic behavioral patterns can quickly be identified.
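The simple check described in this note can be sketched in a few lines (a hypothetical toy scorer of our own invention, not any real system): vary one protected parameter while holding everything else fixed, and flag the system if its output changes although it should not.

```python
def hypothetical_score(applicant):
    # Stand-in for an opaque scoring algorithm. The gender term below is
    # exactly the kind of hidden dependency such an audit is meant to expose.
    score = applicant["income"] / 1000
    if applicant["gender"] == "female":
        score -= 5  # problematic behavior planted for the demonstration
    return score

base = {"income": 50000, "gender": "male"}
variant = dict(base, gender="female")  # change only the protected attribute

# If the decision changes although it should not, the behavior is suspect.
print(hypothetical_score(base))     # 50.0
print(hypothetical_score(variant))  # 45.0
print(hypothetical_score(base) != hypothetical_score(variant))  # True
```

Such paired-input tests do not explain how a system works internally, but they can reveal that a protected attribute influences its output.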
47With regard to the significance of the social means of production and the power over them, one could of course refer directly to Das Kapital. Because this cross-reference would go beyond the dimensions of this introduction, we refer here to the text by Michael Seemann, who writes about digital capitalism and also refers to Karl Marx. Cf. Seemann, Michael: Eine beunruhigende Frage an den digitalen Kapitalismus. In: APuZ. Aus Politik und Zeitgeschichte 69, 2019, p. 10–15.
48At this point, it should of course be pointed out that data, and their use through machine learning, complicate the relationship between humans and computers, since users of services simultaneously become data suppliers and thus—as Jaron Lanier puts it—data models. Cf. Lanier, Jaron: Who Owns the Future? New York City 2013.
49Cf. Snyder, Timothy: Und wie elektrische Schafe träumen wir Humanität, Sexualität, Digitalität. Wien 2020, p. 41.
50A detailed note here would go beyond the scope of this introduction, because here both the detailed reference to the general importance of public space and the reference to the concrete urban space, with reference to Georges-Eugène Haussmann etc., would be important. But Tahrir Square in particular can be highlighted here, as the political events there in 2011 have influenced many subsequent protest movements. Cf.: Mohamed, Abdelbaseer A.; van Nes, Akkelies; Salheen, Mohamed A.: Space and protest: A tale of two Egyptian squares. In: SSS10: Proceedings of the 10th International Space Syntax Symposium, London, UK, 13–17 July 2015, p. 110:1–110:18.
“To live in a glass house is a revolutionary virtue par excellence. It is also an ecstasy, a moral exhibitionism that we very much need.”1
One of the central questions of modern democracies can be formulated as the paradox of how to serve freedom. Inherent in this paradox is a tension in which, depending on one’s situation and disposition, one must always find anew the right balance between obedience and disobedience to the rules set by the sovereign. This tense question of freedom shows itself all too easily in the context of our quarantine routines: one need only look at those who try to attract attention by attempting to escape them, committing misdemeanors or even breaking the law. Today, one cannot talk about the subject of (dis)obedience without coming across the global phenomenon of the so-called ‘Querdenker’, literally “those who think across (or laterally or transversally)”. The movement named “Freigeister”, which in the middle of the 19th century was still committed to a liberal-democratic agenda and saw itself supported by a broad consensus in the bourgeoisie, already appeared to Friedrich Nietzsche as mere philistine folklore of old revolutionaries, to whom he opposed the immoral aestheticism of a future generation of “freie Geister”, “free spirits”. According to Nietzsche, this rare type of people is characterized by a pronounced individualism and a radical self-enlightenment which no longer bow to any idols, be they fellow human beings, the state, or even God himself. Against this background, the fact that today a wide variety of groups unite under the banner of ‘Querdenker’ may at first seem like another chapter in the successful modern history of individual self-assertion: For is it not basically still the state authority, which today again shows its ugly face, hidden behind care and welfare, only to rob its citizens of their most elementary rights, and some even of their livelihood? Is it not the first duty of citizens to resist and to serve their own freedom as well as the freedom of all by defending it against police restrictions in public?
Are we not already living in a digital dictatorship by design, established by the controlling mania of the elites, who today use the new media of a barely visible algorithmization to make us slaves? In short: Is the modern state of today not merely the citizen-friendly side of a global biopolitics, whose profiteers remain themselves invisible—a conspiracy?
On the following pages I will take this basically paranoid view seriously, in order to illuminate the dark motives behind it as inevitable consequences of the modern dialectics of dis-obedience. What was described as a tension in the introduction has, with the development of our ‘algorithmic culture’, grown into an actual dialectic in which the role of design must not be underestimated. Design, however, here means a certain mindset, which functions as a motor of modernization and on whose material and idealistic effects both its proponents and its critics still draw today.2 In order to understand the nature of this mindset, I will start with the Great Exhibition of 1851 in London. Here we will highlight certain essential features of modern thinking, which would, however, remain unrecognized in its Janus-facedness, if we did not at the same time step into an equally famous ‘underground’ of St. Petersburg. Fyodor M. Dostoevsky’s Notes from the Underground shows the dark side of a glorious rationalism and its creative optimism, as it were the underworld to this earthly paradise. Only this dialectical transition from light to shadow makes it possible to gauge the dangers modern citizens face when they obey the state laws of freedom or follow the wild call of liberty.
It was not the first industrial and commercial exhibition on the island, but it was the first on a global scale that Queen Victoria and Prince Albert ceremoniously opened in London’s Hyde Park on May 1, 1851. And in fact, the organizing committee had managed to set up an internationally attended and respected show of performance. Prince Albert himself had made it his task to bring the technical and artistic progress to England, to both represent the status quo as well as possible ways into the future. Thus he declared at a preparatory banquet: “Gentlemen,—the Exhibition of 1851 is to give us a true test and a living picture of the point of development at which the whole of mankind has arrived in this great task, and a new starting point from which all nations will be able to direct their further exertions.”3 A certain sense of mission that speaks from these words was hard to ignore even for contemporaries.
The fact that this project was realized in England may hardly come as a surprise, given its pre-eminence as an industrial nation as well as a trading and colonial power. In this format of a Great Exhibition, which still travels around the world today as the “Expo,” the British sporting spirit of performance, competition and spectacle made its appearance in another arena, albeit initially as a kind of home match for the hosts (with about half of the exhibition space reserved for representatives of the Commonwealth). But other nations and principalities also took up the challenge and provided the showcase of a globalized industrial capitalism with all kinds of goods from their own production. During the almost five months of the exhibition, the approximately 6 million visitors walked past displays of Indian spices, American agricultural machinery, or French haute couture. Yet the real event and the most dazzling symbol of the entire occasion remained the exhibition building itself: the Crystal Palace.
This building had already been celebrated by contemporaries as an architectural wonder of the century, and even after its successor building fell victim to a fire in 1936, it was none other than Le Corbusier who wrote an epitaph for this building, or rather this type of building. The historical significance of the Crystal Palace lay not just in the building itself, but in the sophisticated design concept, which was extremely variable in scale: Joseph Paxton, its architect and at the time of its construction already a wealthy railroad shareholder and entrepreneur, was an inventive and experimental gardener to William Cavendish, the 6th Duke of Devonshire. Paxton was at this time already known (not least to the royal couple) for designing impressive greenhouses. Accordingly, the architecture of the Crystal Palace looked like an oversized greenhouse that could not only be easily erected and dismantled again; moreover, it impressively symbolized the prosperity of modern life in a specially designed artificial atmosphere of glass and cast iron: “That with the Crystal Palace a revolution had taken place in the field of architecture was immediately recognized by clear-sighted contemporaries. The purely functionalist aesthetics of the building, the lavish use of the materials glass and iron, which until then had been considered luxurious, and the facilitated dissolution of limited space that this made possible, the interweaving or interpenetration of interior and exterior, thus also became the paradigmatic form of modern building.”4
Above all, the use of glass on such a massive scale turned into the expression of a modern mechanism of inclusion and exclusion. In the end, its effects could not hide the fact that the boundary between inside and outside continues to exist, even when an envious or pitying glance penetrates the façade. Even if one might believe in enlightenment and transparency,5 the glass still just reflects one’s own milieu. No material other than glass is able to make visible through its semi-permeability what Peter Handke once put into the formula of the inner world of the outer world of the inner world: that the asymmetry between one’s own inside and the other’s outside is never cancelled for the observer, but can only be determined from a higher vantage point.
If this still relates to the horizontal separation of spaces, insiders and outsiders, then a vertical isolation can also be studied at the Crystal Palace, which in turn relates to the biotope of humankind as such in contrast to our ‘natural’ environment. As Peter Sloterdijk once described it in the course of his literary inspection of the “world interior of capital,” an entry into the interior of the Crystal Palace is at the same time accompanied by the promise to turn the inhospitable outside more and more into a homely self-enclosure: “With its erection, the principle of the interior crossed a critical threshold: from then on, it meant neither the bourgeois or aristocratic dwelling nor its projection into the sphere of urban shopping arcades, rather, it set out to transpose the outside world as a whole into a magical immanence transfigured by luxury and cosmopolitanism. Having been converted into a great greenhouse and an imperial museum of culture, it betrayed the contemporary tendency to turn nature and culture together into indoor affairs.”6
Whereas Walter Benjamin, against whom the side blow “of urban shopping arcades” is aimed here, had only recognized a gigantic Parisian passage in the Crystal Palace, Sloterdijk, on the other hand, sees in it the concept of “kosmos” renewed. Even if transcendent assistance has failed us, the natural neediness and cleverness of humankind now puts into action what, as an extension of our own comfort zone, tends towards an infinite expansion. Admittedly, to speak with Benjamin: Even if “the world exhibitions are the pilgrimage to the commodity as fetish”, a “people’s festival” of capital, a transfiguration of “the exchange value of commodities”,7 all these descriptions still only highlight rather arbitrary features of that all-embracing development, which shapes the overall constitution of the modern life-world: “The world exhibitions [do not simply] build up the universe of commodities”,8 instead the Crystal Palace exhibits by itself the world itself—as a ‘universal’ commodity, as the one comprehensive, ‘turned-into-one,’ immanent good of human-creative self-transfiguration. What is fetishized here, not unlike in Marx, is at last man’s labor and creative power itself—hence the “Great Exhibitionism”.
The fact that it is possible to exchange at all, and not just to consume, is precisely one of the achievements of an inner-world expansion that relieves itself of the wear and tear of a hostile environment. It allows one to take part in a general metabolism without losing shape, and it does so via an artificial membrane that enables reproduction of one’s own system according to one’s own rules, i.e. via autopoiesis.9 If one wants to call it luxury, then one should call this the relative surplus of an absolute luxury of survival. To propagate, on the other hand, the standards of a mere use-value would mean to be blind to the tendency, inherent in the use of things itself, to build up into ever more complex artifacts that inevitably participate in a general exchange of resources and materials. Exchange, as opposed to use or consumption, instead means postponement, differentiation, mediation, and refinement. Basically, it organizes the luxurious life of a deficient human being, a being that cannot leave anything in its natural place, since it does not possess a natural place itself, but must first establish “its own nature”.
You may call this either a luxury economy or an economy of scarcity; either way, in the Crystal Palace exchange as the substitution of nature and culture is expressed in the purest degree, literally in the sense of crystallization: “For Plato, the crystal was the embodiment of the idea immersed in matter. Since the 19th century has chosen the crystal as the symbol of an order of reason arising naturally from history, not only a homogeneous world society spanning the entire globe is emerging, but also the horror of what Gehlen will then sarcastically call ‘cultural crystallization’—the fear of a time without history.”10 No less paradoxically than a spiritualized nature or a naturalized spirit, in the crystalline structure of the Crystal Palace a structure of space-time comes to light, which already grasps the brilliance of the beginning of modern history in the perfect symmetry of a petrified, eternal duration. Thus, from now on, it seems only possible to repeat on a large scale what has already taken shape on a small scale, in the microcosmic dimensions of the World Exhibition: the optimistic, enlightened cosmopolitanism of a technical utopia.
“The immense town, for ever bustling by night and by day, as vast as an ocean, the screech and howl of machinery, the railways built above the houses (and soon to be built under them) the daring of enterprise, the apparent disorder, which in actual fact is the highest form of bourgeois order, the polluted Thames, the coal-saturated air, the magnificent squares and parks, the town’s terrifying districts, such as Whitechapel with its half-naked, savage and hungry population, the City with its millions and worldwide trade, the Crystal Palace, the World Exhibition. […] You look at those hundreds of thousands, at those millions of people obediently trooping into this place from all parts of the earth—people who have come with only one thought, quietly, stubbornly and silently milling round in this colossal palace; and you feel that something final has been accomplished here—accomplished and completed.”11
Among the contemporary critics of the World Exhibition building, which had migrated from Hyde Park to Sydenham in the same year to be rebuilt there in 1854 on an even bigger scale, Fyodor M. Dostoevsky was at once the most tart and the most sharp-sighted. When Dostoevsky set out on his first trip to Europe via Paris, London, Geneva and Italy in 1862, the disillusionment that set in was almost indistinguishable from the ardor that was to find literary expression a year later, after his second trip to Europe. The result was his Winter Notes on Summer Impressions. One can hardly call it a travel diary; it is rather a reckoning in retrospect, which Dostoevsky wrote under the impressions he had gained of Western Europe. Above all London, even before Paris, had shown itself as the epitome of a new world of droning machines, starving people and dark powers, with the Crystal Palace in its center like a newly constructed tower of a bourgeois Babel, erected with diabolical skill—nota bene a completed tower: “It is a biblical sight, something to do with Babylon, some prophecy out of the Apocalypse being fulfilled before your very eyes. You feel that a rich and ancient tradition of denial and protest is needed in order not to yield, not to succumb to impression, not to bow down in worship of fact, and not to idolize Baal, that is, not to take the actual fact for the ideal…”12
One sees Dostoevsky wrestling with words imposed by this “grandeur of the idea”; provoking in him the anxious question, “Can this […] in fact be the final accomplishment of an ideal state of things?”13
