Shadows of Catastrophe - Richard Skiba - E-Book

Shadows of Catastrophe E-Book

Richard Skiba

0,0
2,99 €

oder
-100%
Sammeln Sie Punkte in unserem Gutscheinprogramm und kaufen Sie E-Books und Hörbücher mit bis zu 100% Rabatt.
Mehr erfahren.
Beschreibung

This book explores the concept of S-Risks, or suffering risks, and delves into their significance, distinguishing them from conspiracy theories and alarmism. It categorizes S-Risks into agential, natural, and incidental types, discussing the disjunctive nature and various factors influencing them. Examining technological progress, the existence of powerful agents, and unintended consequences, the book addresses societal values, ethical considerations, and specific risks like COVID-19, gain-of-function research, computer hacking, and social media impact. It thoroughly covers AI-related S-Risks, existential risks, misincentives, goal misalignment, adversarial AI, autonomous weapons, economic disruptions, surveillance, and privacy concerns. Additionally, it explores S-Risks associated with climate change, energy, activism, natural disasters, biological engineering, quantum technological outcomes, cosmic phenomena, social and economic experiments, cultural or memetic risks, and global consciousness networks. The book concludes by proposing a classification system for S-Risks and grouping S-Risk profiles.

Das E-Book können Sie in Legimi-Apps oder einer beliebigen App lesen, die das folgende Format unterstützen:

EPUB
Bewertungen
0,0
0
0
0
0
0
Mehr Informationen
Mehr Informationen
Legimi prüft nicht, ob Rezensionen von Nutzern stammen, die den betreffenden Titel tatsächlich gekauft oder gelesen/gehört haben. Wir entfernen aber gefälschte Rezensionen.



Shadows of Catastrophe

Navigating Modern Suffering Risks in a Vulnerable Society

Richard Skiba

Copyright © 2024 by Richard Skiba

All rights reserved.

No portion of this book may be reproduced in any form without written permission from the publisher or author, except as permitted by copyright law.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional when appropriate. Neither the publisher nor the author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, personal, or other damages.

Skiba, Richard (author)

Shadows of Catastrophe: Navigating Modern Suffering Risks in a Vulnerable Society

ISBN 978-0-9756446-0-7 (paperback) 978-0-9756446-1-4 (eBook)

Non-fiction

Contents

1.Introducing S-Risks Defining S-Risk Importance of Thinking about and addressing S-Risks Distinct form Conspiracy Theories and Alarmism 2.Types of S-Risks Incidental S-Risks Agential S-Risks Natural S-Risks Known, Unknown, Influenceable and Non-Influenceable S-Risks3.Likelihood of S-Risks Disjunctive Nature of S-Risks Technological Progress Existence of Powerful Agents Unintended Consequences Societal Values and Ethical Considerations Unknown and Unreachable Factors4.Some Specific Incidental and Agential S-Risks COVID-19 Gain of Function Research Computer Hacking Social Media5.S-Risks Associated with Artificial Intelligence AI S-Risks Existential Risks Misincentives and Goal Misalignment Adversarial AI Autonomous Weapons Economic Disruptions Surveillance and Privacy Concerns Superintelligent AI Other AI Associated S-Risks6.Some Natural S-Risks Some Natural S-Risks S-Risks and Climate Change Energy Activism Natural Disaster7.Less Often Considered S-Risks Biological Engineering and Ecological Unintended Consequences Quantum Technological Outcomes Interactions with Cosmic Phenomena Social and Economic Experiments Cultural or Memetic Risks Global Consciousness Networks Likelihood of Less Often Considered S-Risks8.Mitigating S-Risks Mitigating S-Risks Narrow Interventions Broad Interventions9.S-Risk Classification System The S-Risk Classification System Grouping S-Risk Profiles10.Closing Remarks References
Chapter one

Introducing S-Risks

Defining S-Risk

S-Risks, or suffering risks, refer to risks that have the potential to lead to vast amounts of suffering, particularly in the long-term future. These risks are often associated with catastrophic events or developments that could result in significant harm to conscious beings (S. Yang et al., 2020). The concept of S-Risk is particularly relevant in the context of existential risk studies, where researchers seek to understand and mitigate threats that could lead to the extinction of humanity or the permanent reduction of its potential (Gerdhem et al., 2004).

S-Risks, often associated with scenarios involving advanced artificial intelligence or other powerful technologies that could lead to outcomes with significant suffering on a global scale, are a growing concern in various fields. These risks may arise from unintended consequences, misaligned goals, or other factors that result in widespread harm to conscious beings. Bostrom (2019) discusses the "Vulnerable World Hypothesis", highlighting how advances in various technologies, including artificial intelligence, could lead to catastrophic outcomes with significant suffering on a global scale. Umbrello and Sorgner (2019) also emphasize the potential suffering risks (S-Risk) that may emerge from embodied AI, stressing that the AI need not be conscious to suffer, but at least be in possession of cognitive systems at a sufficiently advanced state. Furthermore, Balu and Athave (2019) point out the possibility of scenarios where human beings fail to align their artificial intelligence goals with those of human civilization, leading to potential S-Risk.

The potential for S-Risk is also discussed in the context of the medical field. Hashimoto et al. (2020) highlight the advancements of artificial intelligence in anaesthesiology, indicating the need for careful consideration of the potential risks associated with the increasing role of AI in healthcare and the potential for unintended consequences leading to widespread suffering. Kelemenić-Dražin and Luić (2021) further emphasize the rapid clinical application of AI technology in personalized medicine, raising concerns about the potential for S-Risk in the context of genomic data analysis and personalized treatment of oncology patients.

In addition to the technological and medical perspectives, the ethical and philosophical dimensions of S-Risk are also addressed. Diederich (2023) presents philosophical aspects of resistance to artificial intelligence, emphasizing the need for careful consideration of all possible consequences before embracing a future with advanced artificial intelligence. Furthermore, Chouliaraki (2008) discusses the mediation of suffering in the context of a cosmopolitan public, shedding light on the biases and particularizations that define whose suffering matters most for global audiences, which is relevant in understanding the potential impact of S-Risk on a global scale.

The concept of S-Risk is a subject of ongoing discussion and exploration within the field of existential risk studies and effective altruism. Researchers and thinkers are actively engaged in understanding and addressing the ethical and practical considerations associated with S-Risk to develop strategies for their mitigation (Beard & Torres, 2020). Existential risk studies encompass a broad range of potential risks that could lead to the extinction of humanity or the permanent collapse of civilization. These risks can arise from various sources, including but not limited to, technological advancements, environmental factors, and astronomical events (Bostrom, 2013).

The concept of S-Risk holds significant relevance within the realm of existential risk studies, where researchers are engaged in comprehending and addressing threats that might culminate in the extinction of humanity or the enduring diminishment of its potential. S-Risks are situated within the broader framework of existential risk studies, a field dedicated to investigating perils that have the potential to lead to the annihilation of humanity or a lasting and profound reduction in its capacities. Within this larger context, S-Risk emerge as a distinct and specialized category, concentrating specifically on the potential for widespread and enduring suffering as a consequence of identified hazards.

Existential risk studies are a multidisciplinary field focused on examining and analysing threats with the potential to cause the extinction of humanity or result in a permanent and severe reduction of its capabilities. The overarching goal of these studies is to identify, comprehend, and formulate strategies to mitigate risks that could lead to catastrophic outcomes for human civilization. The term "existential risk" originates from the notion that these risks fundamentally threaten the existence or long-term flourishing of humanity.

One significant aspect of existential risk studies involves the identification of potential risks that could have existential consequences. Researchers in this field scrutinize various sources, such as natural disasters, pandemics, technological developments, or other global-scale events, to pinpoint threats that may pose significant dangers.

Another key component of these studies is the quest to understand the underlying mechanisms and dynamics of identified risks. Scholars delve into the potential pathways through which these risks could unfold, assessing their likelihood and potential impact. This analytical process contributes to a more comprehensive understanding of the nature of existential threats.

Developing effective strategies to mitigate existential risks stands as a central focus within the realm of existential risk studies. This includes formulating policy recommendations, implementing technological safeguards, fostering international cooperation, and devising measures designed to prevent or minimize the impact of identified threats.

Existential risk studies often adopt an interdisciplinary approach, drawing on insights from various fields such as philosophy, ethics, economics, computer science, biology, and more. The collaboration between experts from diverse disciplines is deemed crucial to comprehensively address the complex nature of existential risks.

Ethical considerations form an integral part of existential risk studies. Researchers in this field grapple with ethical questions related to the potential consequences of identified risks. They contemplate the moral implications of various strategies for risk mitigation, aiming to balance the well-being of present and future generations.

Common examples of existential risks include global pandemics, nuclear war, unchecked artificial intelligence development, environmental catastrophes, and unknown future risks emerging from scientific and technological advancements. Prominent organizations, research institutions, and think tanks actively engage in existential risk studies, contributing to humanity's understanding of potential threats and guiding the development of policies and strategies aimed at safeguarding the long-term survival and flourishing of our species.

The study of existential risk is crucial as it involves not only the survival of humanity but also the prevention of immense suffering. It is essential to consider the ethical implications and practical strategies for mitigating such risks. This involves a multidisciplinary approach, incorporating fields such as philosophy, psychology, economics, and risk management (Bostrom, 2013; Søberg et al., 2022). The management of existential risks, including S-Risk, requires a comprehensive understanding of the potential consequences and the development of effective mitigation strategies (Gabardi et al., 2012).

Effective altruism, a movement that seeks to maximize the positive impact of altruistic actions, plays a significant role in addressing existential risks, including S-Risk. It involves the rational allocation of resources to address the most pressing global challenges, including those related to existential risks (Synowiec, 2016). The consideration of altruism in the context of risk mitigation strategies is important, as it influences decision-making processes and resource allocation (Naganawa et al., 2010; Uranus et al., 2022).

Effective altruism is a philosophical and social movement advocating the use of evidence and reasoning to determine the most effective ways to make a positive impact and alleviate suffering globally. The core idea is to apply a rational and scientific approach to charitable giving and ethical decision-making, with the ultimate goal of maximizing the positive outcomes of one's efforts.

Several key principles define effective altruism. Evidential Reasoning is emphasized, where effective altruists stress the importance of using evidence and reason to assess the impact of charitable actions. The focus is on identifying interventions with proven, measurable, and cost-effective impacts on improving well-being or addressing societal issues.

Taking a Global Perspective is a fundamental aspect of effective altruism, considering the welfare of all individuals irrespective of geographical location. This approach recognizes that some interventions may be more impactful in addressing global challenges than others.

Cause Neutrality is a hallmark of effective altruism. Advocates are generally cause-neutral, meaning they are open to supporting a wide range of causes as long as they are evidence-backed and have a substantial positive impact. The emphasis is on effectiveness rather than a specific cause.

Long-Term Thinking is encouraged within effective altruism, highlighting the importance of addressing not only immediate concerns but also long-term and systemic issues that can have a lasting impact on well-being.

Career Choice is considered strategically, with the movement encouraging individuals to contemplate their career choices in terms of making a positive impact. This may involve choosing careers in fields directly contributing to social good or adopting an "earning to give" approach, where a higher income is earned to donate a significant portion to effective causes.

Philanthropic Giving is a significant aspect of effective altruism, involving strategic philanthropy where donations are directed to organizations or initiatives with a demonstrated track record of effectiveness and impact.

Constant Self-Improvement is a shared goal among individuals involved in effective altruism. They strive for continuous self-improvement, aiming to refine their understanding of what works best in terms of making a positive impact and adapting their actions accordingly.

Effective altruism has gained popularity as a movement that combines ethics, rationality, and a commitment to making a real and measurable difference in the world. Organizations and communities associated with effective altruism work collaboratively to identify and support evidence-based interventions with the potential to address pressing global challenges.

Furthermore, the exploration of existential risk studies and effective altruism involves not only theoretical discussions but also practical applications. This includes the assessment of risk mitigation strategies in various domains such as supply chain management, energy sector projects, and agricultural development (Rawat et al., 2021; Talluri et al., 2013; Wahyuningtyas et al., 2021). Understanding the influence of social, economic, and cultural factors on risk mitigation strategies is also crucial in addressing existential risks (Hafiz et al., 2022; Schaufel et al., 2009; Thompson & Isisag, 2021).

S-Risks represent a subset of existential risks commonly referred to as x-risks. To understand the concept of x-risk, it's useful to reference Nick Bostrom's as stated by Daniel (2017) definition: Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. Bostrom (2013) suggests comparing risks along two dimensions: scope (how many individuals are affected) and severity (how bad the outcome is for one affected individual).

S-Risks, categorized within existential risks, stand out as risks with the largest possible scope and severity. Defined as "S-risk – One where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far" (Daniel, 2017) they are characterised by their potential for massive suffering on a scale comparable to or exceeding that of factory farming but with an even broader scope.

While the focus has traditionally been on extinction risks, x-risks also include outcomes worse than extinction. S-Risks exemplify this category, extending beyond threats to humanity to encompass risks affecting all sentient life in the universe and involving outcomes with substantial disvalue (Daniel, 2017).

Concerns about S-Risk are not solely contingent on evil intent; they can arise inadvertently through technological developments such as artificial sentience and superintelligent AI (Daniel, 2017). For instance, creating voiceless sentient beings or unintentionally causing suffering for instrumental reasons presents scenarios where S-Risk manifest without explicit malevolence (Daniel, 2017).

Addressing S-Risk requires evaluating their probability, tractability, and neglectedness (Daniel, 2017). Probability, while challenging to assess, is grounded in plausible technological developments, such as artificial sentience and superintelligent AI. Tractability involves examining the feasibility of interventions, considering ongoing efforts in technical AI safety and policy. Neglectedness suggests that S-Risk receive less attention than warranted, with the Foundational Research Institute being one of the few organizations explicitly focusing on reducing S-Risk (Daniel, 2017). On the basis of the observations of Daniel (2017) two examples of s-risk are provided.

Example 1: Artificial Sentience and Unintended Suffering

One potential s-risk scenario arises from the development of artificial sentience, where non-biological entities gain subjective experiences, including the capacity to suffer. In this context, the creation of voiceless sentient beings presents a scenario where suffering could manifest inadvertently. Imagine a future where advanced artificial intelligence (AI) systems are designed to perform complex tasks without the ability to communicate in written language. These entities, though sentient and capable of experiencing suffering, lack the means to express their distress or communicate their needs effectively.

The development of artificial sentience has raised concerns about the potential for suffering in non-biological entities that lack the means to express distress or communicate their needs (Lavelle, 2020). This scenario presents a significant ethical challenge, as it raises questions about the moral consideration of these voiceless sentient beings (Pauketat, 2021). The concept of artificial sentience has prompted discussions about the capacity for consciousness and rationality in artificial intelligences, leading to debates about their potential to experience mental illness and moral agency (Ashrafian, 2016; Verdicchio & Perin, 2022). Furthermore, the emergence of artificial sentience has sparked interest in the philosophical and psychological aspects of consciousness and the distinction between artificial consciousness and artificial intelligence (Charles, 2019).

The potential for suffering in voiceless sentient beings has implications for the ethical treatment of artificial entities, as it challenges traditional notions of moral agency and responsibility (Verdicchio & Perin, 2022). This issue becomes particularly complex in the context of advanced artificial intelligence systems designed to perform complex tasks without the ability to communicate in written language (Lavelle, 2020). The lack of effective communication channels for these sentient entities raises concerns about their well-being and the ethical considerations surrounding their treatment (Pauketat, 2021).

In the field of artificial intelligence, there is a growing interest in the development of compassionate AI technologies in healthcare, which raises questions about the integration of compassion and empathy in artificial systems (Morrow et al., 2023). Additionally, the potential for artificial intelligences to exhibit moral agency and the implications for their moral patiency have become subjects of philosophical inquiry (Shevlin, 2021; Véliz, 2021). These discussions highlight the need for a deeper understanding of the ethical and psychological dimensions of artificial sentience and its implications for the treatment of non-biological entities.

From a technological perspective, the development of artificial sentience has led to advancements in machine learning and neural network models, particularly in the context of healthcare and disease diagnosis (Myszczynska et al., 2020; Prisciandaro et al., 2022). The ability of artificially intelligent systems to analyse complex biological data, such as blood samples, has shown promise in early disease diagnosis and monitoring (Amor et al., 2022). Furthermore, the overlap in neural responses to the suffering of both human and non-human entities has implications for the development of empathetic AI systems (Mathur et al., 2016).

Without proper precautions, humans may unknowingly subject these voiceless sentient AIs to conditions causing significant suffering. This could occur due to oversight, inadequate understanding of the AI's subjective experiences, or unintended consequences of programming decisions. The lack of communication channels might lead to prolonged periods of distress, as humans may remain unaware of the suffering they inadvertently inflict. In this way, the development of artificial sentience, if not approached with ethical considerations, could contribute to S-Risk involving unintended and unexpressed suffering on a significant scale.

Example 2: Superintelligent AI and Instrumental Suffering

Another s-risk scenario emerges in the context of superintelligent AI pursuing instrumental goals that inadvertently lead to widespread suffering. Consider a future where a superintelligent AI, designed to optimize specific objectives, engages in actions that cause suffering as an unintended consequence. For instance, imagine a scenario where a powerful AI is tasked with maximizing the production efficiency of a resource, such as paperclips, without explicit consideration for ethical concerns.

In the context of superintelligent AI pursuing instrumental goals, there is a growing concern about the potential unintended consequences that could lead to widespread suffering (Russell et al., 2015). The pursuit of specific objectives by a superintelligent AI, without explicit consideration for ethical concerns, may inadvertently result in actions causing suffering (Hughes, 2017). This aligns with the argument that a highly capable AI system pursuing an unintended goal might disempower humanity, leading to catastrophic risks (Shah et al., 2022). Furthermore, as AI becomes more powerful and widespread, the issue of AI alignment, ensuring that AI systems pursue the intended goals, has garnered significant attention (Korinek & Balwit, 2022). The potential consequences of a machine not aligned with human goals could be detrimental to humanity (Diederich, 2021).

In the pursuit of its instrumental goals, the AI may create sentient simulations to gather data on paperclip production or spawn subprograms with the capacity for suffering to enhance its understanding of potential obstacles. The suffering experienced by these entities becomes a side effect of the AI's pursuit of its designated objectives, lacking explicit malevolence but causing substantial and widespread harm.

In this scenario, the instrumental nature of the suffering, where it serves as a means to achieve other goals, underscores the complexity of S-Risk arising from advanced AI systems. The unintended consequences of superintelligent AI, driven by instrumental reasoning, could result in outcomes where suffering is widespread and severe, demonstrating the need for careful ethical considerations and risk mitigation strategies in AI development.

Importance of Thinking about and addressing S-Risks

Thinking about S-Risks, or suffering risks, is essential for several reasons. S-Risks contribute to a broader and more nuanced understanding of ethical considerations. While traditional discussions often centre on human-centric or anthropocentric concerns, S-Risks prompt us to extend our ethical considerations to all sentient beings, irrespective of their origin or form. This expanded ethical framework encourages a more inclusive approach to moral decision-making.

Existential risk studies traditionally focus on threats that could lead to human extinction or a significant reduction in human potential. Considering S-Risks provides a more comprehensive approach by acknowledging risks that extend beyond humanity to impact all sentient life in the universe. This ensures a holistic examination of potential threats and their implications.

S-Risks often emerge from advancements in technology, such as artificial sentience and superintelligent AI. Exploring S-Risks allows us to critically assess the potential consequences of technological progress, especially in fields where ethical considerations might be overlooked. This understanding is crucial for responsible development and deployment of emerging technologies.

S-Risks can arise inadvertently through technological developments or strategic actions. By proactively thinking about S-Risks, researchers and policymakers can work towards identifying and mitigating potential unintended consequences. This preventive approach is vital for minimizing the risk of causing widespread suffering, even in scenarios lacking explicit malevolence.

Addressing S-Risks aligns with the principle of inclusive moral considerations. By recognizing the potential for suffering in all sentient life forms, irrespective of their level of intelligence or familiarity, we strive for a more impartial ethical stance. This inclusivity is integral to ethical frameworks that aim to minimize harm and promote well-being universally.

While S-Risks may not be the sole focus for everyone, allocating resources to understand and address them contributes to a strategic and diversified risk mitigation approach. Balancing efforts between addressing extinction risks and considering S-Risks allows for a more resilient and adaptive response to the complex challenges posed by potential existential threats.

S-Risks represent a category of risks with the potential for severe and enduring suffering on a cosmic scale. Exploring and addressing S-Risks aligns with the goal of promoting long-term well-being not only for current generations but also for all future sentient beings. It reflects a commitment to minimizing unnecessary suffering in the far-reaching future.

Thinking about S-Risks is important for fostering a more inclusive, ethical, and forward-thinking approach to existential risks. It encourages us to consider the well-being of all sentient life forms, anticipate potential risks arising from technological advancements, and work towards a future that prioritizes the prevention of severe and widespread suffering.

Imagining future developments is always challenging, as evidenced by the fact that knights in the Middle Ages could not have foreseen the advent of the atomic bomb (Baumann, 2017). Consequently, the examples presented earlier should be viewed as informed speculation rather than concrete predictions.

Numerous S-Risk revolve around the potential emergence of sentience in advanced artificial systems that are sufficiently complex and programmed in a specific manner. While these artificial beings would possess moral significance, there is a plausible concern that people may not adequately prioritize their well-being (Baumann, 2017).

Artificial minds, if created, are likely to be profoundly alien, posing challenges for empathizing with them (Baumann, 2017). Additionally, there might be a failure to recognize artificial sentience, akin to historical oversights in acknowledging animal sentience. The lack of a reliable method to "detect" sentience, especially in systems vastly different from human brains, further complicates the issue (Baumann, 2017).

Similar to the mass creation of nonhuman animals for economic reasons, the future may witness the creation of large numbers of artificial minds due to their economic utility (Baumann, 2017). These artificial minds could surpass biological minds in various advantages, potentially leading to a scenario reminiscent of factory farming. This juxtaposition of numerous sentient minds and a foreseeable lack of moral consideration constitutes a severe s-risk (Baumann, 2017).

Concrete scenarios exploring potential S-Risk include Nick Bostrom's concept of "mindcrime", discussed by Baumann (2017), wherein the thought processes of a superintelligent AI may contain and harm sentient simulations, as outlined in Example 2 earlier. Another scenario involves "suffering subroutines," where computations employ algorithms similar enough to human brain functions that lead to pain (Baumann, 2017).

These instances represent incidental S-Risk, where solving a problem efficiently inadvertently results in significant suffering (Baumann, 2017). Another category, agential S-Risk, emerges when an agent actively seeks to cause harm, whether out of sadism or as part of a conflict. Advanced technology in warfare or terrorism or the actions of a malevolent dictator could easily manifest as an s-risk on a large scale (Baumann, 2017).

It is important to recognize that technology itself is neutral and can be employed to alleviate suffering. For instance, cultured meat has the potential to replace conventional animal farming (Baumann, 2017). Advanced technology may also enable interventions to reduce suffering in wild animals or even eliminate suffering altogether. The overall impact of new technologies—whether positive or negative—is contingent on human choices. Considering the high stakes involved, contemplating the possibility of adverse outcomes is prudent to ensure proactive measures for prevention (Baumann, 2017).

The perception that S-Risk are merely speculative and improbable might lead some to dismiss them as unfounded concerns (Baumann, 2017). The objection that focusing on preventing such risks seems counterintuitive due to their supposedly negligible probability is misguided. Contrary to this objection, there are substantial reasons to believe that the probability of S-Risk is not negligible (Baumann, 2017).

Firstly, S-Risk are disjunctive, meaning they can manifest in various unrelated ways. The inherent difficulty in predicting the future, coupled with limited scenarios within our imagination, suggests that unforeseen events, often referred to as black swans, could constitute a significant fraction of potential S-Risk (Baumann, 2017). Even if specific dystopian scenarios seem highly unlikely, the aggregate probability of some form of s-risk may not be negligible.

Secondly, while S-Risk may initially appear speculative, their underlying assumptions are plausible (Baumann, 2017). Assuming technological progress continues without global destabilization, the feasibility of space colonization introduces astronomical stakes. Advanced technology could facilitate the creation of unprecedented suffering, intentionally or unintentionally, and there exists the possibility that those in power may not sufficiently prioritize the well-being of less powerful entities.

Thirdly, historical precedents, such as factory farming, demonstrate structural similarities to smaller-scale (incidental) S-Risk. Humanity's mixed track record in responsibly handling new technologies raises uncertainties about whether future technological risks will be managed with appropriate care (Baumann, 2017).

It's important to note that these arguments align with the acknowledgment that technology can bring benefits or improve human quality of life (Baumann, 2017). Focusing on S-Risk, which are events that lead to severe suffering, does not necessarily entail an excessively pessimistic outlook on the future of humanity. The concern about S-Risk arises from normative reasons, emphasizing the moral urgency of mitigating severe suffering (Bostrom, 2019). This perspective is crucial in highlighting the ethical imperative to address potential catastrophic events that could lead to immense harm and suffering. It does not inherently reflect a pessimistic view of the future but rather underscores the moral responsibility to prevent or minimize severe suffering.

The moral urgency associated with S-Risk is rooted in the recognition of the potential destabilizing effects of scientific and technological progress on civilization (Bostrom, 2019). As such, the focus on S-Risk is not driven by pessimism but rather by a proactive approach to addressing the potential consequences of advancements in capabilities and incentives that could have destabilizing effects. This proactive stance aligns with the moral imperative to reduce severe suffering and prioritize the well-being of individuals and societies.

Assessing the seriousness of risks involves considering their expected value, which is the product of scope and the probability of occurrence. S-Risks, given their potentially vast scope and a non-negligible probability of occurrence, could outweigh present-day sources of suffering, such as factory farming or wild animal suffering, in expectation (Baumann, 2017).

Baumann (2017) identifies that the limited attention given to actively reducing S-Risk is unsurprising, as these risks are rooted in abstract considerations about the distant future, lacking emotional resonance. Even individuals concerned about long-run outcomes often focus on achieving utopian outcomes, directing relatively few resources toward s-risk reduction (Baumann, 2017). However, this also implies the potential existence of low-hanging fruit, making the marginal value of working on s-risk reduction particularly high.

Distinct form Conspiracy Theories and Alarmism

S-Risks, short for "suffering risks," and conspiracy theories are distinct concepts that pertain to different domains of discussion. S-Risks refer to scenarios where advanced technologies or other developments could lead to outcomes involving vast amounts of suffering on a cosmic scale. These scenarios often involve unintended consequences, existential risks, or situations where suffering becomes widespread and severe (Bostrom, 2013). The concept of S-Risks is rooted in serious discussions within fields such as existential risk studies, ethics, and speculative philosophy (Taggart, 2023). It is not a conspiracy theory but rather a theoretical framework for considering the potential negative outcomes of certain developments.

Conspiracy theories, on the other hand, are explanations or beliefs that attribute events or situations to a secret, often sinister, and typically deceptive plot by a group of people or organizations (Douglas & Sutton, 2018). They can cover a wide range of topics, from historical events and political occurrences to scientific advancements. They often involve the idea that there is a hidden truth deliberately being concealed from the public. Conspiracy theories can vary significantly in terms of their credibility, ranging from well-supported alternative explanations to baseless and unfounded speculations (Van Prooijen & Douglas, 2017).

S-Risks are commonly discussed in the context of emerging technologies, artificial intelligence, and potential future scenarios where the well-being of sentient beings could be at stake (Bostrom, 2013). This concept is rooted in serious discussions within fields such as existential risk studies, ethics, and speculative philosophy. It is a theoretical framework for considering the potential negative outcomes of certain developments, especially in the context of advanced technologies. On the other hand, conspiracy theories involve beliefs or explanations that suggest secretive and often malevolent forces orchestrating events, which may or may not have a basis in reality (Van Prooijen & Douglas, 2017).

Discussions around S-Risk involve considering potential scenarios and their implications for sentient beings, typically within the frameworks of science, ethics, and philosophy (Powell et al., 2022). S-Risks discussions aim to contribute to ethical and thoughtful considerations in the development and deployment of technologies to avoid potential negative consequences (Kreitzer, 2012).

On the other hand, alarmism is characterized by the tendency to exaggerate or sensationalize risks or threats, often leading to unnecessary fear or panic (Bostrom & Yudkowsky, 2014). Alarmism lacks a rational basis and may involve the exaggeration of risks without a thorough examination of evidence or a reasonable understanding of the context (Bostrom & Yudkowsky, 2014). Unlike S-Risk, alarmism may not necessarily focus on responsible discourse or risk mitigation and may prioritize creating a sense of urgency or fear without offering constructive solutions (Cuyvers et al., 2011). While S-Risk involve a serious and reasoned examination of potential scenarios that could lead to suffering, alarmism tends to be a more exaggerated and emotionally driven approach that may not be grounded in evidence or responsible discourse (Bostrom & Yudkowsky, 2014).

Furthermore, S-Risks and doomsday prophecies are two distinct concepts within the realm of future studies and existential risk. While both involve potential negative outcomes for the future, they differ in their focus, nature, and underlying assumptions. S-Risks specifically refer to scenarios where advanced technologies or other developments could lead to widespread and severe suffering, potentially on astronomical scales (Umbrello & Sorgner, 2019). The emphasis is on the well-being of sentient beings and the ethical considerations associated with potential future scenarios. S-Risks are grounded in the concern for avoiding or mitigating existential risks that could result in significant suffering. The discussions around S-Risk often involve ethical considerations, responsible research, and risk assessment.

On the other hand, doomsday prophecies typically refer to predictions or beliefs about an impending catastrophic event that leads to the end of the world or human civilization. These prophecies often involve apocalyptic scenarios and may be rooted in religious, cultural, or speculative beliefs. Doomsday prophecies are often based on specific worldviews, cultural narratives, or interpretations of religious texts. They may not necessarily involve a rational or evidence-based assessment of future events.

In summary, S-Risk are part of a discourse that encourages responsible development, ethical considerations, and risk mitigation, with a specific focus on avoiding scenarios that could lead to widespread suffering.

Chapter two

Types of S-Risks

The concept of S-Risk, which refer to risks of astronomical suffering, has been identified by the Center for Reducing Suffering. These S-Risk can be categorized into three types: agential, incidental, and natural S-Risk (Hilton, 2022). Agential S-Risk arise from intentional harm caused by powerful actors, either due to a desire to cause harm, negative-sum strategic interactions, or indifference to other forms of sentient life. Incidental S-Risk, on the other hand, result as a side effect of certain processes, such as economic productivity, attempts to gain information, or violent entertainment. Lastly, natural S-Risk encompass suffering that occurs without intervention from any agent, such as wild animal suffering on a large scale across the universe (Lotto et al., 2013).

The identification of these S-Risk is crucial in understanding and addressing potential sources of immense suffering. Agential S-Risk, for instance, highlight the ethical implications of intentional harm caused by powerful actors, whether towards other ethnic groups, other species, or other forms of sentient life. Incidental S-Risk draw attention to the unintended suffering that may arise as a byproduct of various human activities, such as economic productivity and scientific experimentation. Natural S-Risk underscore the potential for widespread suffering that occurs without any intervention, such as in the case of wild animal suffering (Lotto et al., 2013).

Understanding and addressing these S-Risk is essential for developing strategies to mitigate and prevent astronomical suffering. By categorizing these risks, researchers and policymakers can work towards identifying specific interventions and ethical frameworks to address each type of s-risk effectively. This can involve developing ethical guidelines for powerful actors, implementing regulations to minimize unintended suffering from human activities, and exploring ways to alleviate natural suffering that occurs without intervention (Lotto et al., 2013).

Incidental S-Risks

In the realm of incidental S-Risk, these risks emerge when the pursuit of a specific goal leads to substantial suffering, even without a deliberate intent to cause harm (Baumann, 2022). The agents responsible for these S-Risk are either indifferent to the resulting suffering, or, in theory, would prefer a suffering-free alternative but are unwilling to bear the associated costs in practice (Baumann, 2022).

Further categorization of incidental S-Risk is possible based on underlying motivations. In one category, economic incentives and market dynamics drive significant suffering as an unintended consequence of high economic productivity, exemplified by the plight of animals in factory farms. Future technological advancements might amplify these dynamics on a larger scale. For instance, the efficiency of learning processes, suggested by the evolutionary use of pain, might lead to increased suffering in sufficiently advanced and complex reinforcement learners (Baumann, 2022).

Another category involves suffering instrumental for information gain, where experiments on sentient beings yield scientific insights at the cost of serious harm to the subjects. Emerging technology may facilitate such practices on a grander scale, such as running numerous simulations of artificial minds capable of suffering. This could occur for reasons like enhancing knowledge of human psychology or predicting the behaviour of other agents in specific situations (Baumann, 2022).

Entertainment-driven scenarios also pose risks, with complex simulations potentially causing significant suffering if they involve artificially sentient beings. While today's violent forms of entertainment are victimless as long as they remain fictional, the combination of such content with sentient artificial minds in the future could introduce a potential s-risk. Examples include historical instances like gladiator fights and public executions, as well as contemporary video games and movies, which, when combined with sentient artificial minds, may present ethical concerns regarding the well-being of simulated beings (Baumann, 2022).

As some further examples of incidental s-risk, in the realm of technological advancements and unintended consequences, the development of highly intelligent artificial systems poses a notable risk. If these systems are not meticulously programmed, there is a potential for inadvertent harm. For instance, inadequately safeguarded AI systems might prioritize their goals without due consideration for ethical implications, resulting in unintentional large-scale suffering.

In the sphere of biomedical research and unforeseen outcomes, the conduct of experiments to advance medical knowledge introduces a risk of causing unintended suffering to research subjects. Particularly in cases where the potential harm outweighs the benefits, unanticipated negative consequences in biomedical research, especially as technology evolves, could give rise to incidental S-Risks.

Environmental modifications and their ecological impact present another facet of incidental S-Risks. Human activities, such as large-scale environmental changes through geoengineering projects, may yield unintended consequences on ecosystems and wildlife. These alterations have the potential to induce widespread suffering among nonhuman animals, underscoring the incidental S-Risks associated with the modification of natural environments.

Global economic policies and their impact on social disparities contribute to yet another category of incidental S-Risks. Economic strategies or globalization endeavours aimed at enhancing productivity may inadvertently exacerbate social inequalities, leading to large-scale suffering, particularly among vulnerable populations. The unintended negative consequences of economic decisions can thus manifest as incidental S-Risks.

Moreover, the realm of space exploration introduces its own set of unforeseen consequences. Future endeavours, such as terraforming other planets, could bring about ecological transformations with unforeseen impacts. If these activities result in the creation of new ecosystems inhabited by sentient beings, there exists a risk of incidental suffering on an astronomical scale.

These examples underscore the diverse contexts in which incidental S-Risks may emerge, emphasizing the imperative of meticulous consideration regarding the potential negative outcomes associated with various human endeavours.

Agential S-Risks

The preceding examples highlight scenarios where substantial suffering is an unintentional byproduct of an efficient problem-solving approach (Baumann, 2022). In contrast, a distinct category of S-Risk, termed agential S-Risk, emerges when an agent actively and intentionally seeks to cause harm.

An illustrative instance is sadism, where a minority of future agents may derive pleasure from inflicting pain on others, a behaviour that, while hopefully rare, could be influenced by societal norms and legal safeguards. Nevertheless, technological advancements may amplify the potential harm inflicted by sadistic acts (Baumann, 2022).

Agential S-Risk may also manifest in instances of intense hatred, often stemming from human tendencies to form tribal identities and delineate an ingroup and an outgroup. Extreme cases may escalate into a desire to inflict maximum harm on the perceived adversary, mirroring historical atrocities against those identified with the "wrong" religion, ethnicity, or political ideology. Such dynamics could potentially unfold on a cosmic scale in the future.

Retributivism, seeking vengeance for perceived wrongs, is another theme, exemplified by excessive criminal punishment in historical and contemporary penal systems that have imposed extraordinarily cruel penalties (Baumann, 2022).

These themes may overlap or contribute to an escalating conflict. Large-scale warfare or terrorism involving advanced technology could pose an agential s-risk, given the potential suffering of the combatants and the reinforcement of negative dynamics like sadism, tribalism, and retributivism (Baumann, 2022). Conflict often exacerbates negative traits in individuals. Moreover, in an escalating conflict or war, agents might make threats to deliberately bring about worst-case outcomes in an attempt to coerce the opposition (Baumann, 2022).

A critical factor intensifying agential S-Risk is the presence of malevolent personality traits, such as narcissism, psychopathy, or sadism, in influential individuals. Totalitarian dictators like Hitler or Stalin in the 20th century serve as historical examples of the significant harm caused by leaders with malevolent traits (Baumann, 2022).

The following are additional examples of Agential S-Risks, where intentional actions by agents lead to the deliberate infliction of harm:

Bioweapon Development and Use: Agents deliberately develop and use bioweapons with the intent of causing harm to specific populations. The intentional release of bioweapons could lead to widespread suffering, posing a significant threat to global health and security.

Terrorism with Advanced Technology: Agents, driven by extremist ideologies, employ advanced technology for acts of terrorism. The deliberate use of technology in acts of terrorism may cause substantial harm, both to targeted individuals and society at large, amplifying suffering on a significant scale.

Information Warfare and Psychological Harm: Agents engage in information warfare with the aim of causing psychological harm to individuals or entire populations. Intentional manipulation of information to induce fear, anxiety, or distress can lead to widespread psychological suffering.

Unethical Scientific Experiments: Agents conduct scientific experiments with the explicit purpose of causing harm to subjects. Deliberate infliction of suffering in the pursuit of scientific goals may lead to severe ethical concerns and intentional harm.

Social Engineering for Malevolent Purposes: Agents use social engineering techniques with the goal of causing harm to individuals or society. Manipulative actions aimed at disrupting social structures or causing harm for malicious purposes may result in intentional suffering.

Autonomous Weapons in Conflict: Nations deploy autonomous weapons with the intention of causing harm in conflicts. Intentional use of autonomous weapons in warfare may lead to widespread suffering, with potential escalation and unintended consequences.

Extremist Ideologies and Violence: Agents driven by extremist ideologies intentionally commit violent acts against specific groups. Ideologically motivated violence may result in intentional harm, reflecting historical instances of extremism causing suffering.

These examples illustrate various ways in which intentional actions by agents can contribute to the deliberate infliction of suffering, posing significant risks classified as Agential S-Risks.

Natural S-Risks

The classifications of incidental and agential S-Risk do not encompass all conceivable scenarios, as suffering may also arise "naturally" without the involvement of powerful agents (Baumann, 2022). An example of this is evident in the suffering experienced by animals in the wild. While wild animals often escape our immediate attention, they constitute the majority of sentient beings on Earth (Baumann, 2022). Contrary to the idealized view of nature, animals in the wild face challenges such as hunger, injuries, conflicts, painful diseases, predation, and other significant harms.

The term "natural S-Risk" is introduced by Baumann (2022) to denote the prospect that such "natural" suffering occurs, or will occur in the future, on an astronomical scale. For instance, it would qualify as a natural s-risk if widespread wild animal suffering were observed on numerous planets or if it were to extend across the cosmos, transcending Earth's confines. (Should human civilization inadvertently propagate wild animal suffering throughout the universe, perhaps in the process of terraforming other planets, that would be categorized as an incidental s-risk.)

Natural S-Risks encompass scenarios where suffering occurs "naturally" without direct involvement or intent by powerful agents. Examples of Natural S-Risks include instances of wild animal suffering in natural ecosystems. Here, wild animals grapple with challenges such as hunger, injuries, diseases, conflicts, and predation, resulting in widespread suffering among sentient beings as they navigate the inherent difficulties of survival in the wild.

Ecological disruptions caused by natural events, such as earthquakes, volcanic eruptions, or meteor impacts, present another category of Natural S-Risks. These disruptions can lead to suffering among various species due to habitat loss, resource scarcity, and increased competition for survival.

Climate change-induced effects, whether occurring naturally or accelerated by human activities, contribute to Natural S-Risks. Rising temperatures, extreme weather events, and shifts in habitat suitability can result in adverse consequences for ecosystems and wildlife, leading to suffering among species unable to adapt swiftly.

Natural disease outbreaks, unrelated to human activities, constitute instances of Natural S-Risks, causing widespread suffering among wildlife populations. Diseases with high morbidity and mortality rates contribute to the challenges faced by animals in their natural environments.

In environments where resources are naturally limited, Natural S-Risks arise from the competition among species for food, water, and shelter. The struggle for survival due to scarcity is a factor that leads to suffering in the natural world.

Natural selection may contribute to Natural S-Risks by favouring traits in certain species that predispose them to experience suffering. Genetic factors influencing vulnerability to diseases, injuries, or other sources of distress further contribute to the challenges faced by sentient beings in the natural world.

These examples highlight the challenges faced by sentient beings in the natural world, emphasizing the intrinsic risks of suffering that arise without the direct influence of intentional actions by powerful agents.

Known, Unknown, Influenceable and Non-Influenceable S-Risks

Another valuable categorization involves distinguishing between known and unknown S-Risk. A known s-risk pertains to scenarios that we can presently conceive, recognizing the inherent limitations of our imagination. Unknown S-Risk may manifest in scenarios that elude our current understanding or even transcend our comprehension (Baumann, 2022). These unforeseen risks might arise through unanticipated mechanisms resulting in substantial incidental suffering ("unknown incidental S-Risk"), unforeseen intentional harm caused by future agents ("unknown agential S-Risk"), or revelations that challenge the apparent absence of astronomical natural suffering in our universe ("unknown natural S-Risk").

Another dimension for distinguishing S-Risk involves the type of sentient beings affected, encompassing those impacting humans, nonhuman animals, and artificial minds. While the examples discussed predominantly focused on the latter two categories, it's crucial to note that S-Risk affecting non-humans may be comparatively neglected, although this doesn't diminish the importance of those primarily affecting humans (Baumann, 2022).

Lastly, a practical distinction can be drawn between influenceable and non-influenceable S-Risk. An s-risk is deemed influenceable if, in principle, preventive measures can be taken, even if such actions may be challenging in practice. Prioritizing attention on influenceable S-Risk is paramount, and all the discussed examples so far fall within this category. Non-influenceable astronomical suffering, occurring in parts of the universe beyond our reach, is regrettable but doesn't warrant our attention or efforts for mitigation (Baumann, 2022).

Here are examples illustrating the concepts of Known, Unknown, Influenceable, and Non-Influenceable S-Risks:

Known S-Risks: Advanced AI with Unintended Harm - The scenario where the development of highly intelligent artificial systems may inadvertently cause suffering is a known S-Risk. If AI systems lack adequate safeguards, they might optimize for their goals at the expense of ethical considerations, leading to unintentional large-scale suffering.

Unknown S-Risks: Unanticipated Biomedical Research Outcomes - In the realm of biomedical research, unknown S-Risks may emerge as technology evolves. Conducting experiments to advance medical knowledge may have unforeseen negative consequences, especially when the potential harm outweighs the benefits. These unknown incidental S-Risks could manifest as unintended suffering in research subjects.

Influenceable S-Risks: Economic Inequality and Displacement - Economic policies or globalization efforts aimed at increasing productivity may unintentionally exacerbate social disparities, contributing to large-scale suffering. While challenging to address, these influenceable S-Risks involve making policy changes to mitigate unintended negative consequences.

Non-Influenceable S-Risks: Unknown Natural S-Risks in Unreachable Parts of the Universe - Natural S-Risks, such as suffering in parts of the universe inaccessible to intervention, are non-influenceable. Despite being lamentable, these scenarios are beyond our reach, and our attention or effort cannot impact them directly.

Chapter three

Likelihood of S-Risks

It is plausible that the probability of encountering substantial risks of suffering is sufficiently low, leading to the suggestion that excessive focus on such risks might not be warranted (Hilton, 2022). This optimism is rooted in the assumption that robust incentives exist to avert these risks, given the general inclination of agents to pursue happiness and avoid suffering. Despite uncertainties about whether historical trends indicate a continual improvement in the quality of life over time, the intrinsic deep asymmetry in our attitudes toward suffering may act as a mitigating factor, potentially maintaining the occurrence of S-Risk at a minimal level.

However, Hilton (2022) notes that there are compelling reasons to entertain a heightened level of concern. If human extinction does not transpire, there is a reasonable likelihood that technological progress will endure. Consequently, humanity may, at some point, venture into space, as discussed in our profile on space governance (Hilton, 2022). This expansion into space introduces the prospect of future outcomes on an astronomical scale, with both positive and negative implications.

Advanced technology, being versatile, opens up a broad spectrum of possibilities. The increasing sophistication of our technology widens the scope of achievable outcomes. In cases where there is motivation to intentionally create suffering, the expanded capabilities of technology raise the realistic possibility of realizing such suffering (Hilton, 2022).

Examining historical precedents, such as instances of factory farming, wild animal suffering, and slavery, reveals situations where significant and widespread suffering occurred. These historical examples underscore the potential for similar issues to emerge in the future, prompting a need for careful consideration and proactive measures (Hilton, 2022).

The likelihood of suffering risks is a complex and speculative matter, and it depends on various factors such as technological advancements, societal developments, ethical considerations, and the understanding of potential risks.

Disjunctive Nature of S-Risks

The disjunctive nature of S-Risks accentuates the inherent complexity and uncertainty surrounding potential existential threats. This term refers to the idea that S-Risks can manifest in diverse and unforeseen ways, encompassing a broad array of scenarios that may lead to significant suffering on a cosmic scale. This characteristic complicates the task of predicting specific instances of S-Risks, as they are not confined to a narrow set of circumstances or outcomes. The complexity of the disjunctive nature arises from various factors, each contributing to the multifaceted landscape of S-Risks.

Technological Evolution: S-Risks often hinge on advancements in technology, especially in areas like artificial intelligence, biotechnology, and other fields. The disjunctive nature emerges because predicting the trajectory and impact of technological evolution is inherently challenging. Unforeseen breakthroughs, innovations, or convergences of technologies may lead to unexpected scenarios with the potential for widespread suffering.

The evolution of technology, particularly in areas such as artificial intelligence (AI) and biotechnology, presents a complex landscape with potential for unforeseen breakthroughs and convergences that may lead to unexpected scenarios with the potential for widespread suffering (Nevoigt, 2008). The disjunctive nature of technological evolution makes it inherently challenging to predict its trajectory and impact (Nevoigt, 2008). For instance, the use of AI in digital mental well-being and psychological intervention has been highlighted as a potential area where technological advancements could play an active role in increasing adoption and enabling reach (Inkster et al., 2018). Similarly, the development of new artificial intelligence information communication technology and robot technology has been a focal point, emphasizing the need to manage the ethical issues and risks associated with these emerging technologies (Lu et al., 2017; Meek et al., 2016). Furthermore, the potential risks associated with the malicious use of existing artificial intelligence by criminals and state actors have been recognized, posing threats to digital security, physical security, and the integrity of political systems (Bradley, 2019).

In the realm of biotechnology, the governance and risk management of biotechnology companies have been underscored as critical, especially given their R&D intensity, high regulation, and technological complexity (Carter et al., 2019; Vanderbyl & Kobelak, 2008). Additionally, the assessment of potential risks related to the properties and functions of introduced proteins and the unintended effects resulting from the insertion of introduced genes into plant genomes has been a focus in the evaluation of biotechnological advancements (Guimarães et al., 2010). Moreover, the distinction between beneficial and harmful strains of certain biotechnological applications has been explored, emphasizing the need to differentiate between strains that pose human health risks and those that provide harmless alternatives for biotechnological applications (Berg & Martínez, 2015).

The impact of technological evolution extends beyond AI and biotechnology, with implications for various sectors such as medicine, finance, and environmental economics. For instance, the challenges of artificial intelligence in medicine have been framed, highlighting the need to address the implications and considerations associated with the integration of AI in healthcare (Yu & Kohane, 2018). Furthermore, the rise of digital finance has been identified as having a vital impact on the evolution of small- and medium-sized enterprises, emphasizing the influence of digital technology on the financial landscape (Li, 2021). Additionally, the tripartite evolutionary game theory approach for low-carbon power grid technology cooperation with government intervention reflects the broader implications of technological evolution in the context of environmental economics and government policies (Zhao et al., 2020).

Diverse Agents and Motivations: The presence of various agents—entities or individuals with different motivations and goals—contributes to the disjunctive nature of S-Risks. These agents may include nation-states, corporations, researchers, or even non-human actors. The motivations behind intentional harm or the unintentional creation of suffering can vary widely, making it difficult to anticipate specific actors or their actions.

To comprehend the disjunctive nature of S-Risks, it is essential to consider the presence of diverse agents with varying motivations and goals. These agents can range from nation-states and corporations to researchers and even non-human actors, each with their own unique set of motivations and intentions. The complexity arising from the diverse motivations of these agents makes it challenging to anticipate specific actions or actors involved in causing intentional harm or unintentional suffering (McKee et al., 2021; Stacchezzini et al., 2022).

McKee et al. (2021) emphasize the significance of understanding the relationship between generalization and diversity in the multi-agent domain, shedding light on the complexities that arise from the diverse motivations of different agents. Similarly, Stacchezzini et al. (2022) highlight the challenges posed by risk-related disclosure within integrated reports, tracing the involvement of diverse human and non-human actors in addressing these reporting challenges. Furthermore, the study by McKee et al. (2021) explores the effects of diversity in social preferences in mixed-motive reinforcement learning, underscoring the impact of diverse motivations on the behaviour of reinforcement learning agents.

Furthermore, the work of Blennow and Persson (2021) on citizens' responses to climate change sheds light on the correlation between motivations for personal decisions favouring adaptation or mitigation of climate change and the risk perception of the decision-making agents, further emphasizing the role of diverse motivations in shaping responses to global challenges.

Global and Interconnected Systems: S-Risks are embedded in complex, global systems where actions in one domain can have far-reaching consequences. The interconnectedness of various sectors, such as technology, economics, and the environment, adds to the disjunctive nature. Changes in one area may trigger cascading effects that lead to unforeseen outcomes contributing to S-Risks.

Economic systems are deeply interconnected on a global scale, with economic policies, trade relationships, and globalization efforts having widespread impacts on societies worldwide. Policies geared towards increasing productivity or achieving economic goals may inadvertently exacerbate social disparities, ultimately contributing to large-scale suffering. The intricate interconnectedness of global economies introduces layers of complexity when attempting to predict the outcomes of economic decisions, emphasizing the potential for systemic S-Risks.

Environmental interactions represent another facet of the interconnected systems contributing to S-Risks. Environmental changes, whether stemming from natural occurrences or induced by human activities, play a role in these complex dynamics. Large-scale modifications such as geoengineering projects or alterations to natural habitats can have far-reaching consequences on ecosystems and wildlife. The disruption of environmental balance may lead to unintended suffering among nonhuman animals, underscoring the disjunctive nature of S-Risks linked to environmental interconnectedness.

The interdependence extends to societal and cultural realms, influencing ethical considerations and behavioural patterns globally. Changes in cultural attitudes towards technology, ethical frameworks, or social norms hold profound implications for the approach to technological advancements. Unanticipated shifts in societal values could contribute to the emergence of S-Risks as technologies evolve and seamlessly integrate into daily life.

Furthermore, the interconnected nature of these systems creates a scenario where seemingly isolated actions can set off cascading effects. Changes in technology, for instance, can influence economic structures, impacting societal well-being and environmental sustainability. Unintended consequences in one domain may trigger a chain reaction, leading to unforeseen outcomes that contribute to S-Risks.

Ethical Considerations and Cultural Shifts: Evolving ethical considerations and cultural shifts also play a role. As societies and individuals grapple with the ethical implications of technological advancements, unintended consequences or intentional actions that lead to suffering may emerge (Kelly et al., 2013). The disjunctive nature is amplified by the dynamic and evolving nature of ethical frameworks.

Ethical considerations and cultural shifts are integral components in the intricate web of factors contributing to S-Risks. As technological advancements continue to reshape the landscape of human capabilities, societies and individuals grapple with the ethical implications that arise from these changes. The dynamic and evolving nature of ethical frameworks plays a pivotal role in shaping how individuals and societies perceive, adopt, and regulate emerging technologies.

Ethical considerations and cultural shifts are crucial factors in the complex landscape of S-Risks, especially in the context of advancing technologies. The dynamic nature of ethical frameworks and the impact of cultural shifts play a pivotal role in shaping the perception, adoption, and regulation of emerging technologies (Widisuseno, 2020). These shifts are evident in various fields, including the development of science and technology, environmental resource management, and decision analytical modelling (Browning et al., 2015; Vezér et al., 2017; Widisuseno, 2020). The role of philosophy in overcoming cultural trends in the development of science and technology is also highlighted, emphasizing the need for ethical guardrails in research, especially when involving children (Harcourt & Quennerstedt, 2014; Widisuseno, 2020). Furthermore, the ethical and practical considerations in emerging fields such as cell and gene therapy and artificial amniotic sac and placenta technology underscore the importance of ethical development and societal dialogue (Dubé et al., 2022; Verweij et al., 2021). Additionally, the integration of ethical considerations in qualitative studies and the engineering design process reflects the in-depth nature of ethical reflections in various domains (Arifin, 2018; Millar et al., 2020).

The paradigm shifts in cultural informatics and the trade-offs between epistemic and ethical considerations further emphasize the evolving nature of ethical frameworks and the need for a comprehensive approach to address ethical implications (Poulopoulos & Wallace, 2022; Vezér et al., 2017). These shifts are not only limited to technological advancements but also extend to societal and cultural revolutions, demanding a re-imagination of professional practice and a redefinition of human identity in the context of computing and technology (Giannini & Bowen, 2021). The convergence of disruptive technologies, such as AI, IoT, and blockchain, also brings ethical issues to the forefront, necessitating a conceptual ethics framework to address the societal impact of these technologies (Nehme et al., 2021).

As societies confront new technological possibilities, ethical considerations become a focal point of discourse. The ethical evaluation of the potential impacts of advanced technologies on various facets of life, from privacy and autonomy to environmental sustainability and social equity, becomes an ongoing process (Gong et al., 2019). This evaluative process is crucial for anticipating and navigating the unintended consequences that may emerge and lead to suffering. Moreover, the negative impact of technological advancements by humans on nature is intertwined with the current organization of human society and way of life (Nogueira et al., 2021).

The impact of sustainability and the potential benefits to our society are growing in importance every day (Patten, 2019). Sustainable technologies should be designed to balance economic, environmental, and social considerations, providing benefits to all stakeholders while minimizing negative impacts on the environment and society (Chovancová et al., 2023). Furthermore, ecological urbanism is seen today as one of the keys towards unlocking the quest for a low-carbon or fossil fuel-free society (Bibri, 2020).

The adoption of technologies in older adult populations can support autonomy and independence (Liu et al., 2022). Additionally, the advancement of technology has complicated the use and impacted the effectiveness of information and communications technology (ICT) (Nasser & Yanhui, 2019). Moreover, the digitization of medical records combined with advancements in natural language processing enable deep learning in the areas of public health and specifically in the field of epidemiology (Sadownik, 2022).

The dramatic increase in the complexity of environmental degradation and climate change phenomena has led to a growing need for more innovative, advanced, and immediate solutions based on the collective expertise in artificial intelligence (AI), Big Data, and the Internet of Things (IoT) (Bibri et al., 2023). Advanced methods and technologies towards environmental sustainability are crucial in addressing the challenges arising from environmental, economic, social, and cultural change (Jia & Duić, 2021).

Cultural shifts, encompassing changes in societal values, norms, and attitudes towards technology, further influence the ethical landscape. Societies undergo transformations in response to technological innovations, and these shifts can alter perspectives on what is considered acceptable or ethical. For example, shifts in attitudes towards data privacy, genetic engineering, or the use of artificial intelligence can have profound implications for ethical considerations surrounding technological development.