The normalisation of hate speech, including antisemitic rhetoric, poses a significant threat to social cohesion and democracy. While global efforts have been made to counter contemporary antisemitism, there is an urgent need to understand its online manifestations. Hate speech spreads easily across the internet, facilitated by anonymity and reinforced by algorithms that favour engaging, even if offensive, content. It often takes coded forms, making detection challenging.
Antisemitism in Online Communication addresses these issues by analysing explicit and implicit antisemitic statements in mainstream online discourse. Drawing from disciplines such as corpus linguistics, computational linguistics, semiotics, history, and philosophy, this edited collection examines over 100,000 user comments from three language communities. Contributors explore various facets of online antisemitism, including its intersectionality with misogyny and its dissemination through memes and social networks. Through case studies, they examine the reproduction, support, and rejection of antisemitic tropes, alongside quantitative assessments of comment structures in online discussions. Additionally, the volume delves into the capabilities of content moderation tools and deep-learning models for automated hate speech detection. This multidisciplinary approach provides a comprehensive understanding of contemporary antisemitism in digital spaces, recognising the importance of addressing its insidious spread from multiple angles.
ANTISEMITISM IN ONLINE COMMUNICATION
Antisemitism in Online Communication
Transdisciplinary Approaches to Hate Speech for the Twenty-first Century
Edited by Matthias J. Becker, Laura Ascone, Karolina Placzynta and Chloé Vincent
https://www.openbookpublishers.com
©2024 Matthias J. Becker, Laura Ascone, Karolina Placzynta, and Chloé Vincent (eds). Copyright of individual chapters is maintained by the chapter’s authors.
This work is licensed under a Creative Commons Attribution 4.0 International license (CC BY 4.0). This license enables reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator; it also allows for commercial use. Attribution should include the following information:
Matthias J. Becker, Laura Ascone, Karolina Placzynta, and Chloé Vincent (eds), Antisemitism in Online Communication: Transdisciplinary Approaches to Hate Speech for the Twenty-first Century. Cambridge, UK: Open Book Publishers, 2024, https://doi.org/10.11647/OBP.0406
Copyright and permissions for the reuse of many of the images included in this publication differ from the above. This information is provided in the captions and in the list of illustrations. Every effort has been made to identify and contact copyright holders and any omission or error will be corrected if notification is made to the publisher.
Further details about CC BY licenses are available at http://creativecommons.org/licenses/by/4.0/
All external links were active at the time of publication unless otherwise stated and have been archived via the Internet Archive Wayback Machine at https://archive.org/web
Any digital material and resources associated with this volume will be available at https://doi.org/10.11647/OBP.0406#resources
We acknowledge support from the Open Access Publication Fund of Technische Universität Berlin.
ISBN Paperback: 978-1-80511-260-0
ISBN Hardback: 978-1-80511-261-7
ISBN Digital (PDF): 978-1-80511-262-4
ISBN Digital eBook (EPUB): 978-1-80511-263-1
ISBN HTML: 978-1-80511-265-5
DOI: 10.11647/OBP.0406
Cover image: Photo by Marc Bloch, 2023, CC-BY. Cover design: Jeevanjot Kaur Nagpal.
Acknowledgements
Introduction
1. The Cases of Riley and Rooney
Karolina Placzynta
2. Jordan Peterson and Conservative Antisemitism Online
Matthias J. Becker
3. ‘Pop’ Antisemitism and Deviant Communities
Alexis Chapelan
4. “More Like Genocide”
Matthew Bolton
5. Countering Antisemitism Online
Laura Ascone
6. Multimodal Cognitive Anchoring in Antisemitic Memes
Marcus Scheiber
7. Discussion Trees on Social Media
Chloé Vincent
8. Algorithms Against Antisemitism?
Elisabeth Steffen, Milena Pustet, Helena Mihaljević
About the Authors
List of Figures
List of Tables
Index
This volume was produced in the context of the research project Decoding Antisemitism: An AI-Driven Study on Hate Speech and Imagery Online, the pilot phase of which was generously funded by the Alfred Landecker Foundation.
Furthermore, this open access volume was made possible by the support of the TU Berlin Publication Fund.
We express our profound appreciation to Open Book Publishers for their indispensable support throughout the entire publication journey.
Our heartfelt thanks go out to the authors of this compendium, consisting of the committed members of the Decoding Antisemitism research team.
©2024 Matthias J. Becker et al., CC BY 4.0 https://doi.org/10.11647/OBP.0406.00
This book showcases key findings and analyses of an innovative research project in the field of web-related antisemitism studies. Established at the Centre for Research on Antisemitism (ZfA) at the Technische Universität Berlin in 2020, Decoding Antisemitism: An AI-Driven Study on Hate Speech and Imagery Online1 brings together researchers from different disciplines with the aim of exploring the patterns of antisemitic communication on social media. Each researcher has brought their particular experiences, insights and interests from the fields of semiotics—including linguistics (semantics and pragmatics) and image analysis—(social) media studies, history, as well as political and social sciences. Such collaboration ensures that the analyses of the online datasets collected as part of the project have been detailed, nuanced and comprehensive.
At the same time, each of the researchers has been making additional observations, in part thanks to the scope and richness of the dataset: the multitude of topics it contains, the varied angles they can be viewed from, and the multiple overlaps and differences between antisemitic hate speech and many other pertinent phenomena. So far, the joint project-related publications have not been able to completely reflect this diversity of both research interests and discourse phenomena. In this volume, we finally provide a space for broader conclusions from the analysis of current expressions of online antisemitism within the political mainstreams of the UK, Germany and France, but also in exploratory studies in relation to the US as well as to other, more extremist online discourse, carried out within this research project from 2020 to 2024.
The eight studies in this volume are therefore not just an extension of the work within the project, but a product of the interdisciplinary format of Decoding Antisemitism—a format designed to explore a complex object of study (antisemitism), produced in varied patterns (user statements in online threads) in a highly dynamic sphere of communication (the interactive web) that in many ways remains a black box, notoriously difficult to illuminate. Intensifying the efforts in describing, raising awareness of, preventing and regulating online antisemitism, and online hate more generally, is an urgent task not only because of its kaleidoscopic multiplicity and evolving nature, but also because its various expressions seem to be increasing in both number and strength (Zannettou et al. 2018). This became particularly evident after the attacks perpetrated by Hamas on 7 October 2023 (CST 2023, RIAS 2023, SPCJ 2023). However, even such a noticeable trend is difficult to capture fully with the analytical methods available so far, due to this complexity present at the different levels.
The first level is the communication space of the interactive web, which “has dramatically changed the very time/space axes of the subject’s existence” (Kramsch 2009: 159). Comment sections are the core dialogical spaces, where web users address each other as well as an imaginary audience, similar to that of mass media (Virtanen and Kääntä 2018). They can interact with people from across the globe in a spontaneous and immediate manner, reproducing oral interactions (Ko 1996, Herring 2010). As a result, their language can differ markedly from traditional written text. The online comment genre is also characterised by a certain fluidity, and its language can come across as “less correct, complex and coherent than standard written language” (Herring 2008: 616). At the same time, online communication has gained a new type of complexity, enriched, influenced and modified by hashtags, memes and other multimodal elements. It is also affected by various more general conditions online that have a long-term effect on our communication behaviour (Troschke and Becker 2019, see also Schwarz-Friesel 2013). Web users can remain anonymous: this identity distance contrasts with accelerated, intensified, and sometimes even escalated communication processes. Everything can be said, at any time; the prospect of being sanctioned or even prosecuted for online statements has existed for only a short time. The fact that explosive sources and radicalising content are accessible at any time and from any location further reinforces this escalation. All these aspects or conditions of communication have a lasting effect on the way we behave on the internet, but also on how we think and feel, and thus perceive the world in its entirety. The internet now functions as an amplifier, which “increases our potential for good and productive work as well as for inappropriate and immoral endeavors” (Banschick and Banschick 2003: 161).
On social media, web users may be exposed to various and sometimes conflicting viewpoints (Bakshy et al. 2015). However, this exposure does not necessarily result in bridging the divides; instead, web users tend to perceive these divergences as a threat to their own identities and outlooks. This can lead them either to avoid the confrontation (John and Dvir-Gvirsman 2015) or to attack the differing points of view (Mor et al. 2016). They also tend to seek out sources confirming their existing opinions (Stroud 2011, Monnier and Seoane 2019, Wolleback et al. 2019), and to join virtual communities which already share their interests and points of view. Even though the notion of such echo chambers is starting to come under critique (Arguedas et al. 2022), several researchers nevertheless maintain that they exist (Matuszewski and Szabó 2019, Wolleback et al. 2019). Echo chambers strengthen both the bonds among web users and the ideologies they express (Pariser 2011), a polarisation which may become particularly dangerous when the ideologies circulating within these communities are hate ideologies, as they may lead to an increased dehumanisation of the Other through the language they employ (Pacilli et al. 2016, Cassese 2019). The spread of hate speech is facilitated, again, by the sense of anonymity such online milieus create (Mondal et al. 2017), which in turn escalates the expression of hateful and exclusionary ideas which web users may not have articulated in offline interactions (Schwarz-Friesel 2013).
Normalisation of hate speech informs the second level of the intricate phenomenon at hand. As hate speech spreads from extremist milieus (Ebner 2023) into mainstream communication, the boundaries of what can be said without fear of condemnation from one’s peers, or banishment from publicly shared spaces, are pushed ever further. This emboldens individuals to express hatred in online spaces more frequently and more freely; through the repetition of hate-speech fallacies and stereotypes, they create new discourse norms, often mirrored in official or legal regulations. Statements by public figures and internet celebrities explicitly or implicitly encouraging hate can boost and accelerate this process, even as online discourse can equally quickly turn against them. Despite the efforts invested in moderating online communications, the amount of data is so vast that it is difficult for the various platforms to track all hate speech content. Furthermore, to avoid detection by human or automated moderators, but also to convey messages in an attractive manner, web users resort to regularly updated discursive strategies, such as wordplay, allusions and coded memes.
The effect of normalised verbal violence can perhaps be felt in the rise in physical violence (Saha, Chandrasekharan and De Choudhury 2019, Müller and Schwarz 2020). In recent years, its increased presence has at the very least correlated with the radicalisation of social and political movements, counter-movements and political groups (Tappin and McKay 2019), and with segregating tendencies driven by extreme polarisation. It also coincides with the trend towards the dehumanisation of out-groups and the invisibilisation of suffering. When analysing hate speech online, it is difficult, if not impossible, to determine whether the speaker intended to hurt the target. Therefore, in both the project and this volume we adopted the INACH definition of cyber hate, which includes both intentional and unintentional discriminatory statements.2
Antisemitism, the third level of the object of our study, is a chameleon-like hate ideology which has kept morphing and adapting throughout its existence over two millennia (Wistrich 1992, Bergmann 2016; for the distinction between anti-Judaism and antisemitism see Julius 2010, Williams and Wald 2023). From anti-Judaism in times of Christianity to the racially charged antisemitism of modernity, two further forms were added in the twentieth century: secondary (post-1945) and Israel-related antisemitism, which prove how highly complex and adaptable this hate ideology can be, embedding itself in various social and political milieus, and now also thriving online (for secondary antisemitism, see Becker et al. 2024). On the one hand, the conceptual (i.e. content-related) repertoire of antisemitism has become broader; on the other, classical stereotypes such as deicide, greed, evil or mendacity3 have been partly or entirely modernised. The antisemitic notion of Jewish greed (and partly also immorality) has been updated to the idea that Jews or Israel exploit the Holocaust in order to achieve pecuniary or symbolic gains. This new framing has been achieved via the concept of instrumentalisation, of either antisemitism or the Holocaust, centrally anchored in secondary antisemitism. Similarly, the classical concept of innate Jewish evil is now being applied to Israel, in particular in the form of the Nazi analogy. These two instances demonstrate how versatile antisemitism is, and how highly compatible it seems to be with a wide spectrum of political positioning and social environments.
Antisemitism is not only a threat to Jewish communities but is also one of the greatest challenges to social cohesion and the future of democracy, as hatred of Jews often correlates with a resentful attitude and a simplistic binary worldview pitting a supposedly homogenous ‘us’ against a destructive and malign ‘them’ in the arena of politics, the media, as well as in academia and science.4 Moreover, and in stark contrast to other forms of hate, the continuing impact of contemporary antisemitism seems to be dismissed and misunderstood—as shown, for example, by the long-gestating but broadly unnoticed antisemitism within the UK left, which finally emerged into the public domain during Jeremy Corbyn’s leadership of the Labour Party (see the various studies on Labour antisemitism and Jeremy Corbyn; for David Miller, the academic in Bristol accused of spreading conspiracy theories regarding Israel, see Becker et al. 2021). This culture of debate is all too attached to the political positioning or educational background of the person, group or party in question, and loses sight of antisemitism in the process. A similar pattern occurred at the Documenta 15 art exhibition in the German city of Kassel in the summer of 2022, when multimodally conveyed hostility towards Jews was trivialised or indirectly justified through the idea of cultural relativism; the art sector displayed a gross lack of understanding of the subject, as well as simplified, dichotomous world views (see Ascone et al. 2022, Burack 2023).
A sudden awakening in the political and media context could then be observed when fears of a rise in antisemitism (and other hate ideologies) online arose as a result of Elon Musk’s takeover of Twitter (now X), as he announced a reduction in content moderation and a significant cutback in collaborations with the political and academic sectors (Miller et al. 2023; see also Jikeli and Soemer 2023). The antisemitic death wishes and overt conspiracy theories voiced by Kanye West, a successful musician and influencer with a gigantic following, proved that antisemitism has found its place in the mainstream and cultural sector of the West (Chapelan et al. 2023). The repercussions of these events are international in scale: they do not merely fuel various fires in US discourse, but have an enormous impact on the presence and openness of antisemitism on social media worldwide, making hatred of Jews appear permissible and bringing it back to the streets. It is precisely this mainstream antisemitism that—partly camouflaged in its communicative guise, partly legitimised by the speaker’s social position—has the potential to spread throughout society, and is therefore far more dangerous than the hostility towards Jews expressed by radicalised fringe groups, which is rejected from the outset and (in certain cases) sanctioned.
In addition to the complexity of the virtual, dialogue-based communication space and of language, the object of study itself thus poses major hurdles for research-based examination and counter-strategies within the realms of politics and civil society.
At a global level, numerous countries and institutions have taken steps to counter hate speech and antisemitism. The past few years saw the implementation of the Loi Avia and the NetzDG, in France and Germany respectively. According to the latest report by the European Union Agency for Fundamental Rights (FRA),5 14 European countries have already implemented NetzDG measures in order to tackle antisemitism, while eight countries are currently developing new strategies to adopt. Likewise, the Institute for Strategic Dialogue (ISD), together with B’nai B’rith International and the United Nations Educational, Scientific and Cultural Organization (UNESCO), has provided a toolkit to help civil society tackle antisemitism online.6 The Digital Services Act (DSA), proposed by the European Commission in December 2020, aims to regulate digital services and online platforms within the European Union (EU) in order to ensure a safe and accountable digital environment for web users.7 Furthermore, the Inter-Parliamentary Task Force to Combat Online Antisemitism has recently organised two summits, in Washington, DC (September 2022) and Brussels (June 2023), in order to promote an ongoing dialogue between lawmakers and social media platforms.
Despite the national and international efforts to understand and tackle antisemitism online, various gaps are becoming visible. It is imperative to reflect more deeply on how antisemitic discourse comes about and is circulated in the first place, as language is the most important vehicle of any ideology (Althusser 1970 [2011], Pêcheux 1975). Particular attention needs to be paid to the seemingly acceptable, usually unsanctioned dog whistles or implicit and coded forms that are difficult to detect and can therefore spread into politically moderate (online) milieus. This approach will help to understand the impact of online antisemitism on contemporary social, political and cultural contexts and practices in different language communities and to develop counter-strategies against corresponding trends.
The political and legal actors are not the only ones dealing with antisemitism online. Academic researchers and organisations using digital methods are also committed to shedding more light on the issue. Among others, the Anti-Defamation League (ADL) and the Institute for Strategic Dialogue (ISD) monitor and analyse antisemitism in the United States and Europe respectively, aiming to provide tools to counter this hate ideology both online and offline. Coming from different disciplines, researchers investigate this phenomenon from distinct angles: from studies on Hungarian Jewish Displaced Persons (Barna 2016) to research on anti-Jewish conspiracy theories (Finkelstein et al. 2020).
The interactive web has generated an incredibly large amount of data. Due to the relatively large presence of hateful content, various new techniques have been developed to track antisemitism and other hate ideologies. The organisation CyberWell collects antisemitic statements posted online and offers the possibility of reporting them to the relevant social media platforms; the ADL and Zannettou et al. (2020) use vector analyses to investigate antisemitism on platforms such as 4chan and Gab. Meanwhile, the London-based Community Security Trust, in collaboration with Signify, has been analysing antisemitic hate speech on Twitter with the use of machine learning.
The qualitative approach to the study of antisemitic web comments has received little attention so far. The goal of such analyses is to examine the way antisemitism is expressed explicitly and/or implicitly, as well as to identify linguistic patterns that might have gone unnoticed when adopting a quantitative approach only (see Schwarz-Friesel 2019, Becker 2021). Furthermore, some of these qualitative studies have been conducted to develop and improve algorithms that would better detect antisemitic content online. In this context, corpus linguistics (Gries 2009, Leech 2014) proves to be a suitable methodology for investigating the different forms of antisemitic expression. By collecting a large amount of original data from the web, it is possible to identify the linguistic characteristics specific to online antisemitic discourse as well as to determine its statistically significant features.
In order to achieve more solid results, some researchers have adopted mixed-method approaches. Jikeli and Soemer (2023) highlight the importance of combining quantitative and qualitative analyses when studying phenomena as complex as online hate speech. Similar approaches have been employed to closely examine antisemitic content on popular social media platforms, such as X (formerly Twitter) (Jikeli et al. 2014), Facebook and YouTube (Allington and Joshi 2020). In the context of the Decoding Antisemitism project, Mihaljević et al. (2023) have tested Google’s tool Perspective API, which uses machine-learning models to identify abusive web comments and assign them a toxicity score, with the goal of assisting readers and moderators in tackling hateful content. These tests, conducted on large corpora of data collected from mainstream media, provide additional insights into the analysis of online antisemitism, complementing existing studies of more extreme milieus (Hübscher and von Mering 2022).
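For readers unfamiliar with how such toxicity scoring works in practice, the following minimal Python sketch illustrates a request of the kind one can send to Perspective API, which returns a probability-like toxicity score for a single comment. It is only an illustration of the public API, not a reproduction of the project's actual evaluation pipeline; the environment variable name and the example comment are assumptions made for this sketch.

# Minimal sketch: obtaining a toxicity score for one comment from Google's Perspective API.
# Assumes a valid API key is stored in the PERSPECTIVE_API_KEY environment variable.
import os
import requests

API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(comment: str, lang: str = "en") -> float:
    """Return the TOXICITY summary score (between 0 and 1) assigned to a comment."""
    payload = {
        "comment": {"text": comment},
        "languages": [lang],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(
        API_URL,
        params={"key": os.environ["PERSPECTIVE_API_KEY"]},
        json=payload,
        timeout=10,
    )
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    # Hypothetical example comment; real evaluations would iterate over a corpus of web comments.
    print(toxicity_score("Example user comment taken from a news thread."))

A score close to 1 indicates that the model considers the comment very likely to be perceived as toxic; studies such as those cited above compare these scores against expert annotations to assess how well such tools capture implicit or coded antisemitism.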
The pilot project Decoding Antisemitism is based at the Centre for Research on Antisemitism at the Technische Universität Berlin, carrying out research in close collaboration with the HTW (University of Applied Sciences) in Berlin, and with the support of HateLab at Cardiff University and King’s College London. The project seeks to find new, technologically enhanced ways to identify and analyse antisemitism online, in both its explicit and disguised forms. As mentioned at the start of this introduction, it has brought together an international, interdisciplinary team of expert researchers with the goal of investigating the frequency, content and structure of antisemitic hate speech posted on mainstream news websites and social media platforms in the UK, France and Germany.
At the core of the analyses presented in the chapters of this anthology is the project’s research design and the data collected in its course (more than 130,000 comments from the three language communities). Contrary to the approach adopted in many of the existing studies into hate speech, here the collection of the data is not based on a list of keywords such as ‘Israel’ or ‘Jews,’ but rather on news events that are likely to trigger antisemitic reactions. Such events include—to name but a few—the escalation phase in the Arab-Israeli conflict in May 2021, the war in Ukraine and Kanye West’s antisemitic remarks, which have strongly influenced the online debate culture in Europe as well. The threads—i.e. comment sections of news websites and their official social media platforms—were fed into the analysis while retaining their chronological and dialogue structure. The analysis is based on a mixed-method approach: first, the data is examined within the framework of Mayring’s qualitative content analysis (2015). Here, the experts’ annotation follows a classification system developed for the purposes of this research project, which comprises both deductive and inductive categories (Meibauer 2008), depending on the patterns that emerge in the data studied. The categories in the classification system comprise both classic and new forms of antisemitic concepts (Schoeps and Schlör 1996, Julius 2010), as well as the linguistic and multimodal phenomena employed by web users in the analysed comment sections. For the context-sensitive analysis of a comment within a thread, this means that each statement is examined in terms of content (above-mentioned concepts) as well as form (explicitly vs. implicitly communicated), and care is also taken to consider any references to the article topic as well as other user comments.
The results of these qualitative analyses then form the basis of algorithms that replicate the experts’ decisions and are intended to raise the detection of antisemitic content on the internet to a completely new level. The iterative exchange between experts from the fields of humanities and social sciences on the one hand and data science experts on the other will shift the in-depth qualitative analysis to a much broader scale, so that far larger amounts of data can be categorised in a reliable way. The findings obtained in the previous steps also form the basis of quantitative analyses in order to identify statistically significant patterns, completing the picture of trends in contemporary antisemitism.
The chapters collected in this anthology reflect the project’s research design. While the research is based on a solid foundation of traditional antisemitism studies, as well as seminal works from the fields of linguistics, semiotics, history and philosophy, it is innovative in terms of both the data used for analysis and the approach applied to it. The studies presented here employ empirical analysis of content published in the comment sections of online news outlets and different social media platforms in the past few years. This is crucial for a body of work that emphasises the characteristics of current hate speech expressions, and of online hate speech in particular. The fact that it has been sourced from platforms within the political mainstream makes it highly relevant as well: while there is, naturally, great value in the study of extremist milieus (Barna and Knap 2019, CST 2019, Zannettou et al. 2020, ADL 2021, Hübscher and von Mering 2022), our focus is on the discourse that can directly impact the majority of web users in the language communities we explore. Moreover, so-called mainstream antisemitism poses an enormous challenge not only for academic analysis, but also for Jewish communities and society as a whole. While recent antisemitic shootings in Pittsburgh, Halle and Poway are clearly rejected across society, antisemitism in politically moderate contexts—in art, culture and academia—is all too often minimised, as if the position of the discourse absolved it of antisemitism. The results presented in this publication make it clear that this is a misguided judgement. In this respect, the chapters are to be understood as a plea to take a closer look at this desideratum in the context of web-related antisemitism studies and hate studies in general.
Owing to the integrative nature of the Decoding Antisemitism project, the authors of the work presented in this collection have also been able to incorporate a similarly interdisciplinary approach into their individual research. In doing so, they offer a comprehensive view of the issues they focus on, which enriches their findings and creates interest for a wider audience. The approach is also mindful of its frameworks of examination: the subject matter is treated in a holistic and intersectional manner and operationalised within a methodologically rigorous analysis. In terms of content analysis, it focuses on conceptual units as well as on the linguistic and visual patterns carrying these units. Finally, the data is analysed both qualitatively and quantitatively—the former still being underrepresented in the field of internet studies. By reflecting the current reality of contemporary antisemitic hate speech online in mainstream discourses, and by analysing its ability to remain hidden in plain sight by continuously adapting to the current context, this anthology aims to give a full picture of contemporary antisemitism on every level: in terms of its mixed-methods approach, its cross-disciplinary outlook, and the wide range of themes encompassing media, society and culture.
The volume begins with the development of selected conceptual questions in the context of antisemitism studies, presented on the foundation of our empirical analysis of language data. Karolina Placzynta explores the intersections of antisemitism and misogyny in online debates around public figures (Chapter 1). Next, we present linguistic and discourse-analytical case studies centred on the reproduction, support and rejection of antisemitic tropes: Matthias J. Becker examines the dividing line between conservative and far-right antisemitism by analysing projections onto Jordan Peterson, a conservative intellectual, following his interview with the Israeli prime minister (Chapter 2); Alexis Chapelan’s study shows how web users express their support for contested media personalities such as Dieudonné and Kanye West (Chapter 3). Matthew Bolton investigates the concept of GENOCIDE and its use in the discourse around the Arab-Israeli conflict, a topic that has been of intense interest in the wake of the 7 October attacks and the Israeli retaliation in Gaza (Chapter 4), while Laura Ascone assesses the links between the web comments conveying antisemitism and those countering it, and how counter-narratives can sometimes fuel antisemitism and other forms of hate speech (Chapter 5). We also address the emergence of new forms of hate speech: this aspect is examined by Marcus Scheiber in his qualitative analysis of antisemitic memes and of the potential of verbal and visual elements, in combination, to integrate antisemitism into online communication (Chapter 6).
The qualitative analyses are complemented and enriched by quantitative assessments prepared by Chloé Vincent, who looks at the structure of the comment trees in online discussions in relation to the occurrence of antisemitic comments, using the dataset accumulated in the project so far (Chapter 7). Finally, to integrate research questions from the field of data science, Elisabeth Steffen, Milena Pustet and Helena Mihaljević elaborate on recent work regarding the capabilities of content-moderation tools in recognising antisemitic posts as toxic, and report on current achievements in training deep-learning-based models for automated detection of such content (Chapter 8).
Across all the chapters, the authors use numerous examples from the project dataset, taken from the comment sections of mainstream news outlets in the UK, France and Germany. The examples have been anonymised; however, in order to present the data as faithfully as possible, they retain their original spelling, punctuation and grammar, including any errors, inconsistencies or offensive terms. Whenever French or German comments are used to illustrate the text, they have been translated into standard British English, with the original provided in footnotes. The list of specific sources of the examples can be found at the end of each chapter.
Frequent mentions of antisemitic concepts, such as stereotypes and analogies, are presented in small caps, in accordance with the conventions of cognitive linguistics, which uses this format to highlight phenomena that exist on the mental level and can be reproduced through language. Linguistic phenomena, such as irony, puns or death wishes, are not distinguished in this way.
Finally, the chapters will make reference to Decoding Antisemitism: A Guide to Identifying Antisemitism Online (Becker et al. 2024), a publication also linked to the Decoding Antisemitism project. It is a comprehensive guide to both the explicit and coded forms of contemporary antisemitism, including traditional and modern concepts which have been clearly organised, defined and illustrated with a diverse audience in mind. It is an extension of the classification system used in the project, and therefore a useful frame of reference for the studies in this volume.
ADL (Anti-Defamation League). 2021. Gab and 8chan: Home to Terrorist Plots Hiding in Plain Sight, https://www.adl.org/resources/reports/gab-and-8chan-home-to-terrorist-plots-hiding-in-plain-sight
Allington, Daniel and Tanvi Joshi. 2020. “What others dare not say”: An antisemitic conspiracy fantasy and its YouTube audience. In: Journal of Contemporary Antisemitism 3 (1): 35–54
Arguedas, Amy Ross, Craig T. Robertson, Richard Fletcher and Rasmus Kleis Nielsen. 2022. Echo Chambers, Filter Bubbles, and Polarisation: A Literature Review, https://reutersinstitute.politics.ox.ac.uk/echo-chambers-filter-bubbles-and-polarisation-literature-review#header—0
Ascone, Laura, Matthias J. Becker, Matthew Bolton, Alexis Chapelan, Jan Krasni, Karolina Placzynta, Marcus Scheiber, Hagen Troschke and Chloé Vincent. 2022. Decoding Antisemitism: An AI-Driven Study on Hate Speech and Imagery Online. Discourse Report 4. Technische Universität Berlin. Centre for Research on Antisemitism, https://doi.org/10.14279/depositonce-16292
Bakshy, Eytan, Solomon Messing and Lada A. Adamic. 2015. “Exposure to ideologically diverse news and opinion on Facebook”. In: Science 348 (6239): 1130–1132
Banschick, Mark R. and Josepha Silman Banschick. 2003. “Children in cyberspace”. In: Shyles, Leonard (ed.). Deciphering Cyberspace: Making the Most of Digital Communication Technology: 159–199
Barna, Ildikó and Árpád Knap. 2019. “Antisemitism in contemporary Hungary: Exploring topics of antisemitism in the far-right media using natural language processing.” In: Theo-Web 18 (1): 75–92
Becker, Matthias J. 2021. Antisemitism in Reader Comments: Analogies for Reckoning with the Past. London: Palgrave Macmillan
Becker, Matthias J., Daniel Allington, Laura Ascone, Matthew Bolton, Alexis Chapelan, Jan Krasni, Karolina Placzynta, Marcus Scheiber, Hagen Troschke and Chloé Vincent. 2022. Decoding Antisemitism: An AI-Driven Study on Hate Speech and Imagery Online. Discourse Report 2. Technische Universität Berlin. Centre for Research on Antisemitism, https://doi.org/10.14279/depositonce-15310
Becker, Matthias J., Hagen Troschke, Matthew Bolton and Alexis Chapelan (eds). 2024. Decoding Antisemitism: A Guide to Identifying Antisemitism Online. London: Palgrave Macmillan, https://link.springer.com/book/9783031492372
Bergmann, Werner. 2016. Geschichte des Antisemitismus. Munich: Beck
Bergmann, Werner and Rainer Erb. 1986. “Kommunikationslatenz, Moral und öffentliche Meinung. Theoretische Überlegungen zum Antisemitismus in der Bundesrepublik Deutschland”. In: Kölner Zeitschrift für Soziologie und Sozialpsychologie 38 (2): 223–246
Burack, Cristina. 2023. “Documenta 15 trivialized antisemitism, report finds”. Deutsche Welle (10 February 2023), https://www.dw.com/en/documenta-15-trivialized-antisemitism-report-finds/a-64663005
Cassese, Erin C. 2021. “Partisan dehumanization in American politics”. In: Political Behavior 43: 29–50
Chapelan, Alexis, Laura Ascone, Matthias J. Becker, Matthew Bolton, Jan Krasni, Karolina Placzynta, Marcus Scheiber, Hagen Troschke and Chloé Vincent. 2022. Decoding Antisemitism: An AI-Driven Study on Hate Speech and Imagery Online. Discourse Report 5. Technische Universität Berlin. Centre for Research on Antisemitism, https://doi.org/10.14279/depositonce-17105
CST (Community Security Trust). 2019. “Engine of hate: The online networks behind the Labour Party’s antisemitism crisis”. CST Blog, https://cst.org.uk/news/blog/2019/08/04/engine-of-hate-the-online-networks-behind-the-labour-partys-antisemitism-crisis
CST (Community Security Trust). 2023. Antisemitic Incidents Report 2023, https://cst.org.uk/news/blog/2024/02/15/antisemitic-incidents-report-2023
Ebner, Julia. 2023. Going Mainstream: How Extremists Are Taking Over. London: Bonnier Books.
Gadet, Françoise. 2010. “Enjeux de la langue dans l’analyse du discours”. In: Semen 29: 111–123
Herring, Susan C. 2010. “Computer-mediated conversation Part I: Introduction and overview”. In: Language@Internet 7 (2)
Jikeli, Günther, Damir Cavar and Daniel Miehling. 2019. Annotating Antisemitic Online Content. Towards an Applicable Definition of Antisemitism, https://arxiv.org/abs/1910.01214
Jikeli, Günther and Katharina Soemer. 2023. “The value of manual annotation in assessing trends of hate speech on social media: Was antisemitism on the rise during the tumultuous weeks of Elon Musk’s Twitter takeover?” In: Journal of Computational Social Science, https://doi.org/10.1007/s42001-023-00219-6
John, Nicholas A. and Shira Dvir-Gvirsman. 2015. “‘I don’t like you any more’: Facebook unfriending by Israelis during the Israel–Gaza conflict of 2014.” In: Journal of Communication 65 (6): 953–974
Julius, Anthony. 2010. Trials of the Diaspora. A History of Anti-Semitism in England. Oxford: Oxford University Press
Ko, Kwang-Kyu. 1996. “Structural characteristics of computer-mediated language: A comparative analysis of InterChange discourse”. In: Electronic Journal of Communication/La revue électronique de communication 6 (3)
Kramsch, Claire J. 2009. The Multilingual Subject: What Foreign Language Learners Say About Their Experience and Why It Matters. Oxford: Oxford University Press
Matuszewski, Paweł and Gabriella Szabó. 2019. “Are echo chambers based on partisanship? Twitter and political polarity in Poland and Hungary”. In: Social Media + Society 5 (2), https://doi.org/10.1177/2056305119837671
Mondal, Mainack, Leandro Araújo Silva and Fabrício Benevenuto. 2017. “A measurement study of hate speech in social media”. In: Proceedings of the 28th ACM Conference on Hypertext and Social Media, http://people.cs.uchicago.edu/~mainack/publications/hatespeech-ht-2017.pdf
Miller, Carl and David Weir, Shaun Ring, Oliver Marsh, Chris Inskip, Nestor Prieto Chavana. 2023. “Antisemitism on Twitter before and after Elon Musk’s acquisition”. ISD, https://www.isdglobal.org/isd-publications/antisemitism-on-twitter-before-and-after-elon-musks-acquisition
Monnier, A. and A. Seoane. 2019. “Discours de haine sur l’internet”. In: Publictionnaire. Dictionnaire encyclopédique et critique des publics, http://publictionnaire.huma-num.fr/notice/discours-de-haine-sur-linternet/
Mor, Yifat, Neta Kligler-Vilenchik and Ifat Maoz. 2015. “Political expression on Facebook in a context of conflict: Dilemmas and coping strategies of Jewish-Israeli youth”. In: Social Media + Society 1 (2), https://doi.org/10.1177/2056305115606750
Müller, Karsten and Carlo Schwarz. 2020. Fanning the Flames of Hate: Social Media and Hate Crime (5 June 2020), https://ssrn.com/abstract=3082972 or http://dx.doi.org/10.2139/ssrn.3082972
Nirenberg, David. 2013. Anti-Judaism. London: Head of Zeus
Pacilli, Maria Giuseppina, Michele Roccato, Stefano Pagliaro and Silvia Russo. 2016. “From political opponents to enemies? The role of perceived moral distance in the animalistic dehumanization of the political outgroup”. In: Group Processes & Intergroup Relations 19 (3): 360–373
Pariser, Eli. 2011. The Filter Bubble: What the Internet Is Hiding from You. London: Penguin
RIAS (Bundesverband RIAS e.V. Bundesverband der Recherche- und Informationsstellen Antisemitismus). 2023. Antisemitische Reaktionen auf den 07. Oktober. Antisemitische Vorfälle in Deutschland im Kontext der Massaker und des Krieges in Israel und Gaza zwischen dem 07. Oktober und 09. November 2023, https://report-antisemitism.de/publications
Saha, Koustuv, Eshwar Chandrasekharan and Munmun De Choudhury. 2019. “Prevalence and psychological effects of hateful speech in online college communities”. In: Proceedings of the 11th ACM Conference on Web Science 2019: 255–264
Schwarz-Friesel, Monika. 2013. “’Juden sind zum Töten da’ (studivz.net, 2008). Hass via Internet—Zugänglichkeit und Verbreitung von Antisemitismen im World Wide Web”. In: Marx, Konstanze and Monika Schwarz-Friesel (eds). Sprache und Kommunikation im technischen Zeitalter. Wieviel Internet (v)erträgt unsere Gesellschaft? Berlin: De Gruyter: 213–236
Schwarz-Friesel, Monika and Jehuda Reinharz. 2017. Inside the Antisemitic Mind: The Language of Jew-Hatred in Contemporary Germany. Waltham, MA: Brandeis University Press
SPCJ. 2023. Les chiffres de l’antisémitisme en France en 2023, https://www.spcj.org/antis%C3%A9mitisme/chiffres-antis%C3%A9mitisme-france-2023-b
Stroud, Natalie J. 2011. Niche News: The Politics of News Choice. Oxford University Press
Troschke, Hagen and Matthias J. Becker. 2019. “Antisemitismus im Internet. Erscheinungsformen, Spezifika, Bekämpfung”. In: Jikeli, Günther and Olaf Glöckner (eds). Das neue Unbehagen. Antisemitismus in Deutschland und Europa heute. Hildesheim: Olms: 151–172
Virtanen, Mikko T. and Liisa Kääntä. 2018. “At the intersection of text and conversation analysis: Analysing asynchronous online written interaction”. In: AFinLA-e: Soveltavan Kielitieteen Tutkimuksia 11: 137–55
Weitzman, Mark, Robert J. Williams and James Wald (eds). 2023. The Routledge History of Antisemitism. Abingdon: Routledge
Wistrich, Robert. 1992. Antisemitism: The Longest Hatred. New York: Pantheon
Wollebæk, Dag, Rune Karlsen, Kari Steen-Johnsen and Bernard Enjolras. 2019. “Anger, fear, and echo chambers: The emotional basis for online behavior”. In: Social Media + Society 5 (2), https://doi.org/10.1177/2056305119829859
Zannettou, Savvas, Joel Finkelstein, Barry Bradlyn and Jeremy Blackburn. 2020. “A quantitative approach to understanding online antisemitism”. In: Proceedings of the International AAAI Conference on Web and Social Media 14 (1): 786–97, https://ojs.aaai.org/index.php/ICWSM/article/view/7343
1 For further information on the project, see https://decoding-antisemitism.eu. The pilot phase was conducted in collaboration with HTW Berlin, University of Michigan’s School of Information, Cardiff’s HateLab and King’s College London.
2 INACH (International Network Against Cyber Hate) is a network of 34 member organisations from 27 EU countries, jointly working to combat the spread of online hate, https://www.inach.net/cyber-hate-definitions/
3 With regard to the usage of small caps, see explanation at the end of this introduction.
4 See also the rise of antisemitism in the context of dismissive attitudes towards science and educational elites during the Covid-19 pandemic.
5 FRA 2022. “Antisemitism online far outweighs official records”, https://fra.europa.eu/en/news/2022/antisemitism-online-far-outweighs-official-records
6 ISD and B’nai B’rith International 2022. “Online antisemitism: a toolkit for civil society”, https://unesdoc.unesco.org/ark:/48223/pf0000381856
7 See European Commission 2024. “Questions and answers on the Digital Services Act”, https://ec.europa.eu/commission/presscorner/detail/en/QANDA_20_2348
Karolina Placzynta
©2024 Karolina Placzynta, CC BY 4.0 https://doi.org/10.11647/OBP.0406.01
Despite the benefits of the intersectional approach to antisemitism studies, it seems to have been given little attention so far. This chapter compares the online reactions to two UK news stories, both centred around the common theme of a cultural boycott of Israel in support of the BDS movement, and both with a well-known female figure at the centre of media coverage, only one of whom identifies as Jewish. In the case of the British television presenter Rachel Riley, the person is attacked for being female as well as Jewish, with misogyny compounding the antisemitic commentary. In the case of the Irish writer Sally Rooney, misogynistic discourse is used to strengthen the message countering antisemitism. The contrastive analysis of the two datasets, with references to similar analyses of media stories centred around well-known men, illuminates the relationships between the two forms of hate, revealing that—even where the antisemitic attitudes overlap—misogynistic insults and disempowering or undermining language are being weaponised on both sides of the debate, with the additional characterisation of Riley as a “grifter” and Rooney as “naive”.
More research comparing discourses around Jewish and non-Jewish women is needed to ascertain whether this pattern is consistent; meanwhile, the many analogies in the abuse suffered by both groups can perhaps serve a useful purpose: shared struggles can foster the understanding needed to recognise the particularised prejudice. By including more than one hate ideology in the research design, intersectionality offers exciting new approaches to studies of antisemitism and, more broadly, of hate speech or discrimination.
Close and systematic monitoring of reactions to news items in the context of antisemitic discourse can over time reveal certain regularities: it can highlight which antisemitic concepts are most widespread within a language community, or point to the most common triggers for increases in the level of antisemitism (Hübscher and Von Mering 2022). In the online comment sections of UK mainstream media, such triggers tend to be news stories focusing on the State of Israel, which spark web-user debates on Israeli politics; genuine and legitimate critique of the Israeli government or its policies will then sometimes cross the line into antisemitism (Schwarz-Friesel 2020). Another type of trigger seems to be media coverage centred around a well-known figure and a statement they have made in relation to Jews or Israel, at times open to interpretation, at other times directly and unequivocally antisemitic. Whether they have made their name in the political arena, the arts or the world of show business, the controversy will inevitably attract the attention of both new and existing supporters as well as critics, resulting in a flurry of media reports about their statements and a lively discussion in the comment sections regarding the impact, seriousness and truthfulness of their words.
The framing of the public figure’s pronouncement is likely to affect web-user reactions as well. An accusation of antisemitism in the press articles themselves seems to fuel the debate further, prompting affirmation and agreement on the one hand and a proliferation of counter speech on the other (see Ascone in this volume). This chapter focuses on two case studies in which well-known figures with similar visibility, the television presenter Rachel Riley and the novelist Sally Rooney, publicly voiced their opinions on issues regarding the cultural boycott of Israel, in both cases triggering a significant amount of coverage by mainstream media in the UK, widely discussed by web users in the comment sections. The chapter compares the two cases in terms of the antisemitic hate speech found in the comment sections, but also the misogyny present in both debates. By comparing the two, it hopes to contribute to the conversation on the different hate ideologies co-existing in the same mainstream spaces, and their potential to be weaponised.
Over the past three years, the research team of the Decoding Antisemitism project analysed several discourse events centred around prominent figures and media personalities. These have included the 2021 case of the sociology lecturer Professor David Miller, who had made incendiary statements about the State of Israel,1 as well as the British left-wing politician Diane Abbott and the US musician Ye (formerly known as Kanye West), both of whom have been accused of antisemitism on separate occasions—based on their comments about Jews in, respectively, her letter to the British weekly The Observer and his social media posts. Outside of the UK, similar news stories in recent years have involved the French comedian Dieudonné M’Bala M’Bala2 and the German politician Hans-Georg Maaßen; all these events have provoked lively debates in the comments under the media posts on the topic in the respective countries. Such a focus on a recognisable public figure makes the conversation more appealing to both the media and the public, and the figure’s actions provide a specific trigger for the discussions on antisemitism. Antisemitic ideology can then be pinned onto a particular individual rather than discussed in the abstract, allowing the media and the comment boxes to sidestep the difficulty of elucidating the long and rich history of antisemitism, its complexity and illogicality, and its ever-changing guises which often depend on their temporal, geographical or cultural context. It is perhaps easier for public opinion to focus the discussion instead on one person’s biography and the various aspects of their professional or private identity, using them as arguments or counter-arguments. The public figure is thus collectively dissected, and a narrative is built around them.
Studying such events purely from the point of view of the hallmarks of antisemitism and its specific stereotypes, analogies or strategies undoubtedly helps construct a good overview of the overarching patterns of antisemitic discourse. However, taking into consideration other hate ideologies as well can provide further insights, particularly into the specific abuse suffered by various groups in connection with not just their Jewish identity, but also their gender, sexual orientation, skin colour, ethnicity, age or disability. In recent years, several public figures in the UK have been vocal about the particular type of hate speech they have been targets of as Jewish women, including the politicians Luciana Berger, Ruth Smeeth and Margaret Hodge, and the actor and writer Tracy-Ann Oberman. On the other hand, looking at more than one hate ideology in the analysis of antisemitic discourse can also show how one can be instrumentalised in the fight against another: many comments countering antisemitism contain misogyny, racism, or anti-Muslim sentiment, which become an unwelcome feature of counter speech and create more and stronger divides instead of educating or fostering understanding. The many comments denouncing Diane Abbott’s letter to The Observer in April of 2023, in which she seemed to relativise and downplay the seriousness of contemporary antisemitism (Scheiber 2024), attacked not just the accuracy of her statement or her professional competence as a politician and a Member of Parliament, but also her race, gender and age.3 Outrage against Kanye West’s antisemitic social media posts and claims made in an interview was at times expressed through anti-Black discourse in comment sections and through derision of his mental health diagnosis (Chapelan et al. 2023). In commentary on the ongoing events of the Arab-Israeli conflict, counter speech comments made by web users regularly rehash Islamophobic narratives. In other words, the specific identity (real or perceived) of a person or people at the receiving end of the criticism, even when the actual criticism is due, is unfairly instrumentalised against them in ad hominem attacks. Studying the interactions of the various hate speech ideologies, their possible correlations, and their contextual or universal specificities yields a fuller picture of online hate speech.
Despite such clear indications of the benefits of this intersectional approach to the study of antisemitic hate speech, as well as counter speech—an approach which recognises that a person or group can experience discrimination, marginalisation or oppression in a distinct way, depending on the specific aspects of their individual identity (Cho et al. 2013, Thomas et al. 2023)—it seems to have been given little attention so far: “global antisemitism is only rarely included in intersectional theory, and Jews are often excluded from feminist anti-racist social movements that claim to be guided by intersectionality” (Stögner 2020). Its application in the field of antisemitism studies, or more specifically in the study of the structure of antisemitic speech online, could result in new, illuminating and more particularised findings, steering away from dichotomy and towards a more comprehensive and nuanced view of both the antisemitic discourse and counter-antisemitic narratives.
One pairing of hate ideologies that seem frequently to intersect or interact in online discourses is that of antisemitism and misogyny. Misogyny—a contemptuous view of women—and sexism, an unequal view of the genders, are extremely widespread and hardly need an introduction; sexist and misogynistic discourses have been amply studied (Vickery and Everbach 2018, Cameron 2020), also in contemporary online spaces (Jane 2014, Ging and Siapera 2018, KhosraviNik and Esposito 2018), sometimes including the specific types of abuse encountered by trans women or queer women (Jane 2016: 70–71). While men are, of course, also targeted by hate speech or ‘cancelled’ (that is, strongly criticised and ostracised), prominent female figures seem to bear the brunt of more frequent, and more violent, hate speech, including more death or rape threats; increased visibility can arguably increase the amount of hate speech they receive, and a positive public image does not immunise them from public opinion quickly turning on them.4
There is a considerable amount of literature on the specificities of historical gender-based antisemitic prejudice. Both male and female Jews have been presented at various times throughout history as sexually deviant and therefore reprehensible, depraved and abnormal (Drake 2013), feeding into the more general, classic antisemitic stereotypes of monstrosity and repulsiveness, both moral and physical. However, Jewish men have also been portrayed as emasculated and weak (Pellegrini 1997, Schüler-Springorum 2018), and Jewish women as deceitful and witch-like. These stereotypes find their way into later cultural, literary and cinematic tropes which dilute the message and are therefore not immediately recognisable as negative at their root, such as the nineteenth-century “belle juive”—seductive and tragic (Rindisbacher 2018), the contemporary “nice Jewish boy”—gentle and respectful, the “Jewish American Princess”—somewhat spoilt and materialistic, a play on capitalistic greed (Keiles 2018), and the “Jewish mother”—overbearing and pushy (Ravits 2000, Abrams 2012: 47–48). The latter, present-day tropes are often reflected in pop culture, particularly in the films and television series created in the United States; pop culture sustains them via such acceptable, light-hearted iterations and contributes to spreading them ever further.5
Expressions of gender-based antisemitic stereotypes found in the comment sections of UK mainstream media, especially once the content has passed through human or automated moderation, are likely to be similarly watered down and therefore deemed innocuous and inoffensive, or at least palatable. Likewise, the moderation will have removed the most extreme forms of anti-feminism and misogyny, such as pro-rape comments found, for instance, in the discourse of the antisemitic far right; such discourse is expressed more freely in unmoderated spaces, including group chats on the messaging app Telegram, where it “actively promotes sexual violence as a political weapon” against women as well as the LGBT+ community (Lawrence, Simhony-Philpott and Stone 2021). Nevertheless, even casual expressions of a hate ideology, as inconsequential as they may seem in isolation, have the potential to harm their targets and normalise the prejudice, for both the targets and anyone who comes across them. While very explicit hate speech can alienate a mainstream media reader, regular exposure to casually expressed antisemitism can lead them to, for example, accept outbreaks of violence against Israeli civilians as understandable. By the same token, institutional sexism and misogyny have been cited as an obstacle to investigating rape accusations made by women against men (Casey 2023). Often, the power of antisemitic or misogynistic statements lies not in their individual shock value, but in their sheer repetition, accumulation and acceptability; while one comment or image might not radicalise a reader, their continued and persistent presence could move the boundary of what is acceptable to say and do ever further (Oboler 2021).
In early 2019, mainstream news outlets in the UK reported that the next Eurovision Song Contest would take place in May of that year in the Israeli city of Tel Aviv. Soon after the announcement, at the end of January, around 50 British artists and celebrities signed an open letter calling on the British Broadcasting Corporation (BBC) to petition the Eurovision organisers (the European Broadcasting Union) to move the event to a different location, in order to express opposition to Israel’s policies and actions in relation to Palestine. The letter stated that “Eurovision may be light entertainment, but it is not exempt from human rights considerations—and we cannot ignore Israel’s systematic violation of Palestinian human rights”, in effect calling for a cultural boycott of Israel (The Guardian 2019); the signatories included fashion designer Vivienne Westwood, actor Maxine Peake and musician Roger Waters. The letter followed an earlier, similar campaign organised by the Boycott, Divestment and Sanctions (BDS) movement in September 2018, which had been supported by numerous artists from across Europe. BDS, a Palestinian-led initiative which aims to put pressure on Israel through economic, cultural and political measures, is modelled on the anti-apartheid boycott actions aimed at South African policies in the second half of the twentieth century (Barghouti 2011).
The appeal prompted a response from other figures within the UK entertainment, arts and culture industry. In a second open letter, made public in April 2019, they opposed the boycott, arguing that “while we all may have differing opinions on the Israeli-Palestinian conflict and the best path to peace, we all agree that a cultural boycott is not the answer”, and calling Eurovision a “unifying event […] crucial to help bridge our cultural divides and bring people of all backgrounds together” (Creative Community for Peace 2019). Although the second letter was signed by over a hundred members of the industry, most media reports on the topic mentioned only a handful of the best-known names in either the article headlines or the body text. These frequently included Rachel Riley, a popular television show presenter who had spoken publicly about antisemitism in the UK, notably in relation to the antisemitism allegations in the Labour party. Riley has also spoken on various occasions about being a target of antisemitism and misogyny as a Jewish woman; taking a stance on the issue of a cultural boycott of Israel in the context of a popular entertainment event made her vulnerable to such attacks in the comment sections of mainstream news outlets. Was the discourse used against her different from the attacks on other women? Would a comparison of two case studies—one focusing on Riley, and the other on a non-Jewish woman with similar visibility who has spoken publicly on a similar topic—reveal parallels or differences?
In an effort to answer these questions, a sample of web-user reactions to the 2019 cultural boycott case was compared with a similar sample of responses to an event from October 2021, when the best-selling Irish novelist Sally Rooney announced her decision not to grant translation rights for her recently released third novel to an Israeli-based publishing house (BBC News 2021). Rooney explained her decision by reference to her support for the BDS movement; her announcement was widely reported by the mainstream media in the UK across the political spectrum, and it prompted many web users to comment under the media posts (Ascone et al. 2022). While many comments agreed with Rooney’s stance, similarly aligning themselves with the idea of a cultural boycott of Israel or expressing direct approval of BDS, others criticised her decision. Often, the criticism did not stop at her words but extended to the person herself—her supposed political sympathies, for example—and, on occasion, became a xenophobic attack on her Irish origins or misogynistic abuse based on her gender.
Although the two cases are two and a half years apart, there are significant parallels between them (see Fig. 1.1). Both central figures, Rachel Riley and Sally Rooney, are young white women who have become famous in the UK through their professional activity in the British entertainment, arts and culture industry: Rooney as a popular and acclaimed novelist, and Riley as a successful television presenter and, later, also an author. At the time of the media reports, they were of a similar age; ageism is often an element of misogynistic or sexist discourses and is therefore a potentially relevant factor in this analysis. Both women have used their professional recognition as a platform to make a political statement on a similar issue, albeit on opposing sides. Of the two, however, only Riley identifies as Jewish.
The issue on which both have publicly expressed their views, in the context of this analysis, is the idea of boycotting the State of Israel through mainstream cultural output, on both occasions in connection with the broader BDS movement. In both instances, the mainstream media coverage of their stance sparked a lively debate in the comment sections of UK news outlets. In each of the two cases, the basis for analysis was a dataset built from eight online comment threads, taken from the comment sections of a range of UK mainstream media websites and their official social media accounts (Fig. 1.2). Each of these threads was the source of a 200-comment sample, totalling 1,600 user comments per case.6
Riley dataset
Central figure: popular British television show presenter in her early 30s, white, female, Jewish
Stance: opposing cultural boycott of Israel (as reported in UK media in 2019)
Common themes in dataset: cultural boycott of Israel, the BDS movement, apartheid analogy
Rooney dataset
Central figure: popular Irish novelist aged 30, white, female, not Jewish
Stance: supporting cultural boycott of Israel (as reported in UK media in 2021)
Common themes in dataset: cultural boycott of Israel, the BDS movement, apartheid analogy
Figure 1.1: An overview of the case studies.
The methodological framework applied to the two datasets comes from the Decoding Antisemitism project, whose aim is to study the contemporary presence of antisemitic hate speech in the (politically moderate) mainstream in all its forms, including the implicit expressions which, due to their hidden or unfixed nature, evade immediate detection and therefore pass through moderation, over time contributing to the normalisation of antisemitic attitudes online. The project analyses three language communities, the UK, Germany and France, looking both for the universals in their online antisemitic discourses and for their specificities in terms of frequency, triggers, and linguistic formats and patterns, bringing into focus the discourse and its potential impact rather than the identity of commenters or the intentionality of their statements.
The analysis presented in this chapter uses the project’s approach to data collection and the same classification system. The online comment threads used to build the two datasets were first systematically collected using a custom crawling tool, based on selected keywords and a specific date range, and downloaded in a text format retaining the comment thread structure. The threads were then organised into a corpus balanced in terms of the representation of mainstream news outlets and their political alignment. Each of the longest comment threads in the corpus was sampled by selecting its first 200 comments, and the samples were manually analysed with two research tools. The first of these was the content analysis software MAXQDA, which allows the researcher to annotate textual and visual content. The second was a classification system developed by the research project team on the basis of classic and modern antisemitic concepts—both deductively and inductively, as the initial project analyses revealed further patterns in the examined data. In addition to detailed and precisely defined conceptual categories and sub-categories, the classification system also allows the content to be analysed in terms of the linguistic structures and devices present in a comment, with categories such as irony, rhetorical questions, wordplay, and more.
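To illustrate the sampling step, the sketch below shows one hypothetical way in which the first 200 comments of each crawled thread could be extracted while preserving thread membership. It is not the project’s actual crawling tool or MAXQDA workflow; the file layout, field names and paths are assumptions made purely for illustration.

```python
# Hypothetical sketch of the per-thread sampling step described above.
# Assumes each crawled thread has been saved as a JSON file containing a list
# of comments in posting order; all names and fields are illustrative only.
import json
from pathlib import Path

SAMPLE_SIZE = 200  # the first 200 comments per thread, as in the chapter


def sample_thread(path: Path, sample_size: int = SAMPLE_SIZE) -> list[dict]:
    """Return the first `sample_size` comments of one crawled thread."""
    with path.open(encoding="utf-8") as f:
        comments = json.load(f)  # e.g. [{"id": ..., "text": ..., "parent_id": ...}, ...]
    return comments[:sample_size]


def build_case_sample(thread_dir: Path) -> dict[str, list[dict]]:
    """Sample every thread in a case's folder (e.g. 8 threads of 200 comments each)."""
    return {p.stem: sample_thread(p) for p in sorted(thread_dir.glob("*.json"))}


if __name__ == "__main__":
    sample = build_case_sample(Path("data/riley_threads"))  # hypothetical path
    total = sum(len(comments) for comments in sample.values())
    print(f"{len(sample)} threads, {total} sampled comments")
```

Applied to eight threads per case, such a sampling step yields the 1,600 comments per dataset summarised in Fig. 1.2.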
While the classification system used in the project makes it possible to analyse antisemitic content in minute detail, it does not currently capture misogynistic ideology at the same level of granularity. For the purposes of analysing the two datasets, the above-mentioned inductive approach was therefore applied in order to identify the specifics of the misogynistic discourse they contained, with reference to the existing literature on such discourse. The Sally Rooney corpus had first been analysed by the research team in a report published in October 2022; part of this dataset (preserving the balance of sources) was reanalysed from the point of view of misogynistic hate speech for this chapter. The Rachel Riley dataset, meanwhile, was collected and analysed in terms of both antisemitic and misogynistic content expressly for the purposes of this comparison.
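As a purely illustrative aid, and not a description of the MAXQDA workflow itself, the following sketch shows how per-category shares such as those reported below could be computed once the annotated comments have been exported to a simple table; the CSV layout and column names are assumptions.

```python
# Hypothetical sketch: computing category shares from an exported annotation table.
# Assumes a CSV with one row per sampled comment and a semicolon-separated
# "codes" column listing the categories assigned to it; the format is assumed.
import csv
from collections import Counter


def category_shares(annotations_csv: str) -> dict[str, float]:
    """Return the share of comments carrying each code (a comment may carry several)."""
    counts = Counter()
    total = 0
    with open(annotations_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            for code in row["codes"].split(";"):
                code = code.strip()
                if code:
                    counts[code] += 1
    return {code: n / total for code, n in counts.items()} if total else {}


# Example: category_shares("riley_annotations.csv").get("antisemitic")
# would give the proportion of comments annotated as antisemitic.
```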
Riley dataset
1,600 comments across 8 comment threads
Data sources: Facebook pages of The Independent, The Guardian, The Metro, The Spectator, Evening Standard, The Daily Mail
Rooney dataset
1,600 comments across 8 comment threads
Data sources: Facebook pages of The Independent, The Guardian, The Times, The Spectator, The Telegraph, and The Daily Mail website
Figure 1.2: An overview of the datasets.
The in-depth empirical analysis of the two datasets has uncovered many similarities, not least in the proportion of antisemitic comments they contain, as well as in the types of stereotypes, analogies and strategies used by commenters to convey antisemitic attitudes. The average share of antisemitic comments, both explicit and implicit, reached just over 11% in both corpora—a finding not dissimilar to the typical percentage revealed in regular analyses of similar datasets in the Decoding Antisemitism project. The antisemitic comments typically revolved around the same themes and triggers; that is, Israeli politics in the context of the Middle East conflict, including frequent comparisons of Israel to an apartheid state, and support for the BDS movement.
Riley dataset:
(1) Boycott the Fcuking <a></a>Izrahells so that they learn they are not gods chosen. (SPECT-FB[20190506])
(2) […] WE NEED TO BOYCOTT ISRAELI GOODS, CULTURAL EVENTS ETC PLEASE BOYCOTT TO PUT PRESSURE ON THE RACIST STATE OF ISRAEL (INDEP-FB[20190430])
Rooney dataset:
(3) do your own research. I’m defending her decision to support the boycott. (TIMES-FB[20211012])
(4) She should boycott Hebrew altogether. Modern Hebrew was invented as part of the Zionist project. (TIMES[20211012])
In both datasets, many web users took to the comment sections simply to express respect, support or admiration for the cultural boycott of Israel, often calling on others to do the same, as in (1), (2) and (4). While some comments, such as (3), simply affirmed the antisemitism (in 9% of antisemitic comments in the Riley dataset and 8% in the Rooney dataset), the support was often accompanied by, or argued through, the attribution of further problematic concepts to Israel. In (1), the commenter hinted at two such antisemitic stereotypes: first, the idea of a supposedly evil Jewish nature, expressed in the pun “Izrahells” (Bolton 2024b); second, a reference to the “chosen one” trope, signalling the commenter’s disapproval of the alleged privilege enjoyed by the Jewish state (Placzynta 2024b). In (2),