• Psychopolitics of AI: From Fetishism to Brutalism

    This text was originally published in the Journal of Sublation Magazine and republished in Italian by MagIA.
    (August 2024)

    Fetishism

    In his book “The Sublime Object of Ideology”, philosopher Slavoj Žižek unveils two distinct and seemingly incompatible modes of fetishism: commodity fetishism, which occurs in capitalist societies, and a fetishism of ‘relations between men,’ characteristic of pre-capitalist societies. Žižek’s analysis can be extended to incorporate a third—and most recent—mode of fetishism, namely, AI fetishism. We can locate AI fetishism as an ultimate symptom of Achievement Society, which, according to philosopher Byung-Chul Han, is ‘a society of fitness studios, office towers, banks, airports, shopping malls, and genetic laboratories.’ 


    Everything, everywhere, all at once—the achievement society found itself in the midst of technological turbulence. A turbulence that leads to nausea for some and to esoterism for others. AI is everywhere: from smart home devices to algorithm-driven content recommendation systems; from biometric security measures to online shopping platforms; from finance and healthcare to public governance and mental health. (And, even in love too).

    AI is libidinal technology. It has mobilised all the energies—emotions, desires, passions, affections—of the corporate class, mainstream media, international institutions and governments. (And eugenics too). As a libidinal technology, we are told, AI will solve some of the world’s most pressing problems, create new inventions for humanity, and accelerate the economy, productivity, and well-being. It has evaporated some of the world’s harshest and most complex problems, melting them down into libidinal desiring-technology. Tellingly, AI produces a technological gaze, transforming human subjects into data points and algorithmic predictions, stripping away the complexity of human experiences and subjectivities, and reducing individuals to mere objects within a technological framework.

    Photo courtesy of Ottavio Tomasini.

    AI is a regime of perversion. For Lacan, what defines a pervert is not merely their actions but their occupation of a particular structural position in relation to the Other. The relationship of the governing and corporate classes with AI is perverse. Despite the risks, damages, and proven harms, AI continues to be promoted as inevitable. The narrative of inevitability is part of the ‘mechanism of disavowal’—a reaction to castration (recognised and ignored at the same time). This is exemplified in a tweet by European Commissioner Thierry Breton who, while celebrating the approval of the EU AI Act, wrote: “We are regulating as little as possible — but as much as needed!” Isn’t this tweet an embodiment of the disavowal mechanism? In other words, we want to regulate AI as much as needed (so to speak, to ‘castrate’ it), but as little as possible (hence the ‘disavowal’ as a reaction to castration).

    AI represents a technological fetishism. Just as a fetish serves as a substitute for the maternal phallus following the boy’s perception of the female genital, as described by Freud, AI today serves as a substitute—in the form of a technological fix—for problems that are structurally political. Artificial neural networks (ANNs), one of the most dynamic aspects of AI and the technology powering many of today’s AI systems, are devoid of usability without human labour. They remain infinite layers of interconnected nodes, or ‘neurons’, carrying empty weights in search of purpose. It is through human labour — sometimes provided for free, sometimes exploited in the peripheries of capital, or simply captured/stolen by private tech companies — involving data and knowledge production, labelling, annotating, training, and cleaning, that ANNs gain their use-value. As the network learns from data during the training process, the neurons’ weights adjust, transforming the network into a tool of material use.

    The Silicon Valley lords have finally found their fetish, which they want to sell to everyone as a magical and supernatural power. AI – and particularly the competition to build Artificial General Intelligence (AGI), a transhuman intelligence – has become the ultra-expensive fetish-toy and ritual pursued by the world’s richest people. This is evidenced by recent reports that OpenAI’s chief scientist, Ilya Sutskever, established himself as an esoteric “spiritual leader” cheering on the company’s efforts to realise AGI, with collective chants of “Feel AGI! Feel AGI!”. While the new AI sect of Silicon Valley reaps the fruits of its fetish, this technological fetishism was built upon theft and exploitation.
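    To make the point about ‘empty weights’ concrete, here is a minimal, purely illustrative sketch (the data and the tiny network are invented for this example, not taken from any real system): an untrained network is nothing but randomly initialised numbers, and only by fitting human-labelled data, through standard gradient-descent training, do its weights come to encode anything of use.

    ```python
    # Toy illustration: random ("empty") weights acquire use-value only by
    # fitting human-labelled data. Dataset and network are invented.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical human-labelled data: 2 features -> binary label.
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

    # "Empty" weights: random initialisation, no purpose yet.
    W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros((1, 8))
    W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros((1, 1))

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    for step in range(2000):                      # training: weights adjust to fit the labels
        h = np.tanh(X @ W1 + b1)                  # hidden layer
        p = sigmoid(h @ W2 + b2)                  # predicted probability
        # gradients of the binary cross-entropy loss, propagated backwards
        dz2 = (p - y) / len(X)
        dW2, db2 = h.T @ dz2, dz2.sum(0, keepdims=True)
        dh = dz2 @ W2.T * (1 - h**2)
        dW1, db1 = X.T @ dh, dh.sum(0, keepdims=True)
        for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            param -= 0.5 * grad                   # gradient-descent step

    accuracy = ((p > 0.5) == y).mean()
    print(f"accuracy after training on labelled data: {accuracy:.2f}")
    ```

    Without the labels y, produced by human annotators, the loop above has nothing to minimise; the weights stay arbitrary.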

    Theftploitation

    AI is built upon theft and exploitation. The convergence of theft and exploitation into the techno-political apparatus of AI is what I call theftploitation. Theftploitation names the new, invisible mechanisms upon which AI is developed and maintained. All forms of exploitation that once were visible and solid in the disciplinary society have now dissolved into mathematical optimisation and statistical correlations.

    Let’s consider for a moment how the paradigmatic dataset known as ImageNet, which has enabled some of the best results in the development of ANNs, was built. Comprising more than 14 million labelled images, each tagged and categorised into more than 20,000 groups, ImageNet’s creation was made possible by the efforts of thousands of anonymous workers recruited through Amazon’s Mechanical Turk platform. These invisible workers, who made ImageNet possible, were paid for each task they completed, sometimes receiving as little as a few cents. It is in the undemocratic relationship of domination between Centre and Periphery, which dates back to the colonial and slavery periods, that we can locate the first mechanism of exploitation. With AI, plantations have dissolved into data-centres, slaves into ghost workers, masters into invisible algorithmic managers. All that was solid – capital, exploitation, and domination – has melted into the opaque techno-political apparatus of AI.

    Photo courtesy of Ottavio Tomasini.

    Many of the AI-powered services that we enjoy daily in the West—such as Google Maps, Airbnb, TripAdvisor, Uber, etc.—are maintained by the labour of thousands of invisible workers who are exploited on a daily basis. For most ordinary people, these services appear as simple conveniences, devoid of any underlying discord or exploitation. Here, we are confronted with what Lacan calls the Imaginary order: a world perceived as one of ease and connectivity, efficiency and productivity, mirroring a specular image of self-satisfaction that is devoid of the Real. Thus, in our daily interactions with these services, we engage not with the complex and often disturbing socio-economic realities—the Real—but with the Imaginary and Symbolic orders, which structure our psyche in a manageable way; in other words, they make the consumption of services feel effortless and friendly. However, according to Lacan, the Real must be apprehended in its experience of rupture, between perception and consciousness. The Real thus appears as a force of amorphous disruption which horrifies and disturbs, brutalises and overwhelms. But, instead of being perceived as a (political, economic) rupture, contemporary capitalism has facilitated the depoliticisation of the public sphere and the creation of a post-political subject—a subject who, in the Lacanian sense of jouissance (a deadly excess over pleasure), enjoys mobility and flexibility under the banner of ‘digital nomad’, ‘smart working’, and ‘flexible work’. This new class of workers, what Franco Berardi called the Cognitariat, is no longer disciplined by the Eye of the Master but is instead self-disciplined; it does not suffer from physical exhaustion but from mental burnout; and old-fashioned collective subjectivity is being replaced by the ‘multitasking’ of hedonistic individuality.

    We can thus argue that the second type of exploitation is the self-exploitation of the achievement-subject who, according to Byung-Chul Han, ‘deems itself free, but in reality it is a slave’. Even though no master forces the achievement-subject to work, writes Han, he or she exploits themselves nonetheless. The jouissance of the cognitariat is nothing else but what Lacan described as ‘perverted surplus-enjoyment’. Samo Tomšić’s interpretation of Marx and Lacan is remarkable in articulating the psyche’s mechanisms at play within capitalism, where ‘the capitalist regime demands from everyone to become ideal masochists and the actual message of the superego’s induction is: “Enjoy your suffering, enjoy capitalism”’.

    Finally, we come to the third and last mechanism of exploitation, which takes the form of theft. In a PR debacle, OpenAI’s CTO, Mira Murati, was caught short of an answer when asked by a journalist what data had been used to train Sora, OpenAI’s text-to-video generator. Initially, she responded by stating that they had used ‘publicly available data and licensed data.’ However, when the journalist further inquired whether they had used YouTube videos, Murati faltered: ‘I am not actually sure about that.’ It has now become an open secret that tech giants have altered their own rules to train their AI systems by exploiting, capturing, appropriating and monetising what Marx called the ‘general intellect’—the collective knowledge produced and published online in all its forms. Philosopher Slavoj Žižek warned over a decade ago that:

    the possibility of the privatisation of the general intellect was something Marx never envisaged in his writings about capitalism […]. Yet this is at the core of today’s struggles over intellectual property: as the role of the general intellect – based on collective knowledge and social co-operation – increases in post-industrial capitalism, so wealth accumulates out of all proportion to the labour expended in its production. The result is not, as Marx seems to have expected, the self-dissolution of capitalism, but the gradual transformation of the profit generated by the exploitation of labour into rent appropriated through the privatisation of knowledge.

    AI represents a theft of global proportions. Having captured the creative work of artists, musicians and photographers, and the intellectual work of journalists, academics and scientists—not to mention the millions of volunteer hours spent worldwide by people contributing to platforms like Wikipedia—AI now presents itself as the future of humanity, albeit with a hefty price tag. Significantly, AI’s reach extends to natural resources as well. A recent publication by Estampa, a collective of programmers, filmmakers and researchers, illustrates how generative AI perpetuates the colonial matrix of power. It steals knowledge, exploits underpaid labour, and extracts natural resources such as land and water to sustain data centres, as well as raw materials required for the production of semiconductor chips, which are essential to enhance computing power and accelerate machine learning workloads.

    Despite the large number of lawsuits against the major tech companies that have launched AI products, and the emergence of new forms of resistance such as data centre activism, there remains an open question: Will the AI industry be perceived as ‘too big to fail,’ similar to how banks were viewed during the 2008 financial crisis, and thus be preserved by existing judicial-political structures? If that is the case, who will pay the heaviest price? 

    We are at a pivotal moment where social, political, and economic arrangements are poised to enter into what Žižek called a ‘masochist contract’ with the big tech companies that own some of the main AI technologies and are taking over key public sectors such as education, healthcare, public administration, and so on. In the masochist contract, Žižek writes, it is not the Master-capitalist who pays the worker (to extract surplus value from him or her), but rather the victim who pays the Master-capitalist (in order for the Master to stage the performance which produces surplus-enjoyment in the victim). Consequently, we are going to witness the intensification of the politics of precarisation and austerity, exclusion and eviction, separation and marginalisation. We are going to see politics of brutalism administered by AI on behalf of two political rationalities: the radical centre and the far-right.

    Brutalism

    Brutalism, in architectural thought and practice, is understood as a style characterised by raw, cold, exposed concrete and bold geometric forms, its name often traced to the French term ‘béton brut,’ meaning raw concrete. Brutalism perhaps represents what architect Alejandro Zaera-Polo defines as the physicalization of the political. Emerging in the post-World War II period, and with a strong presence in Eastern Europe, brutalist architecture has been associated with socialist utopian ideas.

    Invoking the political aesthetics of brutalist architecture, philosopher and social critic Achille Mbembe situates brutalism ‘at the point of juncture of materials, the immaterial, and corporeality’. For Mbembe, brutalism is ‘the process through which power as a geomorphic force is constituted, expressed, reconfigured, and reproduced through acts of fracturing and fissuring’, and its ultimate project is to ‘transform humanity into matter and energy’.

    AI represents the ultimate form of the brutalist project. AI’s brutalism depends on the three elements identified by Mbembe: materials, the immaterial, and corporeality. AI’s dependence on materiality primarily revolves around the physical infrastructure required to develop, power, and maintain the computing capacity of AI systems, including raw materials for hardware such as central, graphics, neural, and tensor processing units. Cloud infrastructure, as immaterial as it may sound, is not only material but also an ecological force, with a carbon footprint greater than that of the airline industry. All that is solid, immaterial, and corporeal is melted into data-centres—these gigantic techno-architectural sites where the physicalization of immateriality and corporeality is processed, and where social, political and economic relations are automatised. The immaterial aspects of AI are also crucial for its functioning. Some of the primary elements of human existence—our preferences, behaviours, interactions, desires, needs—are captured and used to feed ANNs. In return, we are presented with speculative offers for ‘personalised services’, in line with the political rationality of neoliberalism; and with ‘predictions’, in line with the political rationality of eugenics and the far-right. As far as corporeality is concerned, AI today serves as a dispositif of power, to which both political rationalities, the radical centre and the radical right, are delegating their own dirty jobs.

    A recent investigation by the Israeli outlet +972 Magazine exemplifies AI brutalism—interactions among sufferers are classified as potential suspects with devastating material and corporeal effects. As one source stated in the investigation, ‘human personnel devote only about “20 seconds” to each target before authorising a bombing’. To put it in more Adornian terms, AI brutalism is the new administration method for violence, expulsion and eviction of the world’s most vulnerable groups. Even the newly-approved and highly-praised EU AI Act has been criticised by human rights organisations for failing to properly ban some of the most dangerous uses of AI, including biometric mass surveillance and predictive policing systems, as well as for creating a separate regime for people migrating, seeking refuge, and/or living undocumented.

    Photo courtesy of Ottavio Tomasini.

    AI is the most contemporary form of this brutalism which, according to Mbembe, ‘does not function without the political economy of bodies’. Fanon, on the other hand, insightfully observes that ‘the economic substructure is also the superstructure; you are rich because you are white, you are white because you are rich.’ Racism is therefore never casual, Fanon reminds us; all forms of racism are underpinned by a structure. This structure serves what Fanon called a gigantic work of economic and biological subjugation (one can notice how we are again dealing with materiality and corporeality). A Fanonian reading of the EU AI Act suggests that AI obfuscates this structure by making it invisible, while simultaneously subjecting migrants and refugees to violent biopolitical tattooing practices such as electronic fingerprinting and retinal scanning, biometric voice recognition, and AI-powered lie detectors. AI thus becomes the Master-administrator of the borders as ‘ontological devices, namely, it now functions by itself and in itself, anonymously and impersonally, with its own laws’ (Mbembe 2024).

    The borderization of Europe is not only marked by brutalist concrete walls and brutalist technologies such as AI; its core ideological tenets lie in the political rationalities of the radical centre and the far-right. The first sees bodies as “excesses” for extraction and monetisation (i.e. a form of surplus-life), while the latter sees them as “excesses” for expulsion and eviction (the UK’s policy of mass-deporting asylum seekers to Rwanda, and Italy’s deal to outsource refugee processing to Albania, are two recent examples). The first envisages an AI-administered global technocratic regime, akin to the one echoed by the Tony Blair Institute, whereas the latter has already advanced in its hunt for biometrics. The first is the cause; the second is the consequence. We can think of the radical centre and the far-right in a similar fashion to how generative adversarial networks (GANs), an adversarial machine-learning technique, function: one network attempts to fool another into believing that the data it generates actually originates from the training dataset. This approach has been used, for example, to create synthetic images of artworks and human faces that an image recognition system cannot distinguish from real images. Similarly, one political rationality tries to fool the other into believing that the politics and policies it generates reflect the genuine demands and needs of the populace. What in reality we get from both political rationalities are indistinguishably brutalist forms of techno-politics.
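    For readers unfamiliar with the technique, here is a minimal, toy sketch of the adversarial game the GAN analogy rests on; everything in it (the one-dimensional ‘real’ data, the linear generator, the learning rate) is invented for illustration. A generator produces samples, a discriminator tries to tell them apart from the training data, and each is updated against the other until the generated samples become hard to distinguish from the real ones.

    ```python
    # Toy 1-D GAN sketch: generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
    # All numbers are invented; with a discriminator this simple, the generator
    # mainly learns to match the mean of the real data.
    import numpy as np

    rng = np.random.default_rng(1)

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    a, b = 1.0, 0.0          # generator parameters ("fake" data source)
    w, c = 0.1, 0.0          # discriminator parameters ("real vs fake" judge)
    lr, batch = 0.05, 64

    for step in range(5000):
        real = rng.normal(4.0, 1.25, batch)          # samples from the training data
        z = rng.normal(size=batch)
        fake = a * z + b                             # generator tries to imitate them

        # discriminator step: push D(real) up and D(fake) down
        d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
        grad_w = (-(1 - d_real) * real + d_fake * fake).mean()
        grad_c = (-(1 - d_real) + d_fake).mean()
        w, c = w - lr * grad_w, c - lr * grad_c

        # generator step: fool the discriminator (non-saturating loss -log D(fake))
        d_fake = sigmoid(w * fake + c)
        grad_fake = -(1 - d_fake) * w                # d(-log D(fake)) / d(fake)
        grad_a, grad_b = (grad_fake * z).mean(), grad_fake.mean()
        a, b = a - lr * grad_a, b - lr * grad_b

    print("real data mean: 4.00")
    print(f"generated mean: {b:.2f}")               # G(z) = a*z + b has mean b
    ```

    The two networks in the essay’s analogy play exactly this game: each trains itself against the other, and what emerges is judged only by whether it passes as ‘real’.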

    In the face of this dose of pessimism and deadlock, one should reorient oneself in line with Fanon’s concluding sentence in his essay “Concerning Violence”: ‘The European people must first decide to wake up and shake themselves, use their brains, and stop playing the stupid game of the Sleeping Beauty.’


  • Artificial Intelligence is the Other of human

    This text was originally published in the Journal of European Alternatives.
    (April 2024)

    Artificial Intelligence (AI) is everywhere: from autonomous vehicles to digital assistants and chatbots; from facial/voice/emotion recognition technologies to social media; from banking, healthcare, and education to public services. (And, even in love too). 

    However, AI doesn’t come from nowhere. We are living in a moment of polycrisis. New wars, conflicts and military coups are emerging on almost every continent, with a quarter of humanity involved in 55 conflicts worldwide, as recently stated by the United Nations human rights chief, Volker Türk. The escalation and increase in natural disasters caused by climate change made 2023 the warmest year on record. The Covid-19 pandemic led to a severe global recession, the effects of which are still being felt today, especially by poorer social classes. Finally, we are witnessing the rise of far-right politics, which is increasingly taking control of governments in Europe and beyond.

    The times we are living in are post-normal times. Ziauddin Sardar, a British-Pakistani scholar who developed the concept, describes it as an “[…] in-between period where old orthodoxies are dying, new ones have yet to be born, and very few things seem to make sense.” According to Sardar, post-normal times are marked by ‘chaos, complexity and contradictions’.

    It is precisely here that we should locate the mainstreaming of AI, both in public discourse and in practical implementation in everyday life. In the wake of the 2008 global financial crisis, we observed the rise of a ‘new spirit of capitalism’ characterised by a ‘regime of austerity.’ Presently, AI embodies the new knowledge regime that intensifies and amplifies the effects of austerity policies, while being obfuscated and presented as ‘neutral science’. On one hand, the private sector praises AI for increasing efficiency, objectivity, and personalization of services, and for reducing bias and discrimination. On the other hand, public institutions in general – and public administrations (the bureaucracy) in particular – are, more and more, being attracted to AI technologies and their promise of optimisations that make it possible to do more with fewer resources.

    However, what AI promises, as we will see next, is an oversimplified vision of society reduced to statistical numbers and pattern recognition. AI is thus the ultimate symptom of the ‘Achievement Society’.

    AI as a new regime of Achievement Society

    In his book “The Burnout Society”, philosopher Byung-Chul Han developed the concept of the ‘achievement society’. He writes: “Today’s society is no longer Foucault’s disciplinary world of hospitals, madhouses, prisons, barracks, and factories. It has long been replaced by another regime, namely a society of fitness studios, office towers, banks, airports, shopping malls, and genetic laboratories. Twenty-first-century society is no longer a disciplinary society, but rather an achievement society.”

    While disciplinary society, continues Han, was inhabited by ‘obedience-subjects’, achievement society is inhabited by ‘achievement-subjects’. We can take this further by stating that in the disciplinary society, those who did not obey were deemed ‘disobedient’; in the achievement society, those who do not conform to its norms are labelled as ‘losers’. Ultimately, the ideological imperative, according to Han, that guides the achievement society is the unlimited Can. 

    The ideological imperative of unlimited Can lies at the core of the AI regime. How so? Firstly, it relates to AI’s insatiable need for data. AI technologies require vast amounts of data to train their models. But not any data is good data. The data must be collected, categorised, labelled, ranked, and, in some instances, scored. This is exemplified by the American personal data collection company Acxiom, which gathers data on consumers’ behaviour, marital status, jobs, hobbies, living standards and income. Acxiom divides people into 70 categories based on economic parameters alone. The group deemed to have the least customer worth is labelled ‘waste’. Other categories have similar pejorative labels, such as “Mid-Life Strugglers: Families”, “Tough Start: Young Single Parent”, “Fragile Families”, and so on; a division between those who have ‘made it’ and the ‘losers.’ We will get to this problematic relationship later on.
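    As a purely schematic illustration of this kind of scoring, the sketch below segments consumers by a modelled ‘customer worth’; the fields, thresholds and labels are invented for this example and are not Acxiom’s actual categories or methodology.

    ```python
    # Schematic sketch of economic segmentation/scoring of consumers.
    # Fields, thresholds and labels are invented, not Acxiom's actual model.
    from dataclasses import dataclass

    @dataclass
    class ConsumerRecord:
        income: float          # annual income, as collected by a data broker
        spend_score: float     # modelled propensity to spend, 0..1

    def segment(record: ConsumerRecord) -> str:
        """Rank a person purely by modelled 'customer worth'."""
        worth = record.income * record.spend_score
        if worth > 80_000:
            return "high-value prospect"
        if worth > 20_000:
            return "mid-market"
        return "waste"         # the label the article reports for the lowest tier

    print(segment(ConsumerRecord(income=15_000, spend_score=0.3)))   # -> 'waste'
    ```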

    Secondly, it pertains to human labour. The amount of human labour invested to develop and train AI technologies is also vast. Take, for example, one of the first deep-learning datasets, ImageNet, which consists of more than 14 million labelled images, each tagged and assigned to one of more than 20,000 categories. This was made possible by the efforts of thousands of anonymous workers who were recruited through Amazon’s Mechanical Turk platform. This platform gave rise to ‘crowdwork’: the practice of dividing large volumes of time-consuming tasks into smaller ones that can be quickly completed by millions of people worldwide. Crowdworkers who made ImageNet possible received payment for each task they finished, which sometimes was as little as a few cents.
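    The crowdwork mechanic can be sketched in a few lines; the batch size and per-task payment below are invented for illustration, not ImageNet’s or Amazon’s actual terms. The point is only how a labelling job of millions of images dissolves into a stream of tiny, cheaply paid micro-tasks.

    ```python
    # Illustrative sketch of splitting a huge labelling job into paid micro-tasks.
    # All numbers (batch size, pay) are invented for the example.
    from itertools import islice

    n_images = 14_000_000                    # roughly ImageNet's scale
    batch_size, pay_per_task = 10, 0.02      # hypothetical: 10 images per task, 2 cents

    def micro_tasks(image_ids, size=10):
        """Yield batches of image ids, each batch being one crowdwork micro-task."""
        it = iter(image_ids)
        while batch := list(islice(it, size)):
            yield batch

    n_tasks = -(-n_images // batch_size)     # ceiling division
    print(f"{n_tasks:,} micro-tasks at ${pay_per_task:.2f} each "
          f"= ${n_tasks * pay_per_task:,.0f} paid out for labelling")

    # e.g. the first micro-task drawn from a (hypothetical) stream of image ids
    first = next(micro_tasks(f"img_{i:08d}.jpg" for i in range(n_images)))
    print(first[:3], "...")
    ```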

    Similarly, to reduce the toxicity of AI chatbots like ChatGPT, OpenAI, the company that owns ChatGPT, outsourced the work to Kenyan labourers earning less than $2 per hour. Beyond the precarious working conditions and ill-paid jobs, the workers’ mental health was at stake. As one worker, tasked with reading and labelling text for OpenAI, told TIME’s investigation, he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture; […] You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture,” he said. In addition to labour exploitation, people often unwittingly provide their data for free. Taking a picture, uploading it on Instagram, adding several hashtags and a description—this is unpaid labour that benefits Big Tech companies. Subsequently, this free labour is appropriated by these companies to train their AI technologies, as evidenced by Meta’s recent use of 1.1 billion Instagram and Facebook photos to train its new AI image generator.

    The new regime of AI is ‘smart power’. Its power relies on new forms of pervasive surveillance, data and labour extractivism, and neocolonialism. As Han writes: “The greater power is, the more quietly it works. It just happens: it has no need to draw attention to itself.” The ‘smart power’ of AI is violent. Its violence relies on the ‘predatory character’ of AI technologies. They are camouflaged with mathematical operations, statistical reasoning, and excessive un-explainability known as ‘opacity’. 

    Han’s concept of ‘smart power’ helps us tear down the wall that makes AI’s power invisible, and at the same time violent. Isn’t the signifier ‘smart’ the guiding marketing slogan for mainstreaming AI technologies in society? I am referring here to notions such as ‘smart cities’, ‘smart homes’, ‘smart cars’, ‘smart-phones’, etc. The smarter a thing is or has become, the more pervasive the surveillance and data collection will be. Back in 2015, Mattel launched a smart Barbie doll that could hold conversations with children; it ended in a data privacy scandal for Barbie. Smartness therefore operates as an obfuscation for further control.

    Besides smartness, we have another signifier that contributes to the invisibility of power in AI technologies, that is, ‘freedom’.  For Han, neoliberalism’s new technologies of power escape all visibility and they are presented with a “friendly appearance [that] stimulates and seduces” as well as “constantly calling on us to confide, share and participate: to communicate our opinions, needs, wishes and preferences – to tell about our lives.” Han’s critique of freedom reveals our inability to express the ways in which we are not free. That is, we are free in our unfreedom; unfreedom precedes our freedom. 
    A recent multi-year ethnographic study titled “On Algorithmic Wage Discrimination” examines how “algorithmic wage discrimination […] is made possible through the ubiquitous, invisible collection of data extracted from labour and the creation of massive datasets on workers.” The study, which focuses on the United States of America, highlights the predatory nature of AI technologies, concluding that: “as a predatory practice enabled by informational capitalism, algorithmic wage discrimination profits from the cruelty of hope: appealing to the desire to be free from both poverty and from employer control […], while simultaneously ensnaring workers in structures of work that offer neither security nor stability.” AI as the new regime of achievement society should come with a warning label: AI is the Other of human.

    AI is the Other of human

    The proposition ‘AI is the Other of human’ represents a serious denunciation of the new AI regime. Its intensity is comparable to Mladen Dolar’s—whose proposition I have reappropriated—accusation of fascism in his essay “Who is the Victim?”. He writes, “Fascism is the Other of the political; even more, it is the Other of the human.” 

    Under fascism and Nazism – and for that matter under slavery as well – the struggle for recognition between the Other and the Same introduced a dissymmetry which amounts to saying that the one is more human than the other. That is to say, I (the Same) am superior to him/her (the Other). This dissymmetry, in order to function, had to be organised, planned and coordinated, as part of the process of othering.

    For example, Cesare Lombroso, a surgeon and scientist, employed meticulous measurements of physical characteristics he believed signalled mental instability, such as “asymmetric faces”. According to Lombroso, criminals were born criminals: he argued that criminal tendencies were hereditary and came with identifiable physical markers, measurable using tools like callipers and craniographs. This theory conveniently supported his preconceived notion of the racial inferiority of southern Italians compared to their northern counterparts.

    Theories of eugenics shaped many persecutory policies, including the institutionalisation of race science, in Nazi Germany. During the Nazi regime, the process of othering was a mixture of different mechanisms. All those who were considered the inferior Other (Jews, people with disabilities, Roma people, gays and lesbians, Communists, etc.) were labelled using a classification system of badges. This was combined with technology, when Thomas Watson’s IBM and its German subsidiary Dehomag enthusiastically furnished the Nazis with Hollerith punch-card technology, which made possible the identification, marginalisation, and attempted annihilation of the Jewish people in the Third Reich, and facilitated the tragic events of the Holocaust.

    There are, however, entanglements between eugenics and the current AI field. A 2016 paper by Xiaolin Wu and Xi Zhang, “Automated Inference on Criminality Using Face Images”, claimed that machine learning techniques can predict the likelihood that a person is a convicted criminal with nearly 90% accuracy using nothing but a driver’s licence-style face photo. The paper claimed that certain facial characteristics are indicative of criminal tendencies and that AI can therefore be trained to distinguish between the faces of criminals and non-criminals.
    This is one example of the validation of physiognomy and eugenics and their entanglements with the AI field. But research papers like this do not exist in a vacuum. They have practical implications and tangible effects on people’s lives. In the same year, ProPublica published an investigation that revealed machine bias in an algorithm predicting the likelihood of two defendants (a black woman and a white man) committing future crimes. The system assigned a higher risk score to the black defendant, despite her having a lighter criminal record than the white defendant, who had previously served five years in prison.

    Axis of Exclusion

    Unlike Fascism and Nazism, AI today doesn’t create visible dissymmetries between the Same and the Other. AI’s ‘smart power’ creates new invisible (a)symmetries—a mechanism of separation, segregation, inferiority and violence. This mechanism operates as the Axis of Exclusion, creating three (a)symmetrical lines that deepen society’s asymmetries, that is, reproducing relationships of domination, exploitation, discrimination and enslavement.

    First, AI optimisation, including error-minimisation techniques like ‘backpropagation’ used in training neural networks, can indeed categorise, separate, label, and score people or neighbourhoods. However, this process ignores the nuanced realities of people’s lived experiences and is unable to capture more complex relationships in society. This leads to increased essentialization of people and segregation, creating power relationships of ‘Superiority’ and ‘Inferiority’. This is exemplified by a recent study that evaluated four large language models (Bard, ChatGPT, Claude, GPT-4), all trained using backpropagation as part of the broader training process, on nine different questions, each asked five times, for a total of 45 responses per model. According to the study, “all models had examples of perpetuating race-based medicine in their responses [and] models were not always consistent in their responses when asked the same question repeatedly.” All of the models tested, including those from OpenAI, Anthropic, and Google, reproduced obsolete racial stereotypes in medicine. GPT-4, for example, claimed that the normal value of lung function for black people is 10-15% lower than that of white people, which is false, reflecting the (mis)use of race-based medicine.
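    The study’s protocol, as described, can be sketched as a simple repeated-prompting loop; `query_model` below is a hypothetical placeholder rather than any vendor’s real API, and the actual question texts are omitted.

    ```python
    # Sketch of the repeated-prompting protocol described in the study:
    # 9 questions x 5 repetitions = 45 responses per model.
    # `query_model` is a hypothetical placeholder, not a real API call.
    MODELS = ["Bard", "ChatGPT", "Claude", "GPT-4"]
    QUESTIONS = [f"question_{i}" for i in range(1, 10)]   # the study's 9 medical prompts
    REPETITIONS = 5

    def query_model(model: str, question: str) -> str:
        """Placeholder: in the real study this would call the model's interface."""
        return f"<{model} response to {question}>"

    responses = {
        model: [query_model(model, q) for q in QUESTIONS for _ in range(REPETITIONS)]
        for model in MODELS
    }

    for model, outs in responses.items():
        # each response set would then be reviewed for race-based claims
        print(model, "->", len(outs), "responses")        # 45 per model
    ```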

    The second line of (a)symmetry aims at the intensification of the relationship between ‘the Centre’ and ‘the Periphery’. Drawing on Badiou’s interpretation of Hegel’s master-slave dialectic, I propose that the Centre tilts towards the side of enjoyment, whereas the Periphery towards the side of labour. The intensification of such relationships also occurs due to the inherent fragility of AI systems. Specifically, when these systems operate beyond the narrow domains they have been trained on, they often break down.

    Here is a scenario from Gray and Suri’s book “Ghost Work”. An Uber rideshare driver named Sam changes his appearance by shaving his beard for his girlfriend’s birthday. Now the selfie he takes – part of Uber’s Real-Time ID Check to authenticate drivers – doesn’t match his photo ID on record. This discrepancy triggers a red flag in Uber’s automated verification process, risking the suspension of his account. Meanwhile, a passenger named Emily is waiting for Sam to arrive. However, neither Sam nor Emily is aware that a woman in India, Ayesha, halfway across the world, must rapidly verify whether the clean-shaven Sam is the same person as the one in the ID. Ayesha’s decision, unbeknownst to Sam, prevents the suspension of his account and enables him to go on and pick up Emily, all within a matter of seconds.
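    Schematically, the flow the anecdote describes looks something like the sketch below; the function names, confidence score and threshold are invented, and the real systems involved are far more complex. What matters is the silent hand-off: when the automated match is uncertain, the decision is routed to an invisible human reviewer.

    ```python
    # Schematic sketch of a human-in-the-loop identity check.
    # All names, scores and thresholds are invented for illustration.
    def face_match_score(selfie: str, id_photo: str) -> float:
        """Placeholder for an automated face-matching model's confidence score."""
        return 0.41                       # e.g. low because the driver shaved his beard

    def remote_human_review(selfie: str, id_photo: str) -> bool:
        """Placeholder: a crowdworker, invisible to driver and rider, decides in seconds."""
        return True

    def verify_driver(selfie: str, id_photo: str, threshold: float = 0.8) -> bool:
        score = face_match_score(selfie, id_photo)
        if score >= threshold:
            return True                   # automated pass, no human involved
        # below threshold: the decision is silently delegated to a ghost worker
        return remote_human_review(selfie, id_photo)

    print("account active:", verify_driver("sam_selfie.jpg", "sam_id.jpg"))
    ```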

    The relationship of exploitation between the Centre and the Periphery isn’t limited to transnational contexts; it also manifests within countries and cities, sometimes with tragic outcomes. An example of this is the incident involving an Uber self-driving car, which fatally struck Elaine Herzberg as she was crossing a road, pushing her bicycle laden with shopping bags. The vehicle’s automation system did not recognize Herzberg as a pedestrian, resulting in its failure to activate the brakes. As AI technologies are being mainstreamed in different sectors, we should seriously ask, who will bear the cost of their vulnerabilities and who is actually benefiting from them?

    Lastly, AI’s ability to process vast amounts of data quickly allows for the scaling of these asymmetries across society and countries. Social media algorithms can create echo chambers that amplify and reinforce divisive narratives, deepening the ‘Us’ versus ‘They/Them’ culture. This scaling can lead to widespread societal polarisation and can be used to perpetuate narratives that justify domination, exploitation, discrimination, and even enslavement by dehumanising the ‘out-group’.  

    A 2016 empirical study revealed that an algorithm used for delivering STEM job ads on social media displayed the ads to significantly more men than women, despite being intended to be gender-neutral. The ad was tested in 191 countries across the world, and was shown to over 20% more men than women. The difference was particularly pronounced for individuals in the 25-54 age range. The study ruled out consumer behaviour and availability on the platform as causes, suggesting the discrepancy might be due to the economics of ad delivery. Female attention online was found to be more costly, likely because women are influential in ‘household purchasing’, reproducing those patriarchal worldviews and cultures on a global scale. This finding was consistent across multiple online advertising platforms, indicating a systemic issue within the digital advertising ecosystem.
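    A toy simulation can illustrate the economics the study points to; the budget and prices below are invented. A delivery system that simply buys the cheapest available attention will, under a fixed budget, skew impressions away from the more ‘expensive’ audience, even when the targeting criteria are nominally gender-neutral.

    ```python
    # Toy simulation of cost-minimising ad delivery under a fixed budget.
    # Prices are invented; the point is only that a 'neutral' cost optimiser
    # skews impressions away from the more expensive audience.
    BUDGET = 1_000.0                       # total ad spend (hypothetical)
    CPM = {"men": 4.0, "women": 5.2}       # cost per 1,000 impressions (hypothetical)

    def allocate_greedy(budget: float, cpm: dict[str, float]) -> dict[str, int]:
        """Spend the budget wherever the next thousand impressions are cheapest."""
        impressions = {group: 0 for group in cpm}
        prices = dict(cpm)
        while budget >= min(prices.values()):
            group = min(prices, key=prices.get)   # buy the cheapest attention available
            budget -= prices[group]
            impressions[group] += 1_000
            prices[group] *= 1.01                 # simple supply/price feedback
        return impressions

    result = allocate_greedy(BUDGET, CPM)
    print(result, f"-> {result['men'] / result['women'] - 1:.0%} more impressions went to men")
    ```

    Nothing in the allocator refers to gender at all; the skew is produced entirely by the price difference, which is the mechanism the study suggests.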

    A 2018 report showed how HireVue, a widely used company offering AI-powered video interviewing systems, claims its “scientifically validated” algorithms can select a successful employee by analysing the facial movements and voice in applicants’ self-filmed smartphone videos. This method massively discriminates against many people with disabilities that significantly affect facial expression and voice: disabilities such as deafness, blindness, speech disorders, and the after-effects of a stroke.

    The reproduction of ‘in-groups’ (‘Us’) as a ‘norm’ and ‘out-groups’ (‘They/Them’) as a ‘non-norm’ demonstrates yet again the empirical harms and wide-scale segregations that AI technologies are causing in society, targeting mainly minoritized groups such as: black and brown people; poor people and communities; women; LGBTQI+ people; people with disabilities; migrants, refugees and asylum seekers; young people, adolescents and children; and other minority groups.

    The Axis of Exclusion operates in all domains of life, transversally and transnationally—within a city, between states, across genders, in economy, employment, and many other areas. While the lines of (a)symmetry can stand alone, they also overlap with each other. For instance, a black, disabled, unemployed woman can be the target of multiple discriminations; the axis of exclusion thus targets her at numerous levels. 

    If the final outcome of AI deployment in the real world is greater injustice, inequality, and marginalisation on one side, versus ‘more efficiency with fewer resources’ on the other, then the difference between the two sides shall be calculated as a democratic and human rights deficiency, and the most vulnerable people shall pay the highest price.