I was interviewed by Diyar Saraçoğlu, translator of 'Yapay Zekâya Direnmek' (the Turkish edition of 'Resisting AI'), for their series on The Political Construction of Artificial Intelligence, which gave me an opportunity to add some reflections on the current moment.
Can artificial intelligence be seen not only as a technical development but also as an ideological project? How do you view the fundamental issues of AI from a historical perspective?
I don't think any development is 'only' technical. All technology is embedded in history, politics and our social imaginaries.
It's more helpful to think in terms of technopolitics, where the technical and political dimensions are intertwined like strands of DNA.
AI in particular is an apparatus, a configuration of concepts, investments, policies, institutions and subjectivities that act in concert to produce a certain kind of end result.
In the case of AI, the historical currents it's channelling include eugenics and white supremacy.
As you discussed in your book Resisting AI, are there parallels or intersections between the historical development of artificial intelligence and fascism?
The most direct connection is in the idea of measurable intelligence.
In the book I discuss the way AI can trace its history back to the eugenics of Victorians like Francis Galton and Karl Pearson.
The maths that Pearson developed for eugenics, linear regression, is at the root of much of machine learning.
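As a rough sketch of that lineage: Pearson's simple regression fits a line to paired observations by minimising squared error, and its slope is just his correlation coefficient rescaled, while modern machine learning keeps the same shape of problem, swapping the line for a parameterised model and the squared error for a generic loss.

\[
\hat\beta_0,\hat\beta_1 \;=\; \arg\min_{\beta_0,\beta_1} \sum_{i=1}^{n} \bigl(y_i - \beta_0 - \beta_1 x_i\bigr)^2,
\qquad
\hat\beta_1 \;=\; r_{xy}\,\frac{\sigma_y}{\sigma_x}
\]
\[
\hat\theta \;=\; \arg\min_{\theta} \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f_\theta(x_i),\, y_i\bigr)
\]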
More importantly, their work led directly to ideas about IQ as a justification for racial superiority, and to the idea of 'general intelligence', which became the 'G' in AGI (artificial general intelligence).
These ideological practices directly connect the US race laws of the 1920s to the Nazi regime.
In other words, the belief in AGI, which led directly to the founding of companies like OpenAI and DeepMind, is at its heart a belief in racial and biological supremacy, and we shouldn't be surprised when we see the ever more obvious overlap between Silicon Valley and forms of fascistic politics.
You argue that artificial intelligence can lead to algorithmic authoritarianism by exacerbating existing inequalities and injustices. How does this process work, and through which mechanisms do you think this trend can be reversed?
This is a general tendency in the combination of machine learning and bureaucracy, because the technology increases the distance between institutional decisions and lived experience.
AI's predictions are reductive abstractions whose opacity removes the possibility of due process.
This becomes what Hannah Arendt called 'thoughtlessness', by which she meant the lack of questioning that enables bureaucrats to function in ways that cause violent harm to vulnerable people.
AI is presented as a 'solution' to structural problems that already prevent people from meeting their fundamental needs, but it actually makes things worse.
The authoritarianism is present in these systems even before oligarchs like Trump and Musk come to power.
The ways to reverse it are to dismantle the ordering mechanisms of the so-called liberal state and replace them with more federated, horizontal and directly accountable systems.
You state that AI's techno-social nature reflects the shaping of technical components by social processes and vice versa. How do you assess the reflections of this interaction in AI applications?
AI is a direct reflection of a neoliberal social vision in that it depends on dividing us, not just into individuals but into what Deleuze called 'dividuals' (chunks of data), and then using market-like mechanisms of algorithmic speculation to optimise the outputs.
In turn, the spread of AI increases the automation of social interactions in every part of our lives and continues to transform social experience into the enactment of different algorithms.
Considering the central role of data in today's capitalist production relations and AI's dependency on big data, how does this data dependency legitimize the domination AI establishes over individuals and communities? Is it possible to develop alternative data governance models?
I would say that it's actually AI that legitimises the data extractivism we experience as workers and citizens.
AI is dependent on ever-increasing volumes of data, so any kind of data protection or privacy is going to be sacrificed in the name of 'AI supremacy'.
There are plenty of examples of people working on ideas for more democratic or community-led data governance, but what I'd like to ask is: how much of that data do we need in the first place?
I'm not arguing against the usefulness of data in specific situations, but I am saying that the paradigm of datafication and data-led solutions has got out of hand and distracts us from alternative ways of dealing with things.
It's clear that data-driven and predictive methods simply don't work in many social contexts.
Before we get to democratic data governance in any situation, we should ask: do we even need to turn this situation into data?
Are we just 'seeing like a State', and if so what are the alternative ways to assess the situation and make decisions about what kinds of intervention are needed?
In the development of Large Language Models (LLMs), how does the labor of low-paid workers involved in tasks like data labeling become obscured within the technology’s production process? What do these hidden labor relations reveal about the political construction of AI?
The labour of low-paid workers has been made deliberately invisible since the start of the current wave of AI, which began with the ImageNet dataset.
It was obscured by using crowdsourced virtual labour and, in particular, by locating a lot of that labour in the Global South.
LLMs make this process even more invisible because they don't require labelled data.
While an image classification AI needs lots of labelled images of cats, cars, people or whatever, LLMs are an example of 'self-supervised' learning.
The algorithms derive patterns from the data directly, without the need for human intervention.
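As a toy sketch of the difference (illustrative only, not any real training pipeline): a supervised classifier needs a human-written label attached to every example, whereas a language model manufactures its own targets by treating the next word in the raw text as the thing to predict.

```python
# Toy contrast between supervised and self-supervised training data.
# Illustrative sketch only: real LLMs use subword tokenisers and neural networks.

# Supervised image classification: a person had to write every label.
supervised_examples = [
    ("photo_0001.jpg", "cat"),
    ("photo_0002.jpg", "car"),
]

# Self-supervised language modelling: the targets come from the raw text itself.
raw_text = "the model predicts the next word in the sentence"
tokens = raw_text.split()

self_supervised_examples = [
    (tokens[:i], tokens[i])  # context so far -> next token, no human labeller involved
    for i in range(1, len(tokens))
]

for context, target in self_supervised_examples[:3]:
    print(f"context={context} -> target={target}")
```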
However, they still need people involved because otherwise they will reproduce toxic output.
It's easy to get a raw LLM to produce racist or Nazi sentiments or to talk about rape, for example, because these kinds of ideas are lurking in the training data.
The new need for human labour is to find the ugly and unacceptable output in the LLM so that it can be censored, which in practice means that people in poor countries have to review the worst kinds of output, such as child abuse material, so that the rest of us don't have to see it.
More broadly, this also means that the companies get to decide what is 'unacceptable' output from an LLM, which is a very political form of power.
This doesn't just apply to obscene content but to historical facts and political ideas.
In a world where the route to knowledge passes through LLMs, the definitions of what is 'normal' and 'acceptable' are entirely under the control of tech corporations.
In your book, you discuss the relationship between AI and necropolitics, specifically how certain lives are rendered disposable. For example, how do algorithmic systems targeting migrants or workers reproduce this violence? What structural changes are necessary to prevent the deadly consequences of AI?
AI systems are necropolitical because they are an intensification of the ways people's lives are already valued differently.
AI absorbs the social and structural hierarchies present in society and applies a utilitarian logic of optimisation to amplify them.
Applying this logic of optimised efficiency to marginalised lives inevitably results in a kind of 'lifeboat ethics' where many people are allowed to perish in order to save the valuable few.
In my opinion, it's no accident that the particular structure of AI that emerges in our society seems to be so anti-worker, when the very institutional and knowledge-generating structures that produced AI are predicated on a hatred and fear of the power of ordinary people.
Asking what structural changes are necessary to prevent the deadly consequences of AI is the same as asking what changes are needed to bring about social justice and to rein in the destruction of the environment.
As depressing as it is to watch the unfolding of AI's necropolitics, it also generates the possibility of convergence among social movements, because its harmful effects are felt across so many different areas of life.
It may be that AI, as the most condensed form of oppressive technology, also becomes one of the focal points of a coalition of resistance.
That's my motivation for developing ideas for the broad-based movement that I'm calling 'decomputing'.
Without falling into the dichotomy of viewing the Trump era as bad and pre- or post-Trump periods as good, how should we think about the relationships between tech oligarchs like Musk, Zuckerberg, Pichai, and Bezos and fascist (or neo-fascist/new right) regimes, given their attendance at Trump’s inauguration? In the coming days, how might collaboration between these figures and Trump-like leaders shape the role of AI and related technologies in constructing oppressive, divisive, and authoritarian policies?
What we're seeing under Trump 2.0 is the full expression of the technopolitics I wrote about in 'Resisting AI'.
My book was a warning about the ways AI leans towards 'fascistic solutionism' and its connections to certain kinds of Silicon Valley ideologies like rationalism and neoreaction.
AI can't be relied on to give a useful answer to a factual question, but it will still give an answer no matter how conspiratorial or delusional the question is.
We can see how useful this is to the technofascist agenda in the way Elon Musk's DOGE team is using LLMs to decide which spending to cut and who to fire, based on imaginary connections to hated concepts like 'diversity' and 'inclusion'.
As I wrote in 'Resisting AI', "Deep learning can cook up answers to deplorable questions that are not based on causality but which are only being asked in order to deepen problematic power relations".
In Turkey, due to the authoritarian regime, the health data of a significant portion of the population is centralized under state control. While this makes such data a major target for capitalist companies operating in AI, healthcare, and insurance, it also remains ineffective during pandemics and disasters. Here in Turkey, those producing this data, members of the Turkish Medical Association and the Chamber of Computer Engineers, have come together to form a joint committee and discuss how worker oversight of data and AI applications should be conducted. To my knowledge, a similar debate occurred in the UK concerning DeepMind's access to NHS data. How are current debates around the centralization and use of health data unfolding in the UK?
The situation in the UK is bad and getting worse.
The previous Conservative government awarded the contract for the central NHS data store to Palantir, a company whose main business is analysing data for US intelligence agencies and the military.
Rather than bringing a change of direction, the new Labour government has completely bought into the idea that AI is core to its mission of 'growth'.
As a result, they're introducing legislation that makes it easier to share health and other data with AI companies for profit, and makes it harder to complain about an algorithm making an automated decision about your life.
The health system in the UK is at the point of collapse and requires investment in people and infrastructure, but the government is obsessed by the idea that only AI can save us.
As you pointed out in your book, the spread of AI across various industries leads not so much to job loss as to an increase in precarious and routinized work. At the same time, we are witnessing a period where workers developing AI and related technologies are increasingly distanced from control over the software and technologies they produce. The use of these technologies for corporate profits, surveillance, destruction, and occupation exacerbates this situation further. Under such conditions, is it possible to develop solidarity and counter-strategies on a smaller scale—for instance, between AI developers and data labeling workers, or between software engineers and Amazon warehouse and delivery workers? Could these kinds of micro-level collaborations evolve into a broader-scale initiative, akin to a new Lucas Plan, aimed at reclaiming decision-making power in production processes and ensuring secure working conditions? If so, how could such a solidarity and organizational model be constructed?
The possibility of such solidarity always exists and, as I wrote in the book, there are great examples of it in recent history.
My current feeling is that it is triggered by moments of crisis, whether small or large.
The inertia of existing conditions tends to hold us in place as subjects, with the boundaries of our imagination set by what is possible under neoliberalism.
However, people can be jolted into new forms of action by events that break through the status quo.
Even the Lucas Plan was triggered by the threat of imminent redundancies in the Lucas factories.
It's impossible to know where these triggers will come from;
looking at contemporary events, it could be a shared anger at mass deportations or the need to recover and repair after a natural disaster induced by climate change.
Whatever form it takes, history tells us that some people need to be doing the work of preparation in the background so that both structures and concepts are ready for the moments of imminent collapse.
This is the work of grassroots union organising, of community-based anti-fascism, of all of the activities that start from solidarity and mutual aid.
For me, the important thing is that this activity is understood as having the goal of transformation, not just mitigation.
It's vital to reimagine alternative futures and to propose possible worlds that could replace the unending social violence of the present.