A talk given at the launch of the 'All Access AI' network at Goldsmiths, University of London, 1st April 2019.
This talk refers to a text developed for Propositions for
Non-Fascist Living: Tentative and Urgent, published by BAK, basis voor actuele kunst and MIT Press
(forthcoming November 2019).
intro
This talk is about some pressing issues with AI that don't usually make the headlines,
and why tackling those issues means developing an antifascist AI.
When I talk about AI I'm talking about machine learning and artificial neural networks, also known as deep learning.
I'm addressing actual AI, not a literary or filmic narrative about post-humanism.
AI is political.
Not only because of the question of what is to be done with it, but because of the political tendencies of the technology itself.
The possibilities of AI arise from the resonances between its concrete operations and the surrounding political conditions.
By influencing our understanding of what is both possible and desirable it acts in the space between what is and what ought to be.
concrete
Computers are essentially just faster collections of vacuum tubes.
How can they emulate human activities like recognising faces or assessing criminality?
Think about a least squares fit: you're trying to assess the correlation between two variables by fitting a straight line to scattered points,
so you calculate the sum of the squares of distances from all points to the line and minimise that.
Machine learning does something very similar.
It makes your data into vectors in feature space so you can try to find boundaries between classes
by minimising sums of distances as defined by your objective function.
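To make that parallel concrete, here is a minimal sketch in Python (numpy only; the data and the two-class labels are invented purely for illustration). It first fits a least squares line, then treats the same points as vectors in feature space and nudges a linear boundary downhill on a logistic objective:

```python
import numpy as np

# Invented data for illustration: two roughly correlated variables.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 2, 50)

# Least squares fit: choose slope m and intercept c to minimise
# the sum of squared distances from the points to the line.
A = np.stack([x, np.ones_like(x)], axis=1)
(m, c), *_ = np.linalg.lstsq(A, y, rcond=None)

# Machine learning does something very similar in more dimensions:
# each point becomes a vector in feature space, and a boundary is
# moved to minimise an objective (loss) function over the dataset.
labels = (y > m * x + c).astype(float)      # two made-up classes
feats = np.stack([x, y, np.ones_like(x)], axis=1)
w = np.zeros(3)                             # boundary: w . (x, y, 1) = 0

for _ in range(1000):                       # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-feats @ w))    # predicted class probability
    grad = feats.T @ (p - labels) / len(x)  # gradient of the logistic loss
    w -= 0.1 * grad                         # step downhill on the objective
```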
These patterns are taken as revealing something significant about the world.
They take on the neoplatonism of the mathematical sciences:
a belief in a layer of reality which can be best perceived mathematically.
But these are patterns based on correlation not causality;
however complex the computation there's no comprehension or even common sense.
Neural networks doing image classification are easily fooled by strange poses of familiar objects.
So a school bus on its side is confidently classified as a snow plough.
Yet the hubristic knights of AI are charging into messy social contexts,
expecting to be able to draw out insights that were previously the domain of discourse.
Deep learning is already seriously out of its depth.
callousness
Will this slow the adoption of AI while we figure out what it's actually good for?
No, it won't, because what we are seeing is 'AI under austerity':
the adoption of machinic methods to sort things out after the financial crisis.
The way AI derives its optimisation from calculations based on a vast set of discrete inputs
matches exactly the way neoliberalism sees the best outcome coming from a market freed of constraints.
AI is seen as a way to square the circle between eviscerated services and rising demand
without having to challenge the underlying logic.
The pattern finding of AI lends itself to prediction and therefore preemption
which can target what's left of public resource to where the trouble will arise,
whether that's crime, child abuse or dementia.
But there's no obvious way to map operations like backpropagation back onto human reasoning,
which not only endangers due process but produces thoughtlessness in the sense that Hannah Arendt meant it:
the inability to critique instructions, the lack of reflection on consequences, a commitment to the belief that a correct ordering is being carried out.
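To see why those operations resist translation, here is a toy network trained by backpropagation (a standard XOR example, sketched in Python with numpy; every detail is invented for illustration). The training loop works, but what it leaves behind is a grid of numbers, not a chain of reasons:

```python
import numpy as np

# Toy two-layer network learning XOR by backpropagation.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0.], [1.], [1.], [0.]])            # target outputs

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    y = sigmoid(h @ W2 + b2)
    d2 = (y - t) * y * (1 - y)                # backward pass: chain rule
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d2; b2 -= 0.5 * d2.sum(0)
    W1 -= 0.5 * X.T @ d1; b1 -= 0.5 * d1.sum(0)

# The trained model's entire 'rationale' is matrices like this one:
print(W1)
```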
The usual objection to algorithmic judgements is outrage at the false positives,
especially when they result from biased input data.
But the underlying problem is the imposition of an optimisation based on a single idea of what is for the best,
with a resultant ranking of the deserving and the undeserving.
What we risk with the uncritical adoption of AI is algorithmic callousness,
which won't be saved by having a human-in-the-loop
because that human will be subsumed by the self-interested institution-in-the-loop.
By throwing out our common and shared conditions as having no predictive value,
the operations of AI targeting strip out any acknowledgement of system-wide causes,
hiding the politics of the situation.
far right
The algorithmic coupling of vectorial distances and social differences
will become the easiest way to administer a hostile environment,
such as the one created by Theresa May to target immigrants.
But the overlaps with far right politics don't stop there.
The character of 'coming to know through AI' involves reductive simplifications
based on characteristics treated as innate in the data,
and simplifying social problems to matters of exclusion based on innate characteristics
is precisely the politics of right-wing populism.
We should ask whether the giant AI corporations would baulk at putting the levers of mass correlation
at the disposal of regimes seeking national rebirth through rationalised ethnocentrism.
At the same time that Daniel Guérin was writing his book of 1936 examining the ties between fascism and big business,
Thomas Watson's IBM and its German subsidiary Dehomag were enthusiastically furnishing the Nazis with Hollerith punch card technology.
Now we see the photos from Davos of Jair Bolsonaro seated at lunch between Apple's Tim Cook and Microsoft's Satya Nadella.
Meanwhile the algorithmic correlations of genome wide association studies
are used to sustain notions of race realism and prop up a narrative of genomic hierarchy.
This is already a historical reunification of statistics and white supremacy: the mathematics of logistic regression and correlation that is so central to machine learning
was developed by the eugenicists Francis Galton and Karl Pearson.
antifascist
My proposal here is that we need to develop an antifascist AI.
It needs to be more than debiasing datasets because that leaves the core of AI untouched. It needs to be more than inclusive participation in the engineering elite because that, while important, won't in itself transform AI.
It needs to be more than an ethical AI, because most ethical AI operates as PR to calm public fears while industry gets on with it.
It needs to be more than ideas of fairness expressed as law, because that imagines society is already a level playing field
and obfuscates the structural asymmetries generating the perfectly legal injustices we see deepening every day.
I think a good start is to take some guidance from the feminist and decolonial technology studies
that have cast doubt on our cast-iron ideas about objectivity and neutrality.
Standpoint theory suggests that positions of social and political disadvantage can become sites of analytical advantage,
and that only partial and situated perspectives can be the source of a strongly objective vision.
Likewise, a feminist ethics of care takes relationality as fundamental;
establishing a relationship between the inquirer and their subjects of inquiry
would help overcome the onlooker consciousness of AI.
To centre marginal voices and relationality,
I suggest that an antifascist AI involves some kinds of people's councils,
to put the perspective of marginalised groups at the core of AI practice
and to transform machine learning into a form of critical pedagogy.
This formation of AI would not simply rush into optimising hyperparameters
but would question the origin of the problematics,
that is, the structural forces that have constructed the problem and prioritised it.
AI is currently at the service of what Bergson called ready-made problems:
problems based on unexamined assumptions and institutional agendas,
presupposing solutions constructed from the same conceptual asbestos.
To have agency is to reinvent the problem,
to make something newly real that thereby becomes possible.
Unlike the probable, the possible is something unpredictable, not a rearrangement of existing facts.
Given the corporate capture of AI, any real transformation will require a shift in the relations of production.
One thing that marks the last year or so is the rise of internal dissent at Google, Amazon, Microsoft, Salesforce and so on
about the social purposes to which their algorithms are being put.
In the 1970s, workers at the UK arms manufacturer Lucas Aerospace came up with the Lucas Plan,
which proposed the comprehensive restructuring of their workplace for socially useful production.
They not only questioned the purpose of the work but did so by asserting the role of organised workers,
which suggests that the current tech worker dissent will become transformative
when it sees itself as creating the possibility of a new society in the shell of the old.
I'm suggesting that an antifascist AI is one that takes sides with the possible against the probable,
and does so at the meeting point between organised subjects and organised workers.
But it may also require some organised resistance from communities.
A thread is a sequence of programmed instructions executed by a processor.
On an Nvidia GPU, one of the chips that powers AI, a warp is a group of 32 threads executed in parallel.
How uncanny that the language of weaving looms has followed us from the time of the Luddites to the era of AI.
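As a minimal sketch of that grouping (the warp size of 32 is fixed on current Nvidia hardware; the thread count per block here is a hypothetical launch configuration, modelled in plain Python rather than real GPU code):

```python
WARP_SIZE = 32           # fixed on current Nvidia hardware
THREADS_PER_BLOCK = 128  # hypothetical launch configuration

# Each thread's position determines the warp it is woven into.
for thread_id in range(THREADS_PER_BLOCK):
    warp_id = thread_id // WARP_SIZE   # which warp this thread belongs to
    lane_id = thread_id % WARP_SIZE    # its lane within that warp
    if lane_id == 0:
        print(f"warp {warp_id}: threads {thread_id}-{thread_id + WARP_SIZE - 1} execute in lockstep")
```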
The struggle for self-determination in everyday life may require a new Luddite movement,
like the residents and parents in Chandler, Arizona who have blockaded Waymo's self-driving vans.
'They didn't ask us if we wanted to be part of their beta test,' said a mother whose child was nearly hit by one.
The Luddites, remember, weren't anti-technology but aimed 'to put down all machinery hurtful to the Commonality'.
The predictive pattern recognition of deep learning is being brought to bear on our lives with the granular resolution of Lidar.
Either we will be ordered by it or we will organise.
So the question of an antifascist AI is the question of self-organisation,
and of the autonomous production of the self that is organising.
Asking 'how can we predict who will do X?' is asking the wrong question.
We already know the destructive consequences of poverty, racism and systemic neglect
on the individual and collective psyche.
We don't need AI as targeting but as something that helps raise up whole populations.
Real AI matters not because it heralds machine intelligence
but because it confronts us with the unresolved injustices of our current system.
An antifascist AI is a project based on solidarity, mutual aid and collective care.
We don't need autonomous machines but a technics that is part of a movement for social autonomy.