
All that glitters?! Intersectional Perspectives on AI


Last week, I gave a talk at the Omnipresent and Hidden workshop at Humboldt Institute for Internet and Society. Next to Mophat Okinyi, who talked about tech workers’ and content moderators’ rights, and Raghavendra Selvan, who talked about the environmental impacts of AI, I focused my talk on the value of intersectional feminist approaches to tackle the various injustices of AI. This is the written version of the talk. A big thank you to Theresa Züger and Jan Distelmeyer for the invite.


I have a confession to make: I have worked on AI for many years, but equally, for many years, it feels like a topic that is kind of haunting me. If I could choose, I’m not sure if I would work on it at all. But it’s here, everyone is talking about it, there is a lot of money in it, some people are afraid of it, some people are excited about it, some people try to avoid it, and some people just use it daily.

So, here I am again, talking about AI, because it is advancing rapidly and by now poses serious threats to society and the environment. It can't be ignored. However, I want to propose a more radical perspective on AI in this talk today. After having focused a lot of my work on the mechanisms that could make AI systems better, more inclusive, or less environmentally harmful, today I want to ask:

Do we need AI at all? And if not, what could structural resistance look like?

If we consider the history of technology, new innovations have always come with big promises. The washing machine was advertised with the promise that it would save women time on washing, and therefore create more freedom. The microwave came with the promise of saving time on cooking. The bicycle promised independence and mobility. Looking at the data today, these promises were not really kept: women still do the majority of housework, their mental load has stayed the same, and on top of that they now also need to do wage labour because of the rising cost of living. Maybe some time was saved with these innovations, but standards have also increased.

The rise of the internet brought the promise of a genderless and therefore non-hierarchical space. In the documentary Visions of Heaven and Hell (1994) Lili Burana, editor of the Future Sex magazine, says: 

“It gives you the opportunity to go into cyberspace, and change not just your physical appearance, but your sexual orientation or your gender. And that is such an incredible possibility for people — I don’t see how that could be anything less than irresistible.”

Today, the internet is an extremely hostile space: digital violence has risen, mis- and disinformation have increased, and algorithms on social media platforms now favor apolitical content about cooking, pets, and sports. According to a poll for the New Britain Project, two-thirds of 16- to 24-year-olds think social media does more harm than good, and three-quarters now want tougher regulation to protect younger people from it. All these examples show: while the promises of technologies were big, many were not kept, especially for minorities. Some things even got worse.

Something similar is now happening with “AI”, a very unspecific term that doesn’t really allow for the granularity of the conversation (but, that’s a different talk). With AI, there is a specific rhetoric that comes up in many rooms and conversations. It sounds like this:

‘Artificial Intelligence is advancing fast. We need to figure out how to use it meaningfully, because it can be both: good, or bad. That’s why we need to make the bad parts good.’

Having worked on AI from both a feminist and an environmental justice perspective for many years, I can say with confidence that this argument is flawed. It suggests that because there are good parts, the bad parts are acceptable and the goal is merely to minimize them. Saying AI is good and bad does not question the very existence of AI at all. It implies that it will balance itself out; that we need to know more before we can make an assessment; that we should look at it from a neutral position, assessing both potentials and harms. In their paper ‘Discourses of Climate Delay’, Lamb et al. write about how technologies are often used to distract from larger political questions. I want to argue that questioning whether AI is good or bad is part of this political distraction: it is a strategy to divert attention from the structures of power that are actually in place.

Because from a feminist perspective, the harms of AI by now largely outweigh the benefits.

The training of AI does not happen in a vacuum: there are real people, mostly from the majority world, who are paid very little and find themselves in precarious work situations. Their labor is made invisible. The environmental impacts of AI are undeniable: from water use, carbon emissions, and the noise pollution of data centers to intensive resource extraction, AI is extremely harmful for the planet. Digital technologies cause more emissions than the entire aviation industry.

At the same time, the advances of AI are not even socially accepted. A recent poll from Beyond Fossil Fuels shows that the majority of Europeans polled across five different countries want more rules to limit the impacts of data centers on energy, water and the economy.

AI is also used in warfare, mass surveillance, border controls and genocides, targeting individuals and obfuscating the accountability of those employing these technologies, as research from Human Rights Watch, Article 19 and Amnesty International shows. So why, in so many rooms, is one of the key sentences still: ‘AI can be either good, or bad. So let’s make the bad parts better’? Or, to reference Michelle Thorne:

How do we dismantle and challenge the ‘AI imperative’ in the first place?

The only way to answer this question is from an intersectional perspective. Intersectionality, a term coined by Kimberlé Crenshaw, shows how different identities, such as race, nationality, gender, or class, intersect and create different inequalities. Intersectional feminists know how to ask the difficult questions; they know how to analyze power. They focus on structures, not individuals. And exactly this is what is needed for AI. An intersectional perspective allows us to ask: Whose interests are served? Who is impacted, and who remains invisible?

Let’s find some answers, starting with the first question: Whose interests are actually being served by the advancement of AI? Who benefits from it?

We can see the winners very clearly: AI is now the only reason why the United States still shows economic growth. Without AI, the US-American economy would literally stagnate, according to several articles of the last few days, for example by Michael Roberts. AI (and therefore, data centers) is the biggest export of the USA, and the only way to mask the cost-of-living crisis there. And, of course, large corporations also benefit from it. Consultancies such as McKinsey and Boston Consulting Group attributed an estimated 20–40% of their 2024 revenue to generative AI applications. AI is also creating new billionaires at a record pace, according to a recent CNBC article by Robert Frank. The accumulation of wealth is becoming more extreme, thanks to AI.

Let’s find an answer to the second question: Who is impacted by AI, and who remains invisible?

AI needs rare earth materials. In a session organized by Global Circuits / Green Screen Coalition, I recently learned from Maurice Carney of Friends of the Congo that Congo is actually one of the world’s richest countries in terms of resources, and that its political instability has benefited unregulated extraction of rare earth materials: whether back in the days as a Belgian colony, with cobalt extracted for bicycles, or the uranium sourced there for nuclear bombs, or now the lithium for AI. The impacts are undeniable, but they remain invisible. Electronic waste is the fastest-growing waste stream in the world. So if you look, you will find the communities that are impacted and often remain invisible.

An intersectional lens allows for exactly this level of analysis. It allows us to build alliances. It allows us to focus on economies of care, rather than economies of scale. It allows us to value repair, not the shiny and the new. It allows us to go, as Logic magazine put it brilliantly, from “moving fast and breaking things” towards “moving slow and healing things”. It allows for collective resistance.

If we consider AI from intersectional perspectives, we can shift the focus from it being an imperative, a technological tool, a neutral analysis between good or bad, a question of efficiency and optimization — towards a focus on AI as a political instrument, a harmful innovation, and a belief system that upholds the structures of power that should actually fall apart.