The Global Empire of AI: Whose Vision of the World Is Taking Hold?
When we think about artificial intelligence (AI), we may think of it as an ethereal technology—as something that exists somewhere "in the cloud," that is intangible, elusive, and not really of this world. When we consider AI in that way, we also start to imagine it as something that cannot be held – in the same way that one cannot hold a cloud – and, perhaps, as something that no one can really own or be held accountable for. And yet, AI has a material reality – it depends on energy- and water-hungry data centers, critical and rare earth minerals, underwater cables and much more. More than that, AI is not just a set of technologies; it is a political project—one that advances a narrow vision of what makes a life worth living, reducing human flourishing to metrics of productivity, efficiency, and optimization. These were some of the themes we explored at the launch of Rachel Adams’ new book, The New Empire of AI: The Future of Global Inequality, which unpacks how AI is entrenching new forms of empire, shaping whose knowledge, labor, and values matter in a world increasingly governed by algorithms and Big Tech.
Last week, the EthicsLab partnered with the UCT AI Initiative and the Vice-Chancellor of UCT, Prof Mosa Moshabela, to celebrate this new book by Dr Rachel Adams, an honorary research associate at the EthicsLab. In the book, Adams draws out the parallels and continuities between the historical process and project of empire building – its rationales and end products – and the way in which AI technologies are being rolled out. She presents AI as a fusion of the old and the new, extending colonial logics of morality, extraction, dependency and governance, while introducing new mechanisms of control that operate non-territorially, at scale, and through computation.
Through a series of logics that present it as the ‘one technology that can do it all’, AI is often justified as a force for good – as described, for instance, by Youssef Hassan in the context of AI for Development discourses on the African continent. Yet at the same time, its exploitative underpinnings are masked—just as colonial discourses such as the civilizing mission and development once obscured their own. For instance, AI-based technologies rely on the extraction of natural resources, data, and (digital) labour to build (cloud) infrastructures that may perpetuate violence and deepen economic dependencies. Much like empires ruled through bureaucratic and racial classifications, AI fosters algorithmic rule, imposing bureaucratic control mechanisms that shape political, social, and economic identities. AI governance extends the colonial logic of divide and rule by fragmenting populations through algorithmic classification, reinforcing economic and social hierarchies. Equally, empire has always been a deep entanglement between the state and private capital. While corporations often undermine state regulation, empire—both historically and today—remains fundamentally a public-private partnership. For instance, Cory Doctorow has spoken about states being permissive of Big Tech companies precisely because they could see them as ‘potential deputies for their own exercises of power’.
Yet, as Adams shows in her book, while AI shares structural similarities with earlier iterations of empire as a historical formation, its dimensions of power are also fundamentally new. It is a largely remote form of governance, not tied to physical conquest but premised on data, computation, and digital infrastructure. This empire also extends to everyday interactions and is deeply intimate, shaping how individuals make decisions, experience emotions, and navigate their digital and social worlds. Ultimately, it also produces new geopolitical and economic divides premised on ownership of and access to compute power, training data, and algorithmic control.
Central to the analysis of AI as an empire is the question of power – both the desire to amass power and an analysis of its effects. On the desire to amass power, Adams describes how ideas of AI as a superpower in itself – as something to be pursued and capitalized upon – motivate nation-states’ investment in this technology. Perhaps more nefariously, she also links the articulation of concerns about AI as an existential threat to humankind to a geopolitical power struggle between the United States and China, positioning claims about AI’s possible harm in the context of attempts by both those nations to control this technology.
Importantly, she encouraged us to be skeptical about AI and the claims that surround it – skeptical precisely about the presentation of AI as ‘the one technology that can do it all’ – the technology that can solve poverty and climate change, that can improve all of our living standards and not just those of the lucky few, that can cure all our illnesses and answer all of our questions.
Positioning AI technologies as the next iteration of empire, Adams reminded us that the end product of European colonialism was the establishment of a deeply unequal world, and that this was deliberate. If we think about AI as worldmaking in this way, then it is clear that what we need next is to reflect critically on the kind of world we want to create – and to provide evidence of how AI is currently shaping power relations and inequality, critiques of those dynamics, and concrete proposals and policy interventions that reverse or subvert this larger political project. One example is the way in which jurisdictions, institutions and civil society organizations around the world are pushing back on the idea that data should flow freely around the globe, insisting instead that data are resources and assets to be protected – a process that Sebastián Leheudé calls ‘data turbulence’.
The world that the processes of colonization, industrialization, modernization and globalization have created is one that serves the few at the cost of the many. It is one that has brought our planet to the brink and caused a massive depletion of biodiversity. It has entailed marked processes of othering and dehumanization, and it has caused many to lead lives of hardship, deprivation and injustice. Adams’ analysis demonstrates that, on the current trajectory, AI technologies will entrench these patterns – which should concern us all.
Yet at the EthicsLab, we are interested in imagining a different world, and in interrogating critically what role ethics could play in dismantling empire (whilst we are under no illusions as to the role it has played in upholding empire). Is it possible to imagine a technologically enhanced world that serves different values? If AI is currently driven by profit and the accumulation of power, could we imagine these technologies working for the public interest? Could they work to promote care, solidarity and justice? Could they help us protect the most vulnerable people and our environment? Adams’ stark analysis – and the book launch – provoked these powerful questions and, in that way, helped articulate an urgent research agenda for AI Ethics scholarship.