Repair as Translation: Thinking AI Ethics from Salvage Worlds

02 Dec 2025

Conversations in AI ethics increasingly acknowledge that questions of bias, privacy, and accountability cannot be separated from the material infrastructures that make digital systems possible. Energy, minerals, labour, waste, and toxicity sit uneasily at the margins of ethical frameworks that continue to privilege data and algorithms as their primary objects of concern. A recent EthicsLab webinar with anthropologist Samwel Moses Ntapanta prompted reflection on what it would mean to translate ethnographic work on repair, reuse, and salvage into an AI ethics agenda in Africa, and on what such a translation would require of AI ethics itself.

Ntapanta’s scholarship is not an intervention in AI ethics as such. His book Gathering Electronic Waste in Tanzania: Labour, Value and Toxicity offers a sustained ethnography of e-waste economies, repair practices, and informal labour in Dar es Salaam. It documents how discarded electronics are gathered, dismantled, repaired, and resold, and how value and risk are produced through these processes. The book shows how people working with e-waste navigate toxicity, economic uncertainty, and infrastructural absence as part of everyday life, and how maintaining technological viability becomes inseparable from sustaining livelihoods. Rather than advancing normative claims, the ethnography describes how judgement, responsibility, and exposure to harm are worked out in practice under conditions of constraint.

For AI ethics, the question is not whether such work can be absorbed into existing frameworks, but what it would mean to take these sites seriously as sources of ethical insight. Translation, in this sense, is not a matter of reframing repair as a metaphor for sustainability or better governance. It is a methodological problem that forces attention to how AI ethics defines ethical reasoning, where it locates it, and whose experiences it treats as conceptually significant.

Much of the critical literature on AI ethics has already begun to address materiality. Scholars have traced the environmental costs of data centres, the extractive politics of mineral supply chains, and the planetary consequences of large-scale computation. Kate Crawford’s Atlas of AI, for example, situates AI within infrastructures of extraction, labour, and disposal, arguing that AI systems are material and ecological formations rather than abstract computational ones. Similar concerns appear in work on platform economies, energy-intensive computation, and digital colonialism. Yet even as this material critique expands, AI ethics often remains oriented toward upstream moments of design, governance, and regulation, where ethical reasoning is assumed to take place in anticipation of harm.

This is where the difficulty of translation becomes clear. Ntapanta’s ethnography does not argue for an ethics of repair, nor does it present repair practices as moral models. What it shows instead is how people involved in repair and salvage make ongoing judgements under conditions shaped by toxicity, limited tools and resources, and responsibilities to others. Decisions about what to fix, dismantle, or discard involve weighing risk, responsibility, and survival in environments where technological breakdown and exposure to harm are normal features of everyday life.

Taking these practices seriously would require AI ethics to broaden its understanding of ethical reasoning. Much of the field focuses on preventing future harm through better design or regulation. Ntapanta’s ethnography draws attention to the judgement that takes place once prevention has already failed, when people must decide how to live with the consequences of technological systems rather than how to avoid them altogether. This does not replace existing ethical concerns, but it complicates them by foregrounding, among other things, unequal exposure to risk.

This complication is especially salient in African contexts, where AI ethics agendas are often articulated through national strategies, governance frameworks, and global declarations. While these initiatives play an important role, they risk reproducing a gap between ethical discourse and the lived realities of technological afterlives. Repair and salvage practices sit downstream of innovation, but they are central to how technological systems persist, adapt, and distribute burdens unevenly across the globe. Treating them as peripheral reinforces an understanding of ethics that remains detached from material consequence.

Thinking with Ntapanta’s work suggests that translation into AI ethics requires more than expanding the list of ethical concerns. It requires a shift in how ethical reasoning itself is understood. Rather than locating ethics primarily in anticipation, abstraction, and control, such a shift invites attention to forms of judgement shaped by maintenance, survival, and ongoing exposure to harm. The question, then, is not how repair can be added to the AI ethics agenda in Africa, but whether that agenda is prepared to learn from practices that unsettle dominant assumptions about innovation, progress, and responsibility, and to allow those practices to reshape what counts as ethical reasoning in the first place.

Watch the recording of the webinar: