From Transcendence to Dependence: Who Do We Become When We Outsource Ourselves?
Transhumanists imagined uploading our minds into machines as the ultimate dream of transcendence. Whilst that imaginary may never materialise, in a quieter, more everyday way, one could say we are already doing it by outsourcing our memory, our reasoning, even our empathy to AI. The result isn’t immortality or superintelligence, but something stranger: a slow hollowing-out of the very faculties we once thought we had a duty to cultivate.
This provocation comes after reading a recent MIT Media Lab study with the unsettling title Your Brain on ChatGPT. The researchers found that people who leaned on large language models to help with essay writing gradually showed weaker originality, less recall of their own work, and even measurable cognitive strain. What struck us was the paradox. AI is sold to us as an enhancer of human potential — a tool to make us sharper, more productive, more insightful. Yet here was evidence suggesting the opposite: that our mental capacities might be thinning out precisely when we outsource them to a machine. It got us thinking about our own experiences too — the times when AI has felt like an amplifier, accelerating our ideas, but also the moments when we realized we were skipping the hard work of grappling with them ourselves. That paradox foregrounds a provocation: if AI is meant to enhance us, why does it so often erode the very faculties we have a duty to develop?
The Moral Duty to Develop Our Faculties
We’ve long been told that morality isn’t just about treating others fairly. It’s also about cultivating ourselves. Philosophers like Aristotle, Kant, John Stuart Mill and Motsamai Molefe have all insisted that we have a duty to develop our faculties: our reason, our intellect, our imagination, even our capacity for empathy. To neglect those capacities is, they argued, to neglect what makes us human.
For these philosophers, the moral duty to develop our faculties is shared work and never just an individual affair. While the individual must exercise and refine their capacities, the community and the state must create the conditions for that cultivation — through, for example, law, education and shared moral life. Aristotle saw the polis as the space where virtue is formed; Kant thought the state should secure conditions for freedom and the public use of reason. Mill argued that public education should open, not constrain, individuality; and Molefe reminds us that personhood itself is realized through community. The duty to develop our faculties, then, is not merely self-improvement, but a moral ecosystem that we build together.
But what happens when the technologies we design start doing the cultivating for us or, worse, replacing it?
The Temptation of Outsourcing
Tools like ChatGPT feel like cognitive Swiss army knives: they summarize, draft, problem-solve, and explain. They aren’t just answering trivia questions; they are taking over a wide range of human faculties. Why recall facts when a chatbot will pull them up instantly (memory)? Why wrestle with a tricky argument when the machine can produce a fluent one on demand (reason)? Why stare at a blank page when the system can generate endless ideas or even entire stories (creativity)? Why struggle with an ethical dilemma when AI can summarize “what most people say” or list pros and cons (moral discernment)? And much like Wikipedia before them, these technologies give us information so instantly and fluently that it’s hard not to take their word for it. The path of least resistance is to outsource our thinking: why wrestle with an idea when the machine can do it for us effortlessly?
Or can it? When we wrestle with language, it’s usually not because the language itself is difficult, but because the words we use connect to emotion, desire, and imagination. Take a grievance message, for instance – say, an email you send to a colleague who behaved in a way that was hurtful. Such a message isn’t just about transmitting facts. It’s about finding words that convey pain, frustration, or dignity in a way that others can feel. That process is part of the moral work of empathy: imagining how the other will hear you, wrestling with your own emotions, deciding what tone carries both truth and respect. That work also links to ethics and integrity – as we write, we have to decide whether to retaliate with words that wound (but hurt us in the process), to extend a peace offering (and retain the moral high ground), to demand justice, or to take a different route altogether. The words we use are not just words; they reflect our moral worlds and human emotions.
Now imagine outsourcing that task to a chatbot. The system can generate a polished, professional-sounding message in seconds. Efficient, yes. But what happens to the human work of translating raw emotion into moral expression? If machines take over those moments, we risk losing practice in empathy itself — not just the skill of writing, but the deeper capacity to engage with our own and others’ feelings.
In their paper, Kosmyna and co-authors explored brain activity only through EEG signals, finding that brains are more active when they generate original essays. Whilst the long-term, structural effects of what they observed on the brain’s neural pathways are not known, could it be possible that the areas of our brain that correspond to morally relevant traits – like compassion, empathy and reasoning – become similarly unpractised? And that, bit by bit, we risk atrophying the very human capacities that we once thought we had a duty to sculpt and strengthen? And if that means we think less, remember less, and reason less, is that just a personal failing? Or does it point to something bigger?
Rethinking Responsibility
It’s tempting to frame the risk of “cognitive decline” as an individual moral failing. After all, can’t we just choose to use ChatGPT responsibly and treat it as a partner in thought rather than a replacement?
It’s easy to say this is a problem of self-control: don’t lean on the tool too much; keep your faculties sharp. But this ignores how design pushes us. Tools optimized for speed and fluency nudge us toward passivity. Defaults that give us polished answers discourage questioning. Interfaces that hide sources or reasoning steps make critical thought harder. Addictive algorithms that tap into the brain’s innate reward systems mean we constantly come back for more. We shouldn’t be surprised when people take the easy path. The tools are built to make outsourcing the natural option.
If cultivating our faculties is a moral imperative, should designers share that responsibility? The way AI is built shapes whether we use it to exercise or replace our capacities. Imagine if chatbots didn’t just hand over answers but nudged us to compare, question, and reflect. The moral work of cultivation would be built into the design.
From Responsibility to Accountability
Beyond individual and design responsibility lies the harder question of accountability. In addition to asking how we might use these tools responsibly, we must also ask who shapes them, who profits from them, and who makes our dependence feel inevitable. If corporations, in pursuit of profit, engineer en masse a populace increasingly dependent on AI tools that dull its faculties, then the moral failure is not just personal but structural. Here, perhaps something could be learned from how empires have historically manufactured dependency. In Smoke and Ashes: Opium’s Hidden Histories, Amitav Ghosh reminds us how the British Empire deliberately cultivated opium addiction in China as a strategy of commercial warfare and domination — creating demand, suppressing resistance, and cloaking exploitation in the moral gloss of “free trade”, “progress” and “civilisation”. The comparison isn’t literal, but the logic is familiar: dependence as design, the erosion of faculties and reshaping of desire as profit, and moral justification as cover. At face value, the contexts differ profoundly. The opium trade was a project of imperial domination that destroyed bodies and nations, while today’s AI economies seem to target cognition and attention rather than flesh. Yet the boundary between cognitive and material harm is porous. In some domains, AI systems do more than dull our capacities: they distribute violence through wrongful arrests by facial recognition, biased welfare scoring, or algorithmic amplification of hate. Perhaps the question is not only how we use these tools responsibly, but how we hold to account the systems that profit from our unthinking use and make violence scalable.
The Humans Our Tools Are Training Us to Be
If we are outsourcing memory, reasoning, creativity, and even moral judgment to AI, the risk isn’t only cognitive decline. It is the slow normalization of dependency, built into the design of our tools. And when those tools begin to mediate not just thought but harm—reproducing bias, scaling violence, shaping what we see and feel—the question can no longer rest on individual restraint. We need to ask not just what kind of human beings our technologies are helping us become, but what kinds of worlds they are quietly building in our name. The challenge, perhaps, is to imagine tools that return us to the work of cultivation: of thought, empathy, and responsibility, rather than relieving us of it.