Self-Driving Cars & Care Bots: Why ‘Independence’ Beats ‘Autonomy’ in AI Ethics
Can a self-driving car truly be ‘autonomous’? What does it mean for an AI to ‘govern itself’, or for a so-called ‘autonomous weapon’ to make life-or-death decisions? Autonomy, deriving from the Greek words αὐτός (autos), meaning self, and νόμος (nomos), meaning law, literally means self-law. It can also be interpreted as self-governance, self-legislation and even self-determination. For Immanuel Kant (1724–1804), autonomy is the “property of the will by which it is a law to itself (independently of any property of the objects of volition)” (G, 4:440), or, since any law must be universal, the condition of an agent who is “subject only to laws given by himself but still universal” (G, 4:432). But does this definition hold for machines? Or are we conflating independence, mere freedom from external control, with genuine autonomy?
What does this mean?
This means that autonomy, in Kantian terms, is the ability to act according to a law that one gives oneself rather than being driven by external influences or coercion. Whenever autonomy is invoked, associated notions such as morality, responsibility and accountability follow close behind. This is because to call an entity autonomous is also to hold that it should be capable of acting morally, able to take responsibility for its actions, and able to be held accountable for them.
Let’s take a human being, for example.
Humans are often regarded as uniquely autonomous because of their capacity for rational self-legislation, moral responsibility, and authentic agency, traits that are either absent or fundamentally different in most non-human entities. Humans possess this capacity because they can reflect on their actions, formulate ethical principles, and take responsibility for their choices. While some animals, such as primates, cetaceans, and other social mammals, demonstrate rudimentary forms of agency, moral behaviour, or even self-regulation, human autonomy remains distinct in its depth of abstract reasoning, normative self-governance, and the ability to justify actions through universal principles.
In contrast, robots and AI, no matter how advanced, currently operate within the confines of algorithms and external programming, meaning they lack the genuine free will necessary for autonomy. Beauchamp and Childress’s theory of autonomy is premised on the thesis that the choices of a competent entity are autonomous if they are intentional, demonstrate comprehension and free will, and are not controlled by internal or external influences (Beauchamp and Childress 2013:17). If that is right, then true autonomy presupposes that the entity said to be autonomous understands the principle of blame and praise: such an entity is blamed, and bears the consequences, for actions that harm others, and is praised for actions that merit praise, a capacity rooted in consciousness, ethical reasoning, and intentional self-governance.

If AI were ever to achieve some form of machine consciousness, a self-aware system capable of interpreting moral norms and revising its own ethical framework, the debate would no doubt shift to asking whether such non-human entities (animals aside) could meet the threshold for autonomy. For now this remains speculative, but it forces us to refine what we mean by ‘autonomy’ and whether it must be biological in origin. It also allows us to argue that, at present, autonomy cannot be extended to robots or AI machines, because their actions are merely the execution of programmed responses, without understanding or moral concern. While AI may exhibit independent functionality, allowing it to operate without human intervention, it does not, on the argument presented above, have the capacity to make self-determined ethical choices. A self-driving car, for example, follows pre-programmed traffic rules but does not deliberate on the moral weight of its actions. Such robots and advanced AI machines should therefore not be said to possess autonomy, but rather to be independent, because independence carries less normative weight and does not entail the moral considerations that autonomy does.
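To make that contrast concrete, here is a deliberately toy sketch of what “following pre-programmed traffic rules” amounts to. Every name and rule below is hypothetical and invented for illustration; no real autonomous-vehicle stack is this simple. The point is structural: the system maps inputs to outputs through conditions its designers wrote in advance, and nowhere does it ask whether an action is the right thing to do.

```python
# Toy illustration: a "self-driving" controller that only executes
# pre-programmed rules. All identifiers here are hypothetical.

from dataclasses import dataclass


@dataclass
class Observation:
    light: str           # "red", "amber", or "green"
    obstacle_ahead: bool  # whether a sensor reports something in the lane
    speed_kmh: float


def decide(obs: Observation) -> str:
    """Return an action by matching fixed, designer-written conditions.

    Note what is absent: the function never deliberates about whether
    braking or proceeding is morally right. It has no notion of blame,
    praise, or justification; it only maps inputs to outputs.
    """
    if obs.obstacle_ahead or obs.light == "red":
        return "brake"
    if obs.light == "amber" and obs.speed_kmh > 30:
        return "slow_down"
    return "proceed"


print(decide(Observation(light="red", obstacle_ahead=False, speed_kmh=50)))  # -> "brake"
```

However sophisticated the rules become, the structure stays the same: the “decision” is an execution of conditions supplied from outside, which is exactly why I describe it as independent functioning rather than autonomy.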
Independence has a complementary etymology: its Greek equivalent, ανεξαρτησία (anexartisía), combines αν (an), meaning not or without, with εξάρτηση (exártisi), meaning dependence or reliance, so independence literally means without dependence, freedom from external control, without necessarily implying self-rule or moral responsibility. Consider the care robots assigned to assist elderly individuals in nursing homes. Even though these robots can perform various tasks, from reminding patients to take medication to monitoring vital signs, they are not autonomous in the strict philosophical sense. They cannot decide, through moral reasoning, whether reminding a patient to take a medication is the right thing to do; they simply execute pre-programmed instructions and respond according to algorithms, not out of free will.
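A similarly minimal sketch shows the same pattern for the care robot. The schedule, times, and medication names below are invented for illustration, not taken from any real assistive system; what matters is that the reminder is triggered by a clock matching a hard-coded table, never by a judgement about the patient’s good.

```python
# Toy "care robot" reminder routine. Schedule and names are hypothetical;
# real assistive systems are far more elaborate, but the structure is the
# same: condition in, action out, with no moral deliberation anywhere.

from datetime import datetime

MEDICATION_SCHEDULE = {           # hard-coded by the designers, not chosen by the robot
    "08:00": "blood pressure tablet",
    "13:00": "vitamin D supplement",
    "20:00": "blood pressure tablet",
}


def check_reminders(now: datetime) -> list[str]:
    """Emit a reminder when the current time matches the schedule.

    The robot does not weigh whether reminding is appropriate for this
    patient today; it merely executes the table it was given.
    """
    key = now.strftime("%H:%M")
    if key in MEDICATION_SCHEDULE:
        return [f"Reminder: please take your {MEDICATION_SCHEDULE[key]}."]
    return []


print(check_reminders(datetime(2024, 5, 1, 8, 0)))
```

The robot operates without a human in the loop, which is independence in my sense; it does not legislate for itself, which is why the label of autonomy does not fit.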
So in a nutshell, here is my argument:
- P1: Autonomy requires self-legislation, moral responsibility, and genuine free will.
- P2: Humans possess these qualities because they can reflect, create ethical principles, and be held accountable for their actions.
- P3: Robots and AI lack self-legislation, moral responsibility, and free will because their actions are determined by programming and external inputs.
- C: Therefore, robots and AI machines cannot be said to have autonomy and should not be called autonomous; they are better described as independently functional.
In conclusion, I argue that the distinction between autonomy and independence is more than semantic: it shapes our ethical and legal frameworks. Calling AI “autonomous” is, at best, metaphorical and, at worst, philosophically misleading. If we mislabel AI as autonomous, we risk abdicating human responsibility for its actions, whether in warfare, healthcare, or governance. AI inventions are tools used by humans for human convenience, not autonomous entities that can make ethical and moral decisions freely, independent of algorithmic influence. True autonomy demands more than complexity; it requires a mind capable of answering for itself, an entity ready, or not, to take the blame or praise for its actions, a capacity that, among living organisms, humans uniquely possess. Or at least as far as we know.