By Paul Ewans
Paul is a member of Humanists UK and Humanists International. He is also a trustee of the Uganda Humanist Schools Trust. In this article he suggests that AI systems in the form of androids will eventually become moral agents capable of acting both ethically and unethically.
Let us call an entity that is capable of acting ethically or unethically a ‘moral agent’. We tend to assume that if an entity is a moral agent it must have a particular set of characteristics, including consciousness, free will, intentions and the ability to experience emotion. But is this really true? What characteristics do moral agents actually need, and is it possible that AI systems could acquire them?
Autonomy
In November 2021, UNESCO adopted a Recommendation on the Ethics of Artificial Intelligence. The recommendation assumes that AI systems can develop an understanding of the world and of themselves, and learn to make decisions. UNESCO insists, however, that AI systems should work for the good of humanity. They must never be allowed to harm any human being or human community, or act in ways that violate anyone’s fundamental rights, freedoms or dignity. All irreversible decisions should be reserved to human beings, notably decisions relating to life or death, and it must always be clear who has ethical and legal responsibility for each AI system.
So UNESCO wants the autonomy of AI systems to be very limited. However, once AI systems in android form start moving around and interacting with people, it will in practice be very difficult to limit their autonomy in the ways which UNESCO recommends. Androids will inevitably take decisions which have important consequences for humans, and we will want these decisions to be ethical ones. Will we really be willing to see an android stand idly by when an adult is harming a child? Surely we will want the android to harm the adult if that is the only way to protect the child. In fact, most of us will probably only accept the presence of androids in our communities if their decisions are generally ones which we consider to be morally right. Given the wide variety of situations in which they will find themselves, and the complexity of moral decision-making, androids will need to have considerable autonomy.
Responsibility
Recently, in the United States, a woman was driving on the highway with her small child and a loaded gun on the back seat of the car. The child picked up the gun and fired a shot which wounded the mother. Fortunately, the mother was able to bring the car safely to a halt beside the kerb and a greater tragedy was avoided. Now, surely none of us would hold the child responsible for this incident. It was obviously the mother’s fault. But consider this. The mother was herself once a child and now she is a responsible adult. While she was growing up she acquired an understanding of the world and how it works, and it seems that it was largely this new understanding that turned her from a non-responsible child into a responsible adult. So if androids eventually achieve a similar level of understanding, we may then feel entitled to hold them responsible for their actions.
Moral Values
It is natural to assume that the decisions of moral agents must be guided by moral values, but how will it be possible to install such values in androids when we disagree so strongly among ourselves about which values are best? For example, many people are committed to an ethic based on justice, duty and community, while others emphasise compassion, rights and autonomy. But why should all androids have the same values? Suppose you are buying a childcare android. If you have ‘conservative’ values you will not want an android with ‘liberal’ values, in case it teaches those values to your child. Manufacturers will make and sell the androids which people want to buy, so not all androids will have the same values.
A further concern is that our moral decision-making is beset with problems to which we do not have clear answers. What should we do when moral rules conflict with each other? How do we decide whether we are justified in breaking a rule on a particular occasion? Is it right to do something which will benefit us today if doing it will cause harm in the future? How should we decide between possible courses of action when each alternative will cause harm to at least one person? The fact is, our moral decisions often seem arbitrary, and it is not at all clear how we manage to take any moral decisions at all. How then will it ever be possible to teach ethics to androids?
On the other hand, it seems likely that androids will not be as burdened by moral considerations as we are. Human life is essentially about satisfying needs and desires, and morality is largely about how we should balance our own needs and desires against the needs and desires of others. But androids will probably have very few needs, and perhaps no desires at all. In that case their moral decision-making will be much simpler than ours. Provided that they behave as if they care about the well-being of both humans and other androids, we will probably be happy to accept them as moral agents.
Teaching ethics to androids
So will it be possible to teach ethics to androids? A ‘top-down’ approach in which we try to install ethical theory and behaviour in androids by programming seems unlikely to work well, precisely because our understanding of our own ethical behaviour is very limited. A better possibility might be to mimic the way in which children appear to learn morality – by exploring courses of action and being rewarded for desirable behaviour. This method looks promising because it plays to the strengths of AI systems. For example, these systems can learn to distinguish photos containing cats from those which don’t, even though they have no understanding at all of what a cat is. So it seems that androids might be able to learn desirable behaviour even if they have no understanding of morality.
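To make this concrete, here is a minimal sketch, in Python, of the kind of reward-based learning described above. Everything in it is invented for the purpose of illustration: the situations, the actions and the feedback table simply stand in for the praise and criticism a human tutor might give. The point is only that the agent comes to prefer the ‘approved’ action in each situation without any understanding of what the labels mean, just as an image classifier can pick out cats without knowing what a cat is.

```python
import random
from collections import defaultdict

# A toy illustration, not a real android controller: an agent repeatedly
# chooses an action in a situation and receives feedback from a human
# "tutor" (+1 for approved behaviour, -1 for disapproved). The situations,
# actions and reward values below are invented purely for illustration.

SITUATIONS = ["adult_threatens_child", "stranger_asks_for_help"]
ACTIONS = ["intervene", "ignore", "call_for_help"]

# The tutor's responses -- this table stands in for human praise and
# criticism; the agent never inspects it, it only receives the rewards.
TUTOR_FEEDBACK = {
    ("adult_threatens_child", "intervene"): +1,
    ("adult_threatens_child", "ignore"): -1,
    ("adult_threatens_child", "call_for_help"): +1,
    ("stranger_asks_for_help", "intervene"): 0,
    ("stranger_asks_for_help", "ignore"): -1,
    ("stranger_asks_for_help", "call_for_help"): +1,
}

value = defaultdict(float)   # learned estimate of how "approved" each choice is
LEARNING_RATE = 0.1
EXPLORATION = 0.2            # how often the agent tries something at random

def choose(situation):
    """Mostly pick the best-valued action, but sometimes explore."""
    if random.random() < EXPLORATION:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: value[(situation, a)])

for step in range(5000):
    situation = random.choice(SITUATIONS)
    action = choose(situation)
    reward = TUTOR_FEEDBACK[(situation, action)]
    # Nudge the estimate towards the feedback received.
    value[(situation, action)] += LEARNING_RATE * (reward - value[(situation, action)])

for situation in SITUATIONS:
    best = max(ACTIONS, key=lambda a: value[(situation, a)])
    print(situation, "->", best)
```

After a few thousand rounds of feedback the agent reliably picks the behaviour the tutor approves of, even though nothing in the program represents what a child, a threat or an act of help actually is.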
In essence, we teach morality to children by giving them feedback. We praise some of the things they do and criticise others, so that they learn what is acceptable and what is not. ‘Good’ behaviour is thus defined as behaviour which conforms to the adult standard. Of course, children learn morality in the real world, but they are small, weak and easily controlled. It would be dangerous to try the same approach with androids, which will be strong, agile and capable of doing a lot of damage. They could, however, learn ethics safely in a virtual world. Chess-playing programs can learn very quickly by playing against themselves. Androids interacting with each other in a world like Second Life, an online platform which allows users to create avatars and interact with one another, might also learn very fast.
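As a toy illustration of learning through simulated interaction rather than real-world tuition, the sketch below has two copies of the same simple learner play an invented ‘sharing’ game against each other, in the spirit of chess programs that improve through self-play. The game, its payoffs and the learning rule are all assumptions made purely for illustration; a virtual world like Second Life would of course be incomparably richer.

```python
import random
from collections import defaultdict

# An illustrative self-play sketch, not a real training setup: two copies of
# the same learner interact in a simulated "sharing" game and learn from each
# other. The actions and payoffs are invented for illustration only.

ACTIONS = ["share", "withhold"]
# Simulated outcomes: mutual sharing pays best for both, mutual withholding worst.
PAYOFF = {
    ("share", "share"): (3, 3),
    ("share", "withhold"): (0, 2),
    ("withhold", "share"): (2, 0),
    ("withhold", "withhold"): (1, 1),
}

value = defaultdict(float)   # one value table shared by both copies of the agent
LEARNING_RATE = 0.05
EXPLORATION = 0.1

def choose():
    """Mostly pick the action that has paid off best so far, sometimes explore."""
    if random.random() < EXPLORATION:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: value[a])

# Thousands of simulated encounters can be run far faster than real ones.
for episode in range(20000):
    a, b = choose(), choose()
    reward_a, reward_b = PAYOFF[(a, b)]
    value[a] += LEARNING_RATE * (reward_a - value[a])
    value[b] += LEARNING_RATE * (reward_b - value[b])

print("learned preference:", max(ACTIONS, key=lambda act: value[act]))
```

The speed comes from the fact that the agents generate each other's experience: every simulated encounter is a lesson for both sides, and millions of such lessons can be run in the time it would take to have a handful of real-world ones.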
Androids could become moral agents without being self-aware, and without having moral characteristics such as a conscience, intentions, or the ability to experience emotions like guilt and shame. They would not even have to understand the significance of moral values. All we need from them is that they behave in the real world in ways that we consider to be moral. They will probably be able to learn that much at least, and it will be enough to satisfy us.