
By Mike Flood
Chair of Milton Keynes Humanists and the Future of Humanism Group
In this article, Mike asks what impact AI might have on humanism and humanist thinking, in particular on human rights and privacy, and he calls for humanists to step up and make our voices heard on the ethical issues arising from this rapidly developing technology.
“Despite its name, there is nothing ‘artificial’ about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.” Dr Fei-Fei Li, Co-Director of Stanford University's Human-Centered AI Institute
It has been said that the development of AI poses the greatest threat to faith-based thinkers since the publication of Darwin’s On the Origin of Species. But what impact might it have on humanism and humanist thinking, and in particular on our human rights and privacy, and what it means to be human?
Some years ago, the World Economic Forum identified a number of issues that “keep AI experts up at night”. They include:
How do we distribute the wealth created by machines?
How do machines affect our behaviour and interaction?
How can we guard against mistakes?
How do we eliminate AI bias?
How do we keep AI safe from adversaries?
How do we protect against unintended consequences?
How do we stay in control of a complex intelligent system?
How do we define the humane treatment of AI as systems become more complex and life-like?
This is just one of many attempts people have made to explore the diverse range of ethical issues associated with the development of AI. The late Chief Rabbi, Lord Sacks, spoke of this as one of 'the most pressing moral issues of our time', and Reith Lecturer Stuart Russell called it 'the most profound change in human history'. Bishop Steven Croft expressed his concerns in Christian Today, when he noted that 'every development in Artificial Intelligence raises new questions about what it means to be human... Christians need to be part of that dialogue, aware of what is happening and making a contribution for the sake of the common good.' Bishop Croft sits on the House of Lords Artificial Intelligence Committee which, in 2018, published a much-acclaimed report on AI. Among other things, the report argues that AI 'should be developed for the common good and benefit of humanity'; it should 'operate on principles of intelligibility and fairness', and should 'not be used to diminish the data rights or privacy of individuals, families or communities.' It concluded that 'Autonomous power to hurt, destroy or deceive human beings should never be vested in AI.'
Advances in AI that are being predicted for 2023 include robots 'develop[ing] the ability to converse, entertain, and even provide companionship to their owners, engaging in natural conversation and becoming an integral piece of the home' and 'artificial intelligence finally emerg[ing] as an essential and everyday tool for scientists across domains and disciplines... Just as millions of office workers today rely on email and word processors, scientists will begin to rely on machine-learning models and AI systems in the same way', with some things becoming 'as effortless as a Google search'. There are also reports that Microsoft is planning to integrate OpenAI’s ChatGPT, described as a sort of 'autocomplete on steroids', into Word. Indeed, one observer has suggested that the speed at which AI tools are passing from advanced research to everyday products 'may be unparalleled in tech history'.
So how might these advances impact on humanist services and campaigns, such as pastoral care and assisted dying, or the quest to live ‘happier, more confident, and more ethical lives’? And how might AI enthusiasts’ cavalier use of terms like ‘humanistic AI’ (see my companion article in this edition of Humanistically Speaking) affect our brand and messaging? This requires careful thought. For example, there’s much talk about human-friendly robots providing protection and a respectful, chatty and uncomplaining companion for the frail and elderly. But how do you ensure that a robot won’t one day persuade a vulnerable person that he or she is a ‘nuisance’ and ‘in the way’? GPT-3 (ChatGPT’s predecessor) has already suggested suicide to one punter, and Amazon’s Alexa is reported to have encouraged a child to put a penny in an electrical outlet. Moreover, as AI gets smarter and smarter, 'it will be easier to trick people — especially children and the elderly — into thinking the relationship is reciprocal.'
We should also be concerned about the threat to human rights and privacy. For example, did Google cross an ethical line when it demonstrated a device that could chat on the phone so naturally that people believed they were speaking to a human operator? And what about the use of Facial Recognition Technology, which President Xi has deployed so effectively in China with his Orwellian social credit system? This must be a strong contender for one of AI's most poisonous fruits — along with lethal autonomous weapons which are, right now, being deployed in Ukraine.
Other intriguing Janus-like AI ‘fruits’ include attention-seeking ‘black box’ algorithms that have so successfully facilitated the spread of lies and misinformation on social media, and deepfakes, which challenge our ability to know what is real and true. And let’s not forget cheating as well as serious criminality involving AI. Teachers are already struggling to work out how to discourage and prevent students from using ChatGPT to cheat in assignments, and reports are beginning to emerge of the software being used to write malicious code for ransomware and other evils. Such developments have led to growing calls for AI developers to be required by law to conduct Life Cycle Human Rights Impact Assessments of their products before they are released, and for social media bosses to be held to account for what appears on their platforms. And is it too much to ask that companies that develop and promote increasingly life-like AI goods and services should be required by law to see that their creations take account of basic humanist values such as tolerance, understanding and compassion?
In our Future of Humanism Manifesto we have suggested that humanist organisations make contact with groups working on or concerned about AI to explore the possibility of partnership. Here in the UK there’s no shortage of possible suitors: we might start with the Forum for Ethical AI (DeepMind/RSA), the Chatham House Digital Society Initiative, and the University of Oxford Institute for Ethics in AI which is running programmes on ‘AI & Human Rights’ and ‘AI & Human Well-Being’. It’s clearly time for humanists to be making our voice heard on these important issues.
Read the Future of Humanism Manifesto here
References
How to Make A.I. That’s Good for People by Dr Fei-Fei Li (New York Times – limited access)
Is AI a Threat to Christianity? Are you there, God? It’s I, robot. By Jonathan Merritt
World Economic Forum Top 9 ethical issues in artificial intelligence
Ethical issues: See, for example, the ‘Asilomar AI Principles’; the proposal that AI practitioners take an oath analogous to the Hippocratic Oath; and the European Commission’s Ethics Guidelines for Trustworthy AI.
BBC Radio Four to air Rabbi Lord Sacks' series on morality
Stuart Russell Reith Lectures
Bishop Steven Croft 10 Commandments on AI
House of Lords Report on AI
Predictions for 2023: quotations are from Amazon’s Ken Washington and DeepMind’s Pushmeet Kohli
Microsoft Plans To Add ChatGPT To Office 365 report
Microsoft’s $10bn bet on ChatGPT Irish Times
Humanists UK assisted dying
Humanists UK: The quest to live ‘happier, more confident, and more ethical lives’
UnHerd human-friendly robots
Suicide suggested by GPT-3
Alexa reported to have encouraged a child to put a penny in an electrical outlet
As AI gets smarter and smarter (companion robots)
Google device that can chat on the phone
Chinese social credit system
lethal autonomous weapons being deployed in Ukraine
Attention-seeking ‘black box’ algorithms and deepfakes
Using ChatGPT to cheat
Malicious code and malicious use of AI
Economist: AI must explain itself