Report by David Warden, based on talks by Kate Devlin to Dorset Humanists and Humanists UK
Dr Kate Devlin is Reader in Artificial Intelligence & Society at King’s College London and author of Turned On: Science, Sex and Robots (2018). She is a Patron of Humanists UK.
The applications of AI
Kate started her talk for Dorset Humanists with some rhetorical questions: 'Are we all going to die? Will we turn into machines? Or can we live in harmony with artificial intelligence?' The applications of AI include robot vacuum cleaners, bomb disposal machines, autonomous weapons, machines which will be able to carry out surgery better than humans, and machines which can read your emotions.
Kate made a distinction between AI and robots: AI is analogous to the human brain, robots to the human body. She also distinguished 'narrow AI' (machines that can do a single task, such as playing chess) from 'deep learning' systems capable of mastering abstract strategy games like Go. Artificial intelligence needs a lot of data, which it can then use to make connections much more rapidly than humans can. Deep learning is characterised by artificial neural networks, which loosely mimic the human brain.
"...one customer service chatbot became increasingly fascistic by absorbing and reinforcing the biases of the humans it was chatting with."
Computers are still not very good at telling the difference between a cat and a dog, although Google has taught its algorithms to recognise cats. Digital assistants such as Alexa, Google Assistant, Siri, and Cortana are becoming increasingly popular, and companies are increasingly using 'chatbots' for customer service. Kate told us about one chatbot that became increasingly fascistic by absorbing and reinforcing the biases of the humans it was chatting with.
Companion care robots can help people to live independently in their own homes. Japan is a world leader in this sector. A robot called Robear can help with moving and handling but users need to ensure that it is fully charged to avoid accidents! Companion robots can have applications to help patients with dementia and children with autism. Robot pets can be therapeutic and they leave no mess! We can establish empathy quite easily with such machines.
Robot ethicists are concerned, however, that this amounts to a form of deception. Should we put such robots in a shell that looks human? Could such machines have non-human sentience? It’s very hard to identify what consciousness is in any case. Would care robots deprive people of real human contact or can we benefit from both? If we keep old people in their homes will this exacerbate the housing crisis?
"Will sex robots fuck us to death?"
One of Kate’s research topics is ‘intimate relationships with technology’. Can we have love, companionship, attachment, and sex with machines? There is a new wave of sex technology which is of particular interest to disabled communities and for long-distance relationships. Life-sized, hypersexualised silicone female robots are not yet commercially available, but prototypes are being developed. This raises another set of ethical questions. There is a worrying emphasis on the female form - can a robot be raped? Would this perpetuate sexual violence? Kate claimed that violence in computer games hasn’t spilled over into real life. But should we ban something simply because we find it distasteful? Some people claim to be happier with non-human companions, but Kate also asked whether human-like robots freak us out because they look like dead bodies.
Then there are the dangers of hacking: sex robots could reveal your deepest perversions to complete strangers. Some researchers are concerned that ‘sex robots may literally fuck us to death’ because, unlike human partners, they don’t get tired.
Kate reminded us that machines are already building a detailed profile of your life. All such data is biased, however, because the people who can’t afford the technology are not being represented.
Transhumanists are asking whether, in the distant future, we will merge with machines to become cyborgs. We are already using pacemakers, contact lenses, and other bits of technology to enhance our bodies’ capabilities. In the future will an elite group of humans have brain implants? Those who can afford it will upgrade but will this create a two-tier system? The future may be exciting but we need to keep an eye on it.
Despite concerns and some scary scenarios, Kate is an optimist. She reminded us that Socrates thought that writing would be the death of memory. We asked the audience if they were optimistic and excited, pessimistic and terrified, or sitting on the fence. A few were pessimistic and terrified and the rest were split between optimism and sitting on the fence.
“There are amazing people all around the world doing incredibly good things with AI.”
At the Humanists UK Convention in Belfast, Kate gave us a quickfire introduction to AI and its many problems:
If you have a smartphone you are probably using AI. When you talk to your voice assistant and tell it to do something, that’s AI; AI gives you viewing recommendations based on your previous viewing habits, and it calculates routes for you if you use satnav. If you buy a lawnmower online, AI will recommend twenty more lawnmowers for you. It’s not all that intelligent! I’ve got a robot vacuum cleaner. It’s not brilliant, but it goes out and scurries around. There’s a little bit of AI in there. There’s no agreed definition of what AI is, but we kind of know what it is. None of the AI we have today is sentient or conscious, nor does it have any general intelligence. It can only do one task. It can’t do abstract thought. Machine learning is a subset of AI.
"There is no unified standard of ethics for AI because we all have different standards of ethics."
We may think that machines are neutral, but human bias can become ingrained in AI software. Cultural norms can be baked in, so software may fail to look beyond a Western viewpoint. For example, if AI is used as a recruitment aid trained on historic successes, those historic successes may all turn out to be white men. Biases creep in during data collection, even when we think we are trying to be fair, and people then act on those biased results, creating a harmful feedback loop.
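The feedback loop described above can be made concrete with a toy sketch. The data and the "model" here are entirely hypothetical, invented for illustration: a system that simply learns from biased historical hiring records will score equally qualified candidates differently by group, and acting on its scores makes the historical record even more skewed.

```python
# Hypothetical historical hiring records: (group, hired).
# Candidates in both groups are equally qualified, but group "B"
# was historically hired far less often.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def hire_rate(records, group):
    """Fraction of candidates from a group who were hired."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# A naive "model" that learns each group's historical hire rate will
# favour group A, despite equal qualifications in the data by construction.
model = {g: hire_rate(history, g) for g in ("A", "B")}
print(model)  # {'A': 0.75, 'B': 0.25}

# Acting on those scores (hire only if score > 0.5) adds new biased
# records to the history, closing the feedback loop.
new_records = [(g, model[g] > 0.5) for g in ("A", "B")]
history.extend(new_records)
print(hire_rate(history, "B"))  # B's hire rate falls further, to 0.2
```

The point is not the arithmetic but the loop: biased data produces biased recommendations, and decisions made on those recommendations become the next round's biased data.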
Facial recognition is a problem: if you have darker skin, you are far more likely to be misidentified by the algorithm. Facial recognition technology is being used in very negative ways in China and Russia. There is no unified standard of ethics for AI because we all have different standards of ethics. For example, how would self-driving cars solve the classic trolley problem? AI has this dark side, but the upside is that if self-driving cars are introduced they will massively reduce road traffic accidents. Medically, it’s been outstanding: AI outperforms human radiologists in the detection of tumours, and it’s really useful in disaster management. It’s transforming agriculture - it can be used to detect soil conditions for optimal planting. There are amazing people all around the world doing incredibly good things with AI. We need to be cautious, but the super-intelligent robot uprising is way down the line. Be wary, though, about what’s happening right now and the way it’s disadvantaging people.
Kate's talk for Dorset Humanists was on Darwin Day, 2018, at Bournemouth International Centre. Her talk at the Humanists UK Convention in Belfast was in June 2022.