Artificial Intelligence - should we be worried?

By Maggie Hall, former Chair of Brighton Humanists

Maggie Hall looks at a few pros and cons regarding AI and wonders to what extent we should be wary.

"The development of full artificial intelligence could spell the end of the human race." Stephen Hawking

A fragment of the Antikythera mechanism (Creative Commons)

The idea of 'a machine that thinks' dates back as far as ancient Greece. In 1900, an unremarkable lump of bronze was recovered from an ancient shipwreck in the Mediterranean, near the small island of Antikythera. It turned out to be an ancient Greek astronomical calculating machine, now known as the Antikythera Mechanism, and it completely changed historical thinking about the technology of ancient Greek civilisation. The complexity of the mechanism revealed technical abilities far exceeding anything that had previously even been suspected of that civilisation. It is probably the earliest example we have of an analogue computer, and it represents a very early step in computing technology, one that might now be seen as culminating in today's rapid developments in the field of artificial intelligence.

However, new technology always carries with it an element of fear of the unknown. Many people were scared of the new-fangled telephone, thinking they might receive a shock from its wiring, that it might explode, that other people were listening in to their conversations, or that if they stood near one in a thunderstorm they might be hit by lightning. Similarly, in the early days of passenger rail travel it was thought that the motion of the train could 'injure the brain' and induce madness in some people, or that travelling at 50 miles per hour would cause women's uteruses to fly out of their bodies. When public radio broadcasts were introduced, even Marconi himself was a little worried, as when he had invented his 'wireless technology' he had intended it only to improve communication between ships at sea. There was a fear that people would cease to read or have meaningful conversations, a fear that was soon to become attached to television and the internet. Today, some people might think that these fears were not entirely unfounded.

Technophobia still lives among us. There are people I know who refuse to own a computer, won't use online banking, and who view the phasing out of cash transactions as part of some insidious conspiracy to control us all. I have to confess that I myself have never yet used an automatic checkout. I just want to deal with a fellow human being, not a machine. Or perhaps it boils down to a fear of being guilty of having an unexpected item in the bagging area.

‘I have exposure to the very cutting-edge AI, and I think people should be really concerned about it.’ Elon Musk

Japanese robot ASIMO at Expo 2005 (Creative Commons)

Science fiction is populated with out-of-control robots and computers that get above themselves, like HAL in 2001: A Space Odyssey. Indeed, even leaders in science and technology have expressed concern. Elon Musk told attendees at a meeting of the National Governors Association in 2017, ‘I have exposure to the very cutting-edge AI, and I think people should be really concerned about it.’ And the late Stephen Hawking told the BBC in 2014 that ‘the development of full artificial intelligence could spell the end of the human race.’ Personally, I worry less about rogue robots careering murderously down the high street than I do about what some future version of Hitler or Pol Pot might do with them.

The potential benefits of AI are undeniable, not least in the field of health care. Recent studies have demonstrated how AI can improve brain imaging to predict the earliest stages of Alzheimer’s disease. This is significant because the earlier treatment begins, the better the prognosis for the patient. In another example, enhanced image-recognition algorithms have proved useful in the diagnosis of skin cancers. Lives could also be saved on our roads by self-driving cars, since most fatal accidents are caused by human error. Clearly, though, there are ethical concerns here. For example, whom should a self-driving car protect in the event of an accident?

One problem that has been exacerbated by artificial intelligence is cheating by students. Plagiarism has been a problem since the early days of the internet and before, but now there are AI programs like Wolfram Alpha, which uses artificial intelligence to solve equations perfectly and untraceably. New York City public schools have blocked access to the popular artificial intelligence tool ChatGPT over concerns that students could use it to write papers. This Guardian op-ed piece highlights the problem. Even Humanistically Speaking is not immune. One of our guest contributors this month is ChatGPT, although co-author Daniel Dancey has helpfully explained when ChatGPT is speaking and when he is. ChatGPT seems pretty good at writing, so is it only a matter of time before the Editor sacks all the human writers?

Robots have long been used in industries like car manufacture, but the technology seemed to have over-reached itself somewhat when a Japanese hotel tried to use them as staff. In-room virtual assistant robots, which were supposed to provide personal assistance to guests, ended up being hacked and turned into ‘peeping toms’. Sometimes they responded to guests’ snoring, and there were frequent problems with voice recognition, as they couldn’t always cope with a variety of accents. The reception robots sometimes failed to understand guests’ names, and the luggage-carrying robots couldn’t carry luggage. The robots broke down so often that human staff had to work overtime to keep them repaired. The hotel ended up dispensing with at least half of the robots and replacing them with good old-fashioned human beings.

In 2018 Amazon scrapped an AI recruiting tool because it showed bias against women. Its computer models had been trained on applications submitted over a ten-year period, most of which had come from male applicants, reflecting male dominance in the tech industry. The result was a bias against female applicants.

Any Facebook user will know about the difficulties arising from its algorithms wrongly identifying posts as violating its community standards. An animated video explaining how to conduct a breast examination was taken down with the explanation, ‘Your ad cannot market sex products or services nor adult products or services’. The historic image of a naked girl fleeing a napalm attack during the Vietnam War was censored by Facebook because of her nudity. I’m still trying to work out what was wrong with my post of a photo of a real cat on which I commented that it looked a bit like Bagpuss. It was just a cat!

One thing that troubles me about AI is that it rather caters to conspiracy theorists who think that every advance in technology is a sneaky way of controlling us all. Another is that if we keep letting machines do more and more of our thinking, are we in danger of forgetting how to think for ourselves? Those of a more pessimistic nature might think that this has already been happening for some time. Then there is always the fear that robots might end up being more intelligent than their creators. As my fellow Trekkie and colleague on Humanistically Speaking, Aaron, will undoubtedly know, my favourite Star Trek character, Mr Spock, once said, ‘Superior ability breeds superior ambition.’ He was referring to eugenics, but perhaps an eye should also be kept on AI.

Trains were once considered hazardous to one's health. / Wikimedia Commons // Public Domain
