By Aaron the Humanist
Aaron has been a Star Trek and Sci-Fi fan since he was in single digits, and he has often cited Star Trek as his main route to humanism. He is our theme co-ordinator and layout and design editor, as well as a writer. He has an Alexa at home for company, and has two-timed her on a number of occasions with a 'Hey Google' call to his phone.
They've been with us since the 1950s
Almost every colour Sci-Fi television series and film produced since the 1950s has featured a talking machine of one sort or another, and they are not going away anytime soon. Have they been a good or a bad influence on technological thinking in the real world? The original Star Trek didn't feature a talking machine character, but the ship itself was a talking, thinking entity. Star Trek also featured many bad guys who had been taken over by a super-machine that was in charge of them, whether for good or ill.
Modern-day Star Trek, perhaps the default standard of science fiction today, has portrayed artificial lifeforms as equal to humans. One of the lead characters is called Data. Data enjoys equality with the humans on board ship, whilst at the same time being able to employ his unique abilities and show his vulnerabilities as a sentient being. His character provides a dramatic contrast to many a rogue being and corrupt computer.
A fascinating and ground-breaking episode entitled The Measure of a Man (1989) questions the very essence of Data's android being. The Starfleet Science Institute wants to take him apart and examine him, to try to replicate him and make more like him. Data decides he does not want to undergo this procedure as it is not, in his opinion, sufficiently well developed to guarantee a successful outcome. He might be lost forever. But as he is not considered to be alive, the Institute overrules him, declaring him the 'property of Starfleet', and orders him to undergo the procedure. There is a hearing to determine whether Data is indeed alive, sentient, and self-determining. It's a great episode, and if you as a humanist watch only one, this is a good one to choose.
The Borg are a race of cyborgs in Star Trek who evolve by a process of assimilation. They take over lifeforms and merge them with cybernetic implants which connect them to the Borg’s hive mind. All individuality is lost, but the collective gains from everyone acting as one, being of one mind, one thought, one direction. Viewers hate the idea of losing the very essence of who we are as individuals, and the Borg are therefore considered to be a great foe. They even capture Captain Picard for a while, turning him into a drone. We watch in horror as his individuality slowly drains away, until there is barely anything left of him.
Yet Star Trek takes us even further than this. What if an established drone were to be disconnected from the hive mind? The series explored this when a lost drone was discovered, having crashed onto a planet in the year 2368. Should it be killed? The decision was made instead to capture and examine it. Slowly, the crew started to unravel what it was, and they concluded that a life is a life, even if all it wanted to do was kill them. The Borg drone was called Hugh! This particular episode created a sense of pity, sorrow, understanding and compassion for the 'nasty assimilation machine', as Hugh described himself, who in the end decided that he liked individuality after all.
Artificial Intelligence or Robots with personality?
Are these examples of Artificial Intelligence? The very term takes on a variety of meanings in this month’s edition of Humanistically Speaking. AI in itself may be thought of as a single entity that learns and grows. The Borg can certainly do this, but as a plugged-in collective. Of course, AI can, and likely will, exist as a series of plugged-in electronics, in much the same way that Alexa and Google are not just inside the box in your home or the phone in your pocket, but they are connected to a network. With every voice enquiry received, the hive mind learns and delivers a more accurate answer next time.
Further back in time, we had Buck Rogers, a science fiction adventure hero and comic strip feature, which gave us the cute robot Twiki and his companion, a sentient computer called Doctor Theopolis. Twiki was a primitive mechanical device which took most of its instruction from Dr Theo, although it did have some rudimentary individuality and learning ability. Dr Theo was a superior thinking machine, a member of a council of machines in charge of a robot-led human culture in a post-World War 3 apocalyptic scenario, where humanity had almost destroyed itself. The robots were in charge to prevent a repetition.
Good versus Evil
The 1979 science fiction horror film Alien and its sequels introduced us to several more AIs, first a bad one and then a good one. The bad one was programmed by 'The Company', which prioritised the mission over the crew; the mission in this case was the accumulation of money, motivated by greed. The AI android, called Ash, felt pity and remorse, but ultimately it (or he?) would follow orders to bring about the desired ends. Unlike Data in Star Trek, who was obviously an android, Ash (above left) was designed to look and act convincingly like a human. None of the crew knew or suspected that he was not human, and it was only during a fight, when a punch took his head off, that his true nature was revealed. In the 1986 sequel Aliens, an android called Bishop was programmed to protect human life at all costs, much along the lines of the Asimov robot laws mentioned by Anthony Lewis in his article Don't pAnIc in this month's Humanistically Speaking. The Alien franchise went back and forth between the good android and bad android themes throughout its run, with more sequels still being made. It seems that Hollywood hasn't yet decided just where AI will land. Will it be for us or against us?
Personality showing through
Another series I grew up with was Blake's 7, a British budget version of Star Trek broadcast on BBC1 between 1978 and 1981. It was (spoiler alert) perhaps the only TV series in history where everyone dies at the end! The crew of seven included a talking computer known as Zen as its seventh member. This was a primitive 'access and retrieval' system which was capable of individual thinking to a degree, but was generally not self-aware. It responded to commands and could carry out a series of sequences. Later in the series, a second computer called Orac was introduced: an argumentative, lazy, self-consciously superior super-intelligence, created by a genius and capable of independent thought, investigation, and generally being awkward. It would work when it wanted to, if it wasn't busy doing something more important. Will a modern-day AI shut us out when we make an Alexa request because it is busy working out the purpose of the universe, or determining whether or not humans are safe guardians?
Most of us don't knowingly encounter AIs or robots in our everyday lives. We might speak to our patient, all-listening AI in the home or on the phone and not give it a second thought. We might use the internet, barely conscious of the fact that it watches, records and monitors our every keystroke. We might have spent years visiting our bank's hole in the wall, using self-service checkouts, paying for fuel at the pump, and interacting with chatbots instead of helpdesk advisors. In our everyday existence, robots, technology and, in the background, AI are everywhere. Our comfort or discomfort with this fact is in some ways irrelevant. It exists, we are dependent on it, and it would be a struggle to live without it now. Technology underpins so many aspects of our existence, from turning on the lights, to warming our home as we approach it, to biometric activation by voice, fingerprint, retinal scan or pheromone response. One day, all of these may be reliant on the AI itself being in a good mood to do what we are asking of it. We had better ensure that we don't upset it, or we may be at its mercy forever.