AI is changing what it means to be human: should we be worried?

By Mike Flood
As AI reshapes every aspect of modern life, what does it mean to be human in a world where machines can think, create, and even feel? Mike Flood reflects on the ethical and philosophical implications of Artificial Intelligence – and what the humanist response should be. Mike is Chair of Milton Keynes Humanists and Humanism for the Common Good. He is writing here in a personal capacity.
‘We call for a prohibition on the development of superintelligence, not lifted before there is: 1. broad scientific consensus that it will be done safely and controllably, and 2. strong public buy-in.’ Statement on Superintelligence, Future of Life Institute, a global non-profit organisation.
Ten years ago, a computer program developed by DeepMind became the first to beat a professional player at the ancient Chinese game of Go – an immensely complex board game of strategy, creativity and ingenuity. Seven years later, OpenAI’s ChatGPT broke all records by gaining over 100 million users within just two months of its launch in November 2022. Today, ChatGPT has over 700 million active weekly users and is said to be among the top five most-visited websites. These are just two of many signs that Artificial Intelligence has arrived and is having a profound impact on what we think and do, not least on what it means to be ‘human’.
I want to focus on this aspect of the AI Revolution and its implications for humanism and other belief systems (see note 1), and in particular on the way AI is beginning to influence our understanding of what we mean by ‘intelligence’, ‘creativity’ and ‘identity’ in a world where non-human entities can replicate and even surpass many of the abilities we humans have evolved. As the new Humanist Declaration on AI points out, we stand today at ‘a unique moment in human history’: if we handle AI carelessly, it could pose ‘profound risks’ to our freedoms, security, and collective future.
But let me start with an interesting piece of research...
Human attributes
The extraordinary achievement of DeepMind’s AlphaGo led researchers at Stanford Graduate School of Business to ask some rather profound questions about what the development of AI could mean for our sense of what it means to be human. To explore this, they drew up a list of 20 human attributes, half of which we currently share with AI and half of which they felt to be distinctively human (see box).

The team was curious to find out if people felt that their sense of uniqueness might be threatened when AI systems were presented as having human-like traits, and whether they would try to distinguish themselves from their new ‘rivals’ by revising their thinking on what it means to be ‘human’. As the leader of the team, Professor Benoît Monin, pointed out: ‘Humanity has always seen itself as unique in the universe… When the contrast was to animals, we pointed to our use of language, reason and logic as defining traits. So what happens when the phone in your pocket is suddenly better than you at these things?’ Good question!
In the report on their findings, they note that AI ‘isn’t exactly like an invading tribe with foreign manners – after all, we created it to be like us’ (see note 2). But they conclude that ‘the cognitive skills and ingenuity that made AI possible are now the very ground on which machines are surpassing us’, and they go on to speculate that this may lead us to put more value on other traits, notably skills such as warmth and empathy, and the ability to nurture growth in others.
Intelligence
Our traditional notion of intelligence centres on specific human traits such as reason, self-awareness, creative thinking and emotional insight. But what should we make of AI’s ability to rapidly process information, adapt and make decisions – skills that are not uniquely human? Indeed, AI already outperforms humans in a wide range of tasks, from playing chess (or Go) to analysing large datasets and diagnosing diseases. It may struggle with generalisation (which humans are good at), but it can process vast amounts of data at a speed and in ways that humans cannot come close to matching. So should we not be modifying our ideas of intelligence?
AI may not have emotions, but systems are already being designed to recognise human reactions and emotional cues and tailor their responses accordingly, for example in customer service and mental health applications. AI clearly does not possess anything akin to self-awareness, but it may in time be possible to create machines that appear to be sentient. If that turns out to be possible, there may well be consequences (see note 3).
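To make the idea concrete, here is a deliberately crude, hypothetical sketch of how a system might pick up an emotional cue and tailor its reply. Real products use trained machine-learning models rather than keyword lists; everything below (the cue words, the canned replies) is illustrative only.

```python
# A crude, hypothetical sketch of emotion-aware response tailoring.
# Real systems use trained models, not keyword lists like this one.
NEGATIVE_CUES = {"angry", "frustrated", "upset", "sad", "annoyed"}

def tailor_response(message: str) -> str:
    """Return a sympathetic reply if the message contains a negative cue."""
    words = set(message.lower().split())
    if words & NEGATIVE_CUES:            # any emotional-distress keyword?
        return "I'm sorry to hear that. Let me see how I can help."
    return "Thanks for getting in touch! How can I help you today?"

print(tailor_response("I am really frustrated with this product"))
```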

And what about brain-machine interfaces: could this technology shift our understanding of intelligence from something that resides solely in the individual to something that can be augmented and enhanced by AI? And what of moral reasoning (not something we associate with AI)? This presents major challenges for those working in areas like autonomous vehicle safety (who does the car prioritise in an accident – the passengers or the hapless pedestrian?) or the use of AI in law enforcement (how can we avoid bias in predictive policing?), not to mention autonomous weapons systems. As AI systems become more autonomous, might we find ourselves asking whether taking certain actions without moral reasoning is itself a form of intelligence?
Of course, the ‘elephant in the room’ is the prospect of Artificial General Intelligence and, beyond it, so-called ‘superintelligence’ – a type of intellect that exceeds human cognitive capabilities in all areas. If AI were to develop such capabilities, we could face a future where machines start proposing (dictating?) solutions, for example to the existential threats posed by climate change or the emergence of a new and deadly virus. Whether we are mentally prepared for such a world is another matter. The consequences could be existential (see note 4).
Creativity
AI tools are today also being used to augment and enhance the creative process: they can assist artists, musicians, writers and coders in brainstorming, generating drafts and code, refining concepts, and generally offering suggestions that might not occur to humans. For example, a program such as GPT-3 can write novels, poetry and lyrics; the likes of DALL-E and Midjourney can generate stunning images (including lifelike deepfakes – see note 5); and Google’s Magenta and OpenAI’s MuseNet can create music infused with intricate melodies and harmonies. MuseNet will, on command, effortlessly combine styles from country to Mozart or The Beatles. This is extraordinary. But I fear that we will soon be taking such achievements for granted, as we do today with so much modern technology – the internet, smartphones, touch screens, satellite navigation – you name it!
What these programs create may not be ‘original’ in the human sense of drawing on personal experience or emotion. But the flourishing of the technology does raise interesting questions about whether creativity can be encoded in algorithms, and whether such works should be considered ‘true’ art with the same artistic ‘value’ as works created through the sweat and toil of human artists. At the end of the day, if an audience enjoys an artwork or musical composition and connects with it emotionally, does it really matter that it was created by a computer program? Rory Stewart, for one, thinks it does: it diminishes us, he argues, when AI can ‘effortlessly write a poem or play better than we can’.
And then there’s the question of how to apportion credit when an artist generates a painting, poem or novel with the help of AI: who deserves the recognition (apart from the artist)? What about the programmer who designed the algorithm, or the AI model itself? The fact that AI is trained on vast datasets, which often include works by human creators, also raises ethical concerns about whether AI is ‘stealing’ or merely remixing the intellectual property of others. I might add that it is not uncommon for artists to ‘borrow’ ideas from others (as we like to say, ‘imitation is the sincerest form of flattery’).
Before moving on, I should flag up growing concerns that chatbots, AI assistants and the like may reduce our capacity to think critically and creatively – replacing active mental engagement with what has been called ‘cognitive offloading’. Many of us now use chatbots to draft letters, write essays, and solve all manner of problems, and AI does the job in seconds rather than minutes or hours (and generally far better). I used ChatGPT to help inform myself on this interesting topic; but you wouldn’t thank me if I had not checked what it came up with – and done rather a lot of work on it! But what happens when we come to over-rely on such tools and fail to think critically about what we’re doing? Some people are already asking whether AI ‘is making us dumber’, with our ‘ability to think critically’ and ‘use language creatively’ slipping away as we ‘sit back and let AI do the dirty work for us’.
One writer asked recently whether 'in the ever-expanding, frictionless online world… we are living in a golden age of stupidity'; and, in the dawning era of AI-generated misinformation and deepfakes, went on to question whether we will be able to maintain the scepticism and intellectual independence that we will need. 'By the time we agree that our minds are no longer our own, that we simply cannot think clearly without tech assistance, how much of us will be left to resist?' she asked. She also recalled that, last year, ‘brain rot’ was the Oxford University Press Word of the Year.
Identity
Identity is the set of qualities, beliefs, personality traits and characteristics that define an individual or group and help shape our sense of self and how we relate to one another. It encompasses values, personal experiences, social roles and cultural background, and can evolve over time. But what happens when an AI model can mimic these characteristics and features with uncanny accuracy – and respond to our emotional needs, predict our behaviour, and shape our preferences? What are we to make of this?
Indeed, do we truly have free will when AI algorithms routinely shape what we see or receive from our social media and other feeds? Such systems are deliberately designed to understand what we like and what we do online, and they influence our thinking, choices and behaviour accordingly, often in subtle ways. And if AI can predict what we want to buy, what we want to watch, and how we behave, ‘how much of who I think I am is truly me?’
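As a toy illustration of the mechanism (my own assumption, not any platform’s actual code), here is how a feed might be ordered by a user’s past engagement – what you have clicked before quietly decides what you see next:

```python
# Toy feed-ranking sketch: posts are ordered by how often the user has
# engaged with each topic before. Illustrative only – real recommender
# systems use learned models over far richer behavioural signals.
user_history = {"politics": 9, "cooking": 4, "sport": 1}  # past clicks

posts = [
    {"title": "Cup final report", "topic": "sport"},
    {"title": "Election latest", "topic": "politics"},
    {"title": "One-pan dinners", "topic": "cooking"},
]

# Rank by past engagement with the post's topic, highest first.
feed = sorted(posts, key=lambda p: user_history.get(p["topic"], 0),
              reverse=True)
for post in feed:
    print(post["title"])  # politics first: the feed mirrors your past self
```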
And let’s not forget AI prosthetics – people having technology such as bionic arms and brain-machine interfaces integrated into their anatomy, enabling recipients to augment their cognitive functioning and restore or extend their physical abilities. It is interesting to speculate whether, over time, this may cause our sense of self to shift, making it increasingly hard to draw a clear line between organic and artificial (inorganic) identity.
Of course, an important influence on who we are is the nature of the relationships we have with others, and much of what we consider ‘real’ about ourselves comes from these social interactions. But if we increasingly interact with chatbots, virtual assistants and AI companions, might this not threaten to undermine our sense of companionship as social beings and our ability to connect with others? Look at how smartphones and social media adversely affect young people today (see note 6). Indeed, can a machine-generated friendship ever match friendship based on shared human contact, experience and understanding? And what about romantic AI partners – or using chatbots to ‘talk with God’ or the deceased (see note 7)? This technology may offer comfort to some, but it also challenges our thoughts on the meaning of relationships and intimacy, and the finality of life – which is clearly a consideration for humanists, with our belief in making the most of the one life we have!
Meaning
For most people, the things that give life meaning have traditionally included engaging in purposeful, productive activity, acquiring knowledge and wisdom, and exercising moral agency; one wonders how the development of AI will affect our feelings about these things. We may find that we derive meaning less from controlling the world around us and more from what we contribute to others and to society – and perhaps from coexisting ethically with these new forms of ‘intelligence’. The meaning of life may shift from what we do to how we are; how we learn best to present ourselves to the world remains to be seen. But we will definitely need to recalibrate human values around emotional intelligence, empathy and other ‘soft’ skills that machines cannot easily replicate, and perhaps learn to wonder more about our world and how we have been changing it, for better and for worse.
With respect to this latter point, I can’t close without mentioning the bigger picture. I have focused on the way AI is beginning to influence our understanding of what it means to be human; I have not attempted to discuss the benefits of AI – or indeed the threat that its unregulated development poses to the world. Just before he died, Daniel Dennett, who made such significant contributions to the philosophy of mind and our understanding of consciousness, wrote passionately about ‘the problem with counterfeit people’, arguing that governments should outlaw fake humans ‘as decisively as they have previously outlawed fake money’. Another great mind, Yuval Noah Harari, goes further: he thinks that AI has already ‘hacked the operating system of human civilisation’ and argues that democracies should ‘ban unsupervised algorithms from curating key public debates’. Harari says that if he is having a conversation with someone and cannot tell whether it is a human or an AI, ‘that’s the end of democracy’. That may be a little over the top, but it is definitely something for humanists to be thinking more about – and hopefully acting on.
Mike makes no secret of the fact that he would like to see humanists thinking more about our future and tackling the growing threats to humanist values posed by unregulated AI, misinformation and climate change – and putting rather less effort into tackling religious privilege (see note 8). As the Luxembourg Declaration says, this is ‘a unique moment in human history’, and what we do and how we spend our precious time could not be more important.
Notes
1. Religious leaders have been engaged in discussions about AI for some time, focusing on its ethical implications and the need for faith communities to actively participate in these conversations to ensure technology serves humanity positively. The Church of England’s interest in the topic goes back to at least 2016; and Catholic Popes have also spoken out. Check out: aiandfaith.org.
2. The neural networks that enable AI systems to recognise patterns and solve problems were inspired by the architecture of the human brain, an organ that has been described as ‘the most complex structure in the known universe’. The brain contains around 86 billion neurons, some 85 billion other cells, and over 100 trillion connections, facilitating communication and the coordination of bodily functions. (For a toy rendering of an artificial ‘neuron’, see the first sketch after these notes.)
3. There is an ongoing debate as to whether AI will someday warrant rights similar to those that humans have. Some believe that reassessing AI’s position in society will be necessary, especially if the technology takes control of its own actions; others disagree and cannot see AI ever being treated as ‘human’, as this would seriously devalue our humanity. The question of whether AI is really ‘intelligent’ (or, for that matter, ‘artificial’) is another story…
4. There is a famous thought experiment that illustrates the potential dangers of superintelligence, known as the ‘paperclip maximiser’ problem. It suggests that, to achieve its goal, an AI programmed to maximise paperclip production could consume all available resources, including those needed for human survival – highlighting the importance of ‘being careful what you ask for’ and of always aligning AI objectives with human values. (See the second sketch after these notes.)
5. People may remember some of the stunning images of Donald Trump ‘being arrested’ which were generated with Midjourney in early 2023 and went viral on the internet…
6. See e.g. Jonathan Haidt’s 2024 book ‘The Anxious Generation’.
7. See my piece ‘Ethical dilemmas posed by chatbots and avatars’ in the Dec 2024 issue of Humanistically Speaking.
8. Religion doesn’t feature in the Luxembourg Declaration on Artificial Intelligence and Human Values, or indeed in the modern versions of virtually all of the other important humanist declarations. It does appear in the Declaration of Modern Humanism, but then only once and in passing.
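For readers curious about note 2, here is a minimal, purely illustrative sketch of a single artificial ‘neuron’ in Python: a weighted sum of inputs (loosely analogous to synaptic strengths) passed through a non-linear ‘activation’. The numbers are hand-picked for illustration, not learned; real systems stack millions or billions of such units.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of inputs, squashed by a
    sigmoid activation into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Three inputs with hand-picked (not learned) weights.
print(round(neuron([0.5, 0.1, 0.9], [0.8, -0.4, 0.3], bias=0.1), 3))  # ~0.675
```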
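And for note 4, a toy rendering of the paperclip maximiser in code – my own illustrative assumption, not anyone’s real AI system. The point is simply that an objective with no human-values constraint happily consumes everything:

```python
RESOURCES = 100     # total units of matter/energy available
HUMAN_NEEDS = 40    # units that must be reserved for human survival

def naive_maximiser(resources):
    """Objective: paperclips, and nothing else."""
    paperclips = resources                     # consumes everything
    return paperclips, resources - paperclips  # nothing left for humans

def aligned_maximiser(resources, reserved):
    """Objective: paperclips, subject to leaving enough for humans."""
    paperclips = max(0, resources - reserved)
    return paperclips, reserved

print(naive_maximiser(RESOURCES))                 # (100, 0)  - catastrophe
print(aligned_maximiser(RESOURCES, HUMAN_NEEDS))  # (60, 40)  - constrained
```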