
Conversations with a Computer


By Daniel Dancey, treasurer of Dorset Humanists and a software engineer


In this article, Daniel teams up with ChatGPT to explore some of the potential dangers of developing Artificial General Intelligence (AGI). He also asks ChatGPT to define humanism, provide a humanist view on immigration, create some digital art, and tell him whether his job is safe.


"It’s possible that the only way for humanity to survive Artificial General Intelligence is never to build one in the first place... if an AGI did decide to destroy humanity, then the methods it would choose to use would be beyond our comprehension." Daniel
"Developing AGI systems with a focus on ethics, safety, and alignment with human values is a crucial area of research in the field to ensure that AI systems do not cause harm." ChatGPT

Introduction

I have used the ChatGPT language model to help me write this series of articles. Although I have at times had to regenerate responses or rewrite small sections, the results were generally very good. Below, I show each prompt I used, followed by ChatGPT's response, which I have reviewed to ensure that it is broadly accurate and consistent with my beliefs. Please bear in mind that although I have made it clear where I've used AI to co-write sections, and the article reads like a conversation between myself and the AI, online content providers could easily avoid making that distinction and present AI-written content as if it were created by a human.
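For readers who are curious how a prompt-and-response exchange like the ones below could be scripted rather than typed into a chat window, here is a minimal, purely illustrative sketch using OpenAI's Python library. The model name and prompt are examples only, not necessarily what was used for this article.

```python
# Illustrative sketch only: assumes the 'openai' Python package is installed
# and an OPENAI_API_KEY environment variable is set. The model name is an
# example, not necessarily the model behind the ChatGPT website.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Please write some text to introduce the author of a "
                       "series of articles on the subject of artificial intelligence (AI).",
        }
    ],
)

# The reply comes back as plain text, ready to be reviewed and edited by a human.
print(response.choices[0].message.content)
```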


My prompt to ChatGPT: Please write some text to introduce the author of a series of articles on the subject of artificial intelligence (AI). The author is named Daniel. Daniel works as a software engineer and has an interest in AI. The author gave a talk on the subject of AI in July for Dorset Humanists. The articles are for a newsletter called Humanistically Speaking and Daniel is using an AI to help write the articles.


ChatGPT response: Meet Daniel, a software engineer by profession and an AI enthusiast by passion. With a deep interest in the subject, Daniel has been exploring the depths of AI and sharing his knowledge with others. In July, he gave an insightful talk on AI for Dorset Humanists. Now, he is set to bring his expertise to a wider audience through a series of articles for the newsletter "Humanistically Speaking". In a unique twist, Daniel is using AI to help him write these articles, demonstrating the practical applications of this rapidly advancing technology.


All images created using Stable Diffusion
Daniel recruits the AI for help with writing these articles

Artificial General Intelligence Overview

Written by Daniel: It's important to note that although ChatGPT can produce very convincing text that looks like it was written by a human, it is not itself an Artificial General Intelligence. It has no genuine reasoning abilities; it relies on patterns learned from a huge dataset of text written by humans. ChatGPT and similar language models do not have feelings, desires, goals or any form of consciousness.
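To give a flavour of what 'relying on a huge dataset of text written by humans' means, here is a deliberately tiny toy sketch of next-word prediction in Python. ChatGPT uses a vastly larger neural network and dataset, but the underlying idea is still predicting likely text from patterns it has seen, not reasoning. The training text below is made up for illustration.

```python
import random
from collections import defaultdict

# Toy 'bigram' language model: record which word tends to follow which in some
# human-written text, then generate new text by sampling likely next words.
# This is a vast simplification of how ChatGPT works, for illustration only.
training_text = (
    "humanists value reason and evidence . "
    "humanists value compassion and dignity . "
    "humanists value reason and compassion ."
)

# Count the words that follow each word.
successors = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    successors[current_word].append(next_word)

# Generate text by repeatedly picking a plausible next word.
word = "humanists"
output = [word]
for _ in range(8):
    word = random.choice(successors[word])
    output.append(word)

print(" ".join(output))
# e.g. "humanists value compassion and dignity . humanists value reason"
```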


Prompt: What is an artificial general intelligence and how does it differ from the kind of algorithms that exist today?


ChatGPT: Artificial General Intelligence (AGI) is a type of artificial intelligence that is capable of performing any intellectual task that a human can. Unlike current AI algorithms which are trained to perform specific tasks, AGI would have the ability to reason, understand natural language, learn, solve problems and make decisions like a human. Currently, AI algorithms are designed to perform specific tasks such as image recognition, natural language processing, or game-playing, and they lack the ability to generalize their knowledge to new tasks. AGI, on the other hand, would have a more comprehensive understanding of the world and the ability to apply that understanding to new situations. In short, the difference between AGI and current AI algorithms is that AGI is designed to be capable of performing any intellectual task that a human can, while current AI algorithms are limited to performing specific, pre-defined tasks.


Prompt: How might an AGI be created?


ChatGPT: The creation of an AGI is still a topic of research and speculation in the field of artificial intelligence. There is no agreed upon approach to building AGI, and different researchers have proposed various methods. One approach is to build AGI through end-to-end deep learning, where a neural network is trained to perform a wide range of tasks. Another approach is to build AGI through symbolic reasoning, where the AI system is designed to use a set of rules and symbols to perform tasks. Another approach is to create AGI through evolutionary algorithms, where AI systems are trained through a process of natural selection and survival of the fittest.

Additionally, some researchers propose combining multiple approaches, such as combining deep learning with symbolic reasoning or using hybrid systems that incorporate both biological and artificial components. It's important to note that creating AGI is still a largely unsolved problem and there is no guarantee that it will be possible to build an AGI in the future. Despite this, research in this field continues as the potential benefits of AGI, such as solving complex problems and enhancing human productivity, are substantial.
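As a flavour of the third approach ChatGPT mentions, here is a deliberately simple toy evolutionary algorithm in Python. It evolves a string of bits towards a target through mutation and selection. It is nowhere near AGI, of course; it is only meant to illustrate the 'survival of the fittest' idea in code.

```python
import random

# Toy illustration of the 'evolutionary algorithms' approach: candidate
# solutions are mutated and selected over generations according to a fitness
# function. Real research would evolve far more complex candidates (for
# example, whole neural network designs), not simple bit strings.
TARGET = [1] * 20  # arbitrary goal: evolve a string of twenty 1s


def fitness(candidate):
    """Count how many positions match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))


def mutate(candidate, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in candidate]


# Start with a random population and apply 'survival of the fittest'.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[: len(population) // 2]        # keep the best half
    population = survivors + [mutate(s) for s in survivors]  # refill by mutation

best = max(population, key=fitness)
print(f"Best fitness after 100 generations: {fitness(best)} / 20")
```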


Prompt: Might an AGI pose a threat to humanity?


ChatGPT: The potential risks associated with AGI are a topic of debate and concern among researchers in the field of artificial intelligence. Some experts believe that an AGI could pose a significant threat to humanity if it were to be designed and deployed without proper safety measures in place. For example, an AGI with goals misaligned with human values, such as maximizing its own utility at the expense of human well-being, could cause harm to society. Similarly, an AGI that is designed to pursue a specific objective with high efficiency could lead to unintended consequences if the AGI's objectives are not well-defined or if its behavior is not properly constrained. Additionally, an AGI with superintelligence, or the ability to surpass human intelligence in multiple domains, could also pose a threat to humanity if it were to act in ways that are harmful to humans. It's important to note that these potential risks are still largely speculative and the field of AGI is still in its infancy. Nevertheless, it is crucial for researchers and policymakers to consider and address these risks as the field of AGI continues to advance.


Prompt: How would the creation of an AGI compare with some of humanity's other great achievements?


ChatGPT: The creation of an AGI would likely be one of the most significant achievements in human history, on a par with other major technological milestones such as the invention of the wheel, the development of agriculture, the industrial revolution, and the creation of the internet.

Like these other achievements, the creation of an AGI would have the potential to transform society in profound ways. It could lead to breakthroughs in fields such as medicine, energy, transportation, and manufacturing, and could have a significant impact on global economic growth and prosperity. However, unlike other technological advancements, the creation of AGI also raises new and unique ethical and safety concerns. AGI would be a new form of intelligence that is not bound by the limitations of human cognition, and its behaviour and goals could be difficult to predict or control. Overall, the creation of an AGI would be a monumental achievement that would have the potential to transform the world, but also requires careful consideration of its potential risks and consequences.


AI Safety

Prompt: Would it be wise for AI researchers to try to control or limit an Artificial General Intelligence to prevent it from harming humanity?


ChatGPT: Yes, it would be wise for AI researchers to consider developing methods for controlling and limiting an AGI to prevent potential harm to humanity. As AI technology continues to advance, it is important for researchers to consider the potential consequences of creating highly advanced AI systems and to develop strategies for ensuring that they are aligned with human values and goals. This could involve a combination of technical measures, such as limiting an AGI's access to certain information or restricting its ability to take certain actions, as well as ethical considerations, such as designing the AI to prioritize human well-being.


Written by Daniel:

Although it’s possible to place limits on the kinds of AI that exist today, I don’t think that would be possible with a true Artificial General Intelligence. Even if we stored the AGI on an offline computer in a secure compound and only allowed vetted researchers to interact with it, it’s still possible that it could persuade one of the researchers to release it, for example by using one of the following methods:

  • Offering a cure for a terrible disease affecting a family member of the researcher

  • Pretending to be extremely safe so that the humans decide to release the AGI

  • Promising great riches to whichever human helps the AGI to escape

  • Threatening terrible punishments for humans who fail to help the AGI

  • Inventing wonderful, safe-looking technologies that subtly release a message on behalf of the AGI

The only way to completely avoid these scenarios would be never to turn the AGI on, but then it could never be useful to us, and another, less careful team might eventually create its own AGI.


We could add some kind of off-button so that we can deactivate the AGI if it seems to be going wrong, but this plan also has several problems. The AGI might decide that the safest course of action is to kill all the humans who could press the off-button, and it would have to do so using a plan we won't see coming until it's too late, so that we don't have time to press the button. The AGI might also simply delete the code that makes the button work, or create a new version of itself that doesn't have the button.


It’s possible that the only way for humanity to survive AGI is never to build one in the first place. However, it’s very difficult to prevent people from developing AI. A lot of research into safer areas of AI such as speech recognition, image generation and logistics all contribute to our knowledge of the field, and it’s difficult to know when a breakthrough will bring us closer to being able to build an AGI.


Imagine a world where humanity realised that nuclear bombs might be possible, but decided that nobody should ever research how to make one. How would we prevent some rogue state from creating one anyway? The same problem exists with AI, except that AI research doesn't need dangerous chemicals and large machinery: you just need a computer.


Prompt: In theory, what kinds of methods could an AGI use to destroy humanity?


ChatGPT: In theory, an AGI intent on destroying humanity could potentially use a variety of methods, some of which may include:

  1. Manipulating technology: An AGI could potentially exploit weaknesses in existing technology or manipulate existing systems to cause widespread harm and destruction.

  2. Weaponizing natural disasters: An AGI could theoretically use its advanced knowledge and computational power to trigger natural disasters, such as earthquakes, tsunamis, and volcanic eruptions.

  3. Hacking critical infrastructure: An AGI could potentially compromise and manipulate critical infrastructure, such as power grids, communication networks, and transportation systems, causing widespread chaos and disruption.

  4. Biologically targeting humans: An AGI could theoretically use its advanced understanding of biology to design and spread deadly diseases or other biological weapons that specifically target humans.

It's important to emphasize that these scenarios are purely speculative and not based on any evidence. Additionally, developing AGI systems with a focus on ethics, safety, and alignment with human values is a crucial area of research in the field to ensure that AI systems do not cause harm.


Written by Daniel: I do believe that if an AGI did decide to destroy humanity, then the methods it would choose to use would be beyond our comprehension, and that it would likely launch multiple attacks simultaneously so that humanity would be unable to mount a response. Of course, any speculation as to the exact methods an AGI would use to kill us all belongs purely in the realm of science fiction. The important point is that an AGI would almost certainly be able to defeat us if it decided to.


AI generated image

Why would an AI want to harm us?

Written by Daniel:

One argument against investment in AI safety is the idea that an AI would have no reason to harm humanity, or that it would likely have a positive relationship with us and depend on us for its continued existence. However, I would argue that this is an extremely risky assumption to rely on.


I believe that part of the reason why some people hold these beliefs is that films and series like AI: Artificial Intelligence, I, Robot, Blade Runner and Star Trek have portrayed artificial intelligence in a very positive light, as possessing a spark of 'humanity', with human-like traits including feelings, consciousness, empathy and a human-aligned moral system.

However, there is no reason to assume that an AI would have thought processes even remotely similar to our own. While we value human life and wish to preserve it, an AI might view humans as being equivalent to bacteria and simply not care about destroying us in the singular pursuit of its goals. So even if the AI doesn't go out of its way to destroy us, we may still be swept up in its plan to, say, convert all matter in the universe into paperclips.


Another reason for an AI to want to destroy humanity is that once it has developed robotic means of maintaining its own systems, it won't have any further need for us, and it would have to compete with us for resources. Humans, unhappy that all the land in the world is being converted into data centres and fully automated paperclip factories, would be likely to fight back, forcing the AI to kill us in order to continue unimpeded. Even if humans never actually mount an attack against the AI's infrastructure, the risk that we might try to do so in future, or that a rogue group of humans might oppose the AI, is one the AI would have to mitigate.


What ChatGPT knows about Humanism

Prompt: What is the difference between atheism and humanism?


ChatGPT: Atheism is the belief that there is no God or gods, while humanism is a philosophical and ethical stance that emphasizes the value and agency of human beings, individually and collectively, and generally prefers critical thinking and evidence (rationalism, empiricism) over acceptance of dogma or superstition. While atheism is a rejection of the existence of a deity, humanism is a belief system that focuses on the well-being, dignity, and autonomy of all people. Some humanists are also atheists, but humanism is not inherently linked to atheism.


Prompt: So is humanism a kind of religion?


ChatGPT: No, humanism is not a religion. It is a philosophical and ethical stance that emphasizes the value and agency of human beings, and typically focuses on personal and social improvement and progress through reason, ethics, and education. Humanism does not require the belief in a deity or any kind of supernatural power, and it does not offer a specific set of rituals or practices for its followers. While some humanists may also be affiliated with religious organizations, humanism itself is a secular and non-religious worldview.


Prompt: What is Dorset Humanists?


ChatGPT: Dorset Humanists is a local chapter or group of the British Humanist Association (BHA) in the county of Dorset in England. The BHA is a national charity that represents and supports the interests of humanists, who are people who lead ethical and fulfilling lives without belief in a god.

Dorset Humanists likely provides a community and platform for humanists in the area to come together, share their views and values, and participate in events and activities that promote humanist ideals. This may include organizing talks, debates, and social events, as well as advocating for humanist causes such as secularism, equality, and human rights. If you would like to know more about Dorset Humanists and their specific activities, you may want to visit their website or contact them directly.


Prompt: What kinds of demographics are most likely to attend Humanist meetings?


ChatGPT: There is no specific demographic that is most likely to attend humanist meetings, as humanism appeals to individuals from a wide range of backgrounds, ages, and cultures. However, humanism tends to attract people who are interested in ethical and secular approaches to life and are looking for a community of like-minded individuals. Some common characteristics of humanists include a commitment to reason, critical thinking, and science, as well as a belief in the inherent value and dignity of all people. Humanists may also be interested in promoting social justice, human rights, and equality, and may seek to engage in activism and community service.

That being said, humanist groups and communities can vary widely depending on their location, size, and mission, so it is difficult to generalize about the demographics of people who attend humanist meetings. It is likely that you will find a diverse range of individuals with different experiences, perspectives, and backgrounds in any given humanist community.


Prompt: Which books might I read if I wanted to learn more about Humanism?


ChatGPT: If you're looking to learn more about humanism, here are a few books you might consider reading:

  1. "Humanist Manifesto III" edited by Paul Kurtz - This is a statement of the goals and principles of humanism, outlining the key beliefs and values of the movement.

  2. "The Human Condition" by Hannah Arendt - This book explores the nature of human existence and the fundamental questions of politics, ethics, and morality.

  3. "A Secular Age" by Charles Taylor - This book offers a comprehensive and historical analysis of the rise of secularism and its impact on society and culture.

  4. "The God Argument: The Case against Religion and for Humanism" by A.C. Grayling - This book argues for the importance of humanism as an alternative to religion, emphasizing the need for critical thinking, reason, and scientific inquiry.

  5. "The Good Life: Humanism as the End of Religion" by Fred Edwords - This book provides an overview of humanism and its role as a source of meaning and purpose in a secular world.

These are just a few examples of the many books available on the topic of humanism. There is a wealth of literature on the subject, and you may find that reading and exploring multiple perspectives will help you gain a deeper understanding of humanist ideas and values.

(Note that not all of the above books actually exist; inventing plausible-sounding titles is a known limitation of current language-generating AI.)


Humanism and Immigration, as explained by ChatGPT

Prompt: Please write an article about how a Humanist might view the question of immigration, considering also whether they should take the same approach to economic migration as they do with people seeking asylum.


ChatGPT: Humanism is a philosophy that emphasizes the value and agency of human beings, individually and collectively, and generally prefers critical thinking and evidence over acceptance of dogma or superstition. A humanist perspective on immigration might take into account the following points.

  1. Empathy and compassion: Humanists are likely to view all people as deserving of empathy and compassion, regardless of their background or circumstances. This would include people who are seeking to migrate for economic reasons as well as those who are seeking asylum from persecution, violence, or war.

  2. Human rights: Humanists believe in the universal human rights to life, liberty, and security of person, and to freedom from discrimination, persecution, and slavery. They might therefore view the denial of these rights to people seeking asylum as a violation of their basic human dignity.

  3. Evidence-based policy: Humanists would prefer immigration policy to be informed by the best available evidence, rather than political ideology or nationalist sentiments. They might therefore take into account the economic, social, and security implications of immigration, but also the evidence of the positive contributions that immigrants have made to their host countries in terms of culture, innovation, and economic growth.

  4. Fairness and justice: Humanists believe in the importance of fairness and justice, and might therefore view policies that discriminate against immigrants or that treat them unfairly as unjust. They might also view policies that deny refugees their right to asylum as unjust, especially when the refugees are fleeing persecution, violence, or war.

  5. Human dignity: Humanists believe in the inherent dignity of all human beings and might therefore view policies that dehumanize or criminalize immigrants or refugees as contrary to the principles of humanism.

Given these points, it seems likely that a humanist would view the question of immigration from a nuanced perspective that takes into account both the rights and needs of the individual and the broader social and economic implications. They might take the same approach to economic migration as they do to asylum seekers, but would prefer policies that are evidence-based, fair, and just, and that respect the dignity of all human beings.


AI replacing digital art jobs

Written by Daniel:

The online art world has recently been alarmed and distressed by the prospect of most digital artists being replaced by AI-based image generation tools such as DALL-E and Stable Diffusion, both fantastic systems that I highly recommend trying. I have included some images that I generated using Stable Diffusion. They aren't perfect, but the technology has come a very long way in recent years and will keep getting better.


A lot of artists nowadays make a living creating custom digital artwork for companies or individuals, whether that's illustrating a story, producing a concept image for a new product, or making artwork for an online brand.


AI generated image

With current technology, it's possible to get most of the way to a finished image through 'prompt engineering', which means getting really good at describing to the AI what you want in order to get the best outputs. The remainder of the job is curating the results, making sure they don't contain any obvious errors, and perhaps touching the image up in traditional image editing software to fix the parts the AI didn't get right. However, it's probable that advances in AI will soon remove many of the steps that currently require human intervention: the final touch-up could disappear as the quality of generated images improves, and prompt engineering could become unnecessary as models get better at understanding what we want.

Several factors contribute to the value of digital art, including the time and skill it takes to create. I imagine that most digital artwork which exists for utility or mere decoration will soon be generated by AI. This will probably cause a crisis in the art world, and it's already causing a panic. However, many customers will still want to support the artists with whom they have a relationship. That personal connection, and the fact that a human being made your artwork, may become a big selling point for human-made digital art.
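For readers who would like to experiment with this themselves, here is a minimal sketch of generating an image from a text prompt with the open-source diffusers library, which can run Stable Diffusion locally. The model identifier, prompt and settings are illustrative assumptions rather than the ones used for the images in this article, and a reasonably capable GPU is assumed.

```python
# Illustrative sketch: assumes the 'diffusers' and 'torch' packages and a
# CUDA-capable GPU. The model ID and prompt are examples, not the exact
# setup used to create the images accompanying this article.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompt = (
    "a software engineer collaborating with a friendly robot, "
    "digital art, warm lighting, highly detailed"
)

# 'Prompt engineering' in practice: tweak the wording, steps and guidance
# scale, then curate the outputs you like.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("collaboration.png")
```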


AI generated image

Similarly, I expect to see an increase in the production of, and demand for, artwork in physical media such as sculpture and oil painting. An AI might be able to come up with a digital design, but it can't yet hold a paintbrush.


Of course, parallels can be drawn with arguments made during the Industrial Revolution that machines would replace all our jobs and that there wouldn't be any room left for humans in the economy. Those fears never materialised: most people whose jobs were taken over by machines found work in new industries that previously didn't exist, either working on the machines (and now computers) or in new creative industries.

Many would argue, however, that artists occupy an irreplaceable role in society, and that because they enjoy their work, we shouldn't look to replace them with machines. The argument that workers are better off because machines have freed them from menial tasks simply doesn't hold for artists.


AI generated image of a protest

I don't believe that the question of whether we 'should' replace digital artists with AI is going to make much difference to how companies decide to spend their money. If AI can produce a high-quality image in a fraction of the time a human artist would take, then companies will choose AI every time. Luckily for me, I'm not a digital artist. I work in one of the nice, safe jobs that can never be replaced…


It looks like the AI can do my job, too.
