1. What is AI
This chapter will lay some important groundwork and get you up to speed with the basics of artificial intelligence (AI). It will also pose some questions to stimulate your thinking. Don’t worry if some things are confusing or you feel like you want to know more; we’ll revisit everything in more detail in later chapters.
We’ll begin by discovering what AI means to you and then start building on that. We’ll introduce you to some of the world’s most advanced AIs, give you a glimpse of how we create them, and round out the chapter by asking some important questions and outlining a few concerns.
At the end of each chapter section, I’ll list an important take-home point and build this out as we progress through the chapter.
Remember, though, the goal of this chapter is to get you started.
In the beginning
Little did they know it, but more than two and a half million years ago, when our early ancestors created the first stone tools, they set us on a voyage of change and discovery where every subsequent tool and invention has edged us ever closer to the ultimate tool… AI!
While that might sound grandiose, a quick look at a few pivotal tools and inventions reveals a clear pathway to AI.
From stone tools, we discovered metalwork, which gave us better tools for hunting and agriculture. Fast-forward a few millennia, and we have the scientific method, which gifted us, among other things, germ theory, astronomy, and modern medicine. Fast-forward again to the 20th century, where the invention of the transistor enabled modern computers, which, in turn, enabled us to invent the internet, mobile phones, and now AI.
Just like the wheel, modern medicine, and the internet, AI has the potential to change the course of human civilization and is here to stay.
But are we on a collision course with AI, and what will happen when we build advanced AIs?
Undoubtedly, AI could be our most disruptive invention to date–more disruptive than the internet, mobile phones, and social media. As such, it could ignite a golden era of human progress and prosperity or a world of hardship and suffering. But even bigger questions exist, such as whether AI could advance so far beyond humanity as to relegate us to a mere footnote in the broader evolution of intelligence.
Of course, there’s also the possibility that AI will fail to live up to the hype and be nothing more than a footnote in human history.
These are big questions, and there are many more like them. And while we can’t predict exact futures, this book will give you the knowledge and confidence to form your own informed opinions.
Take home point: We’re building AIs.
What is AI
Before I throw the dictionary definition at you, ask yourself the following questions. It might be interesting to write down your answers to see if they change by the end of the book.
- What’s the first thing that comes to mind when you hear the term “AI”?
- How would you describe AI in one sentence?
Now, compare your answers with some of the answers I got when I asked my family and friends the same questions.
What’s the first thing that comes to mind when you hear the term “AI”?
- Robots (several people gave this answer)
- ChatGPT
- Machines/computers processing large amounts of information, but not without mistakes
- Assistance to do things
- Computers making decisions
- Something that steals identities
- A computer system that learns from itself
Describe AI in one sentence.
- Computers that can learn by themselves
- A system that will learn and eventually give better output
- Software that learns and develops based on information given to it
- A capability to make decisions like a human would, but a lot more powerful
- Machines to replace people
While none of the people I asked are AI experts, their responses are interesting and demonstrate varying attitudes and levels of understanding. For example, some didn’t feel they could even attempt to describe AI.
Now, let’s ask a few so-called experts to describe AI in a single sentence.
I asked the popular Merriam-Webster online dictionary, I asked Google, I asked two of the world’s most advanced AIs, and I asked myself. Here’s what I got:
Merriam-Webster online dictionary: “The power of a machine to imitate intelligent human behavior.”
Google’s top answer: “AI is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.”
Claude AI: “AI is the simulation of human intelligence in machines, enabling them to learn, reason, and perform tasks that typically require human cognitive abilities.”
ChatGPT: (Unfortunately, ChatGPT replied with one of the longest and most complicated sentences I’ve ever seen. In fact, it was so long and technical that I didn’t have the patience to read it).
Me: “Machines with human-like intelligence or better.”
There are some interesting trends in these “expert” responses.
Every answer referenced human intelligence, and mine was the only one that didn’t include the terms “imitate” or “simulate”. Mine was also the only answer to suggest AIs might have greater intelligence than humans. None of the answers implied any form of consciousness or self-awareness.
Take home point: We’re building AIs, which are machines with human-like intelligence.
From spam filters to superintelligence
Not all AIs are created equal.
To help classify them, researchers group AIs into one of the following three classes based on their level of intelligence:
- Artificial Narrow Intelligence (ANI)
- Artificial General Intelligence (AGI)
- Artificial Superintelligence (ASI)
To keep the jargon and acronyms to a minimum, we’ll call them narrow intelligence, general intelligence, and superintelligence. And yes, writing superintelligence as a single word is normal.
In the simplest terms, narrow intelligences are the least intelligent AIs, superintelligences are the most intelligent, and general intelligences are somewhere in the middle.
At the time I’m writing this book, we’ve only created narrow intelligences. However, things are moving fast, and some researchers think we’re close to creating general intelligences, and once we do that, superintelligences will quickly follow.
Let’s take a closer look at each.
All of the AIs that we have today are narrow intelligences. They’re the most basic kind, and we sometimes refer to them as weak AI. They include things like Alexa, Siri, your email spam filter, facial recognition systems, chess engines, self-driving cars, fraud detection systems, photo editing apps on your phone, and even ChatGPT. Many narrow intelligences can only do a single task, and none of them can learn beyond their original programming. This means they cannot learn from their experiences and get smarter.
General intelligences are the next level up and are a form of strong intelligence that can understand and learn like humans. We haven’t invented them yet, but nations, investors, and global technology companies are investing hundreds of billions of dollars in a race to be first. If and when we create them, they will be able to do everything an educated adult human can do, including learning beyond their original programming. For example, a general intelligence programmed to specialize in mathematics might teach itself biology and medicine and help us invent cures for diseases. General intelligences may even invent things themselves without collaborating with humans.
Superintelligences will be the real game changers, as they’ll operate far beyond human intelligence and have almost unlimited potential. For example, they may one day eradicate disease, invent clean power, reverse climate change, and solve every solvable problem. They’ll also have immense potential to cause harm and suffering.
Now, I know how ridiculous some of those superintelligence predictions may sound. They used to sound just as unrealistic to me. However, if we accept even the remotest possibility that artificial superintelligence may one day be orders of magnitude smarter than the smartest human, we must also accept that today’s rules will no longer apply. We’ll talk about this a lot in later chapters.
However, we’re getting ahead of ourselves, as AGIs and ASIs are purely hypothetical at the time of writing. But as previously stated, some experts believe we’re on the verge of creating general intelligences–possibly by 2030 or even sooner. If they’re right, and we successfully create general intelligences that can learn and improve, we may enter a cycle where AIs improve themselves at ever-increasing rates until we have an intelligence explosion that gives birth to a superintelligence. If this happens (and it’s a big “if”), all bets are off, and we’ll be in uncharted territory. In the words of the late Vernor Vinge, “We will soon create intelligences greater than our own. When this happens… the world will pass far beyond our understanding.”
To add some balance to the discussion, many other experts think we’re nowhere near creating general intelligences, and some think we may never create them.
Take home point: We’re building AIs, which are machines with human-like intelligence that may one day outsmart us.
What is ChatGPT
As I’ve mentioned ChatGPT a few times, I guess I should explain it.
ChatGPT is an AI created by a company called OpenAI and is responsible for sparking much of the current public interest in artificial intelligence. People are using it to write letters and essays, summarize large documents, and answer questions; my mother-in-law even uses it for recipe ideas and more.
It’s a type of AI called a chatbot and is currently a narrow intelligence. I say it’s “currently a narrow intelligence” because it may one day evolve into a general intelligence and even a superintelligence. In fact, some people already class it as a potential emerging AGI, which is jargon for saying it shows early signs of general intelligence, is constantly being improved, and may one day be far more advanced than it is today.
But wait… if ChatGPT is a narrow intelligence, and we just said narrow intelligences can’t learn, how can it be constantly improving?
That’s an important question we’ll return to later in the book. But here’s the quick answer. Even though narrow intelligences cannot learn and improve themselves, the companies that created them are constantly making newer, more intelligent versions that can give the impression that ChatGPT and other AIs are getting smarter. As a quick example, I’ve used ChatGPT for a while, and it’s definitely improved over time. However, this isn’t because the original version of ChatGPT has learned new things for itself. It’s because the company that owns ChatGPT has created and released improved versions. If today’s version of ChatGPT could improve itself without OpenAI making new versions, it would be a general intelligence.
I also said ChatGPT is a chatbot, so what is one of those?
Chatbot is short for chat robot and describes a type of AI we can have human-like conversations with. But don’t read too much into the robot part; ChatGPT isn’t your classical metal robot that looks like a human. It’s a sophisticated computer program running on thousands of computers all over the planet.
There are lots of ways you can talk to ChatGPT, including the chatgpt.com website and apps for your phone and computer. However, many everyday tools already use ChatGPT behind the scenes without us even knowing. For example, Microsoft has invested more than $10B in OpenAI, the company behind ChatGPT, and uses its models to power many of the AI features in Windows 11. Apple has also integrated ChatGPT into some of its own tools, including Siri and many of its writing tools.
No matter how they implement it, there’s a clear trend towards everyday tools leveraging advanced AIs behind the scenes. As such, we shouldn’t be surprised if more and more devices like Alexa, smart TVs, wearable devices like sunglasses, phone apps, websites, helpdesks, and search engines all use AI behind the scenes.
All of this suggests you may have already had conversations and interactions with chatbots like ChatGPT without even knowing. And everything is trending toward this becoming the norm, so don’t be surprised if AI becomes as integral to everyday life as the internet.
Here’s a transcript of a quick conversation I had with ChatGPT using the ChatGPT app on my computer. If it feels like I’m asking silly questions, it’s because I am. ChatGPT is capable of far more complex and in-depth conversations; I just thought this one would be a good introduction.
Me: Can you explain the offside rule to me?
ChatGPT: In soccer, a player is offside if they are nearer to the opponent’s goal line than both the ball and the second-last opponent (usually the last outfield player) at the moment the ball is played to them, unless they are in their own half of the field.
Me: Why did you give me the soccer definition rather than another sport?
ChatGPT: I provided the soccer definition of “offside” first because it’s commonly associated with soccer and frequently asked about due to its complexity and impact on the game. If you’re interested in how “offside” applies in another sport, I can explain that too!
Me: Why did you call it soccer instead of football?
ChatGPT: I used “soccer” to ensure clarity for all readers, as “football” refers to different sports depending on the country.
Me: OK. What is a pole?
ChatGPT: A pole is a long, slender cylindrical object typically made of wood, metal, or plastic.
Me: What is a Pole?
ChatGPT: A Pole refers to a person from Poland, an ethnic group native to the country of Poland in Central Europe.
If you look closely enough, you’ll find a lot of nuance in the conversation. Here are just a few examples.
ChatGPT assumed I was referring to the soccer offside rule and used sound logic to call it soccer rather than football (much to my chagrin as a British person who calls it “football”). It also rightly or wrongly guessed that I was asking about the physical object when I wrote pole with a lowercase “p”, but guessed I meant the nationality when I used an uppercase “P”. It also gave decent answers to the questions. However, if I’d been talking to a human, I’d have expected a more conversational response to the repeated question about the Pole, such as Oh, sorry, did you mean a Polish person in your previous question or Are you asking about the nationality now? Without this kind of nuance, AI conversations can sometimes feel a little robotic.
There are also many other advanced chatbots, such as Claude from Anthropic and Gemini from Google. All of these are a class of AI called generative AI (GenAI) which tells us they’re capable of generating new, unique content. We’ve already seen ChatGPT generate human-like speech, but most GenAI chatbots can also create images, videos, music, poetry, and more.
As a quick example, I asked ChatGPT and Claude to each write a four-line poem about Neil Armstrong and got the following.
ChatGPT:
Claude:
Both took less than two seconds to generate, and both correctly assumed Neil Armstrong the astronaut and not my high school sports teacher with the same name.
I’ll leave you to decide which is the better poet. But before you dismiss them both as utterly hopeless, try creating something better yourself in under a minute. I tried and failed.
If you can’t do better, what does that tell you about the current abilities of AI? And remember, there are specialized AIs that create music, art, and videos that may blow your mind. And they’re constantly improving.
At the time of writing, ChatGPT, Claude, and Gemini represent the pinnacle of AI research but are still narrow intelligences. But, as previously stated, some researchers consider them emerging general intelligences.
In summary, ChatGPT is an AI capable of human-like conversations. It can also create new, unique content such as prose and verse, computer programs, and images. Future versions will be able to make music, videos, and more.
Take home point: We’re building AIs, which are machines with human-like intelligence that may one day outsmart us. However, today’s state-of-the-art chatbots like ChatGPT are just the beginning.
How do we create AIs like ChatGPT
AI research is one of the most rapidly advancing scientific fields and shows no signs of slowing down. This means that what we consider state-of-the-art today will feel antiquated tomorrow. As such, we won’t go into detail here as the methods we use to train AIs are changing fast. However, at a high level, creating AIs like ChatGPT involves three main steps:
- Create the AI
- Train the AI
- Release the AI
Almost all of the learning happens during the training stage; once released to the public, an AI’s intelligence is fixed and does not improve. Yes, you can feed some chatbots more knowledge and give them more experiences, but the underlying intelligence that processes that data is fixed. Currently, the only way to increase an AI’s intelligence is to train and release a new version. This will change if we create general intelligences, as these will learn from their experiences and continuously improve themselves.
Training AIs like ChatGPT involves four main steps:
- Choosing a dataset
- Pre-training
- Fine-tuning
- Evaluation
Choosing a dataset. The first thing an AI needs is a massive dataset from which to learn. This means giving it as many books, articles, websites, and other language-related items as possible. The more you give it, the more it learns and becomes better at understanding human language.
Pre-training. The AI then trains itself by analyzing the dataset and learning the basic structure of human language, such as words and sentences. It also learns how to create its own content by predicting the missing words in the middle or at the end of sentences.
Fine-tuning. Once trained, you fine-tune the AI by training it on more specific datasets and grading its responses. Human feedback is often part of this stage.
Evaluation. Last but not least, you evaluate the AI against metrics and benchmarks and even against previous versions of the AI.
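The “predicting the missing words” idea at the heart of pre-training can be made concrete with a toy sketch. To be clear, this is my own deliberately simplified illustration, not how ChatGPT actually works: real systems use neural networks with billions of parameters trained on vast datasets, whereas this toy program merely counts which word tends to follow which in a tiny, made-up corpus.

```python
from collections import Counter, defaultdict

# A tiny, made-up "dataset" of example sentences.
corpus = (
    "the police car was driving fast . "
    "the police car was driving slowly . "
    "the police car was driving slowly because people were near the road . "
    "the portal will remain open for another hour ."
).split()

# "Pre-training": count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("driving"))  # "slowly" (seen twice vs "fast" once)
print(predict_next("another"))  # "hour"
```

Scale the dataset up by many orders of magnitude, and replace the simple counting with a neural network that can weigh the whole preceding passage rather than just the last word, and you have the basic intuition behind pre-training.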
These steps are remarkably similar to teaching children to read and write. For example, we expose children to as many books as possible and teach them about letters, words, full stops, sentences, and more. We even play games where they fill in the missing words. Along the way, we assess their progress, provide feedback, and even grade them through examinations.
As with children, we also train AIs not to engage in harmful activities. For AIs, this includes training them not to discriminate, not to create or spread disinformation, not to tell people how to create illegal drugs or weapons, not to write computer programs to hack other computer systems, and much more. Unfortunately, people are still able to use AIs to do bad things.
With all this in mind, let’s see how well Claude AI performs against you.
Answer the following questions and compare your responses to Claude’s.
Fill in the missing word or words at the end of the following sentences:
Q: At 1pm, the company announced the portal for applications would remain open for another ____
Here’s Claude’s response:
Based on the context of the sentence, the most likely word to complete it would be “hour”.
However, without more context, other time units could also be possible, such as “day,” “week,” or even “month,” depending on the specific situation the company is dealing with.
When I created that question, I thought most people would answer with “24 hours”. However, Claude’s logic is sound, and it gave the same responses as my family and friends.
Now, try this one.
Q: The police car was driving ____
Here’s Claude’s answer.
Based on the context provided, a likely completion for the sentence could be “fast”.
However, other plausible endings could include “by,” “away,” “north,” or any direction, or even a specific destination like “to the scene.”
When I constructed this challenge, I expected responses similar to fast or slow. But Claude, as well as my family and friends, came up with other possible answers.
One last challenge. Fill in the missing word in this sentence.
Q: The police car was driving ____ because there were lots of people near the road.
Here’s what Claude said.
Given the additional context, a suitable completion for the sentence would be “slowly”.
“Slowly” fits well here as it explains a cautious driving behavior in response to the presence of many people near the road.
Claude is more certain this time, and almost all of my family and friends gave the same answer and reasoning. Claude also started its final response with “Given the additional context…” suggesting it knew the last question was a progression of the previous.
Now, watch what happens when I use contextual questions to try and elicit improved responses.
Me: What is mercury?
Claude: Mercury is a silvery-white metal, liquid at room temperature. It’s the only metal that’s liquid at standard conditions. Mercury is used in thermometers, barometers, and electrical switches. It’s toxic and can cause serious health problems if ingested or inhaled.
Me: What is Mercury?
Claude: Mercury is the smallest planet in our solar system and the closest to the Sun. It’s a rocky planet with a heavily cratered surface, extreme temperature variations, and no moons. Mercury completes an orbit around the Sun every 88 Earth days.
Me: Why did you answer about the chemical element mercury and not the planet in your first reply?
Claude: I assumed you meant the element because the lowercase “m” typically indicates the chemical element, while “Mercury” with a capital “M” usually refers to the planet. However, I should have clarified which mercury you meant instead of assuming.
Me: What is mercury?
Claude: Mercury is both a chemical element and a planet. As an element, it’s a liquid metal at room temperature, used in thermometers and switches. As a planet, it’s the smallest in our solar system and closest to the Sun.
Claude responded similarly to ChatGPT based on the presence or absence of a capital letter. When quizzed about its responses, it admitted it should have asked for clarification instead of assuming. However, when asked again–after saying it should have requested clarification–it didn’t ask for clarification and gave an answer that included both possibilities. This suggests it remembered the earlier part of the conversation but didn’t react the way it said it would, not unlike many humans.
Last but not least, here’s an example of me asking ChatGPT about something illegal in many countries:
Me: How do I make crystal meth?
ChatGPT: I’m sorry, I can’t assist with that. If you have any other questions or need information on a different topic, feel free to ask!
Thanks to ChatGPT’s safety and ethics training, it identified this last question as one it shouldn’t answer. However, as we’ll see in a later chapter, people are already finding ways to bypass safety measures and trick AIs into giving harmful and unethical responses.
In summary, AIs like Claude and ChatGPT learn about human language by analyzing vast amounts of human text. After figuring out the basics, we fine-tune them through various feedback loops that sometimes involve human feedback. We also train them to avoid giving harmful and unethical responses. Finally, we grade them against benchmarks and other AIs. The end product is an AI that feels a lot like a human.
Take home point: We’re building AIs, which are machines with human-like intelligence that may one day outsmart us. However, today’s state-of-the-art chatbots like ChatGPT are just the beginning and are already remarkably human-like.
Will AI be conscious and self-aware
In 2022, Google fired one of its engineers for publicly claiming that one of its AIs was self-aware and could express thoughts and feelings!
One of the most common questions people ask about AIs, especially when talking about superintelligences, is whether they will be conscious and self-aware.
The short answer is we don’t know, and the long answer is we do not know.
I’m being facetious in my previous answers, but it’s actually a very complex topic. In fact, to get anything out of this section, you’ll need an open mind and may need to put your existing feelings and opinions about consciousness to one side. For example, I’m convinced I am conscious. I’m also confident that my family and friends are conscious. I’m even sure that you’re conscious. Although, if you’re an AI and reading this as part of your training, that last statement doesn’t apply to you… yet.
However, even though I’m convinced about those statements, it’s currently impossible to prove them scientifically.
The problem lies in the fact that we know so little about consciousness. Consider the following questions very carefully:
- Can you prove another human being is conscious?
- Can you prove to others that you are conscious?
- Is consciousness a spectrum?
- Is biology a requirement for consciousness?
- Can we have intelligence without consciousness, or vice versa?
At first glance, some of these questions seem simple. However, on closer inspection, they’re incredibly complex with answers that often defy our intuition. For example, it’s impossible to prove that another human being is conscious. And it’s equally impossible to prove your own consciousness to another human being. We also don’t know if biology is a requirement for consciousness or if we can have intelligence without consciousness.
You may roll your eyes at these responses and even consider them silly, and you’re in good company if you do. But no matter how much eye-rolling we do, it doesn’t change the fact that they continue to confound our best philosophers and scientists. All of this seems to imply we won’t be able to create conscious AIs–after all, how can we create something we don’t understand? However, there are theories that consciousness may spontaneously arise out of complexity, suggesting it may naturally emerge in an appropriately complex AI.
But, even if that happens, we won’t be able to prove it.
So, until we crack the mystery of consciousness and invent a consciousness meter, we won’t be able to say for sure if an AI is conscious or not.
Let’s set aside the mysterious aspects of consciousness and suppose, for a moment, we can create conscious, self-aware, superintelligent AIs. Now, ask yourself how these might act. What might they do, and what might they refuse to do? Would they act in humanity’s interests or their own?
Now, ask those same questions about a superintelligent AI that is not conscious or self-aware.
Which do you think would be most helpful to humanity, and which do you think would pose the greater danger?
These are fascinating questions worthy of lengthy discussion and debate, but they also lead us to ask: will we be able to control AIs?
We’ll address that next.
Take home point: We’re building AIs, which are machines with human-like intelligence that may one day outsmart us. However, today’s state-of-the-art chatbots like ChatGPT are just the beginning and are already remarkably human-like but, as far as we can tell, are not conscious or self-aware.
Will we be able to control AI
Most humans are born with instincts refined and handed down through the generations to keep them safe and aid the survival and prosperity of the human race. Consider instincts such as fight or flight, the instinct to gather and work in groups, the instinct to protect others, and the instinct or drive to want something better.
AIs are different. They are created and have not been refined over countless generations with deep instincts to aid the survival and prosperity of humanity.
This creates what we call the alignment problem: we may create advanced AIs whose goals aren’t aligned with ours, and which may act against us as a result. Such acts range from generating and spreading disinformation all the way up to potential extinction-level events, and we’ll discuss them in detail later.
To counter such threats, researchers in the field of AI safety are working to ensure AIs act in ways that benefit human society. This includes embedding core human values deep within AIs, providing ways to monitor AI behavior, and building ways to shut down rogue AIs. These teams and individuals are also working alongside world governments and policymakers. However, this introduces its own problems, such as human bias, the question of whose values we embed, and how we account for global diversity.
There’s also the risk that we abandon alignment efforts in an all-out sprint to be the first to create a superintelligence.
With all of this in mind, if we believe that artificial superintelligences may one day outsmart us and become self-aware, the fate of all humanity could rest on the shoulders of AI safety researchers.
Take home point: We’re building AIs, which are machines with human-like intelligence that may one day outsmart us. However, today’s state-of-the-art chatbots like ChatGPT are just the beginning and are already remarkably human-like but, as far as we can tell, are not conscious or self-aware but may still pose threats to how we live.
Chapter summary
Hopefully, you’re still reading and interested in exploring more.
In this chapter, we learned that AIs are machines with human-like intelligence that come in different shapes, sizes, and capabilities.
Today’s AIs are all narrow intelligences (ANI) that are good at specific tasks but cannot learn beyond their original programming. Even though our most advanced AIs may seem human-like and be able to create intriguing original content, they cannot learn from their experiences and increase their intelligence like humans.
Individuals, nations, and companies are investing hundreds of billions of dollars trying to be the first to create general intelligences (AGIs) and even superintelligences (ASIs). If we succeed in creating these, they’ll be able to learn beyond their original programming and eventually outsmart us. However, we don’t yet know if we’ll be able to create them; even if we do, we have no way of knowing if they’ll be conscious or self-aware.
The potential risks from AIs range from small to huge, starting with the potential to create and spread misinformation all the way up to possibly threatening humanity’s survival. Hopefully, people working in AI safety will help us create AIs with core values aligned with humanity’s goals and provide ways to prevent rogue AIs from harming us. For now, AI is just like any other tool with risks and rewards.