Hands-On Artificial Intelligence for Beginners

Hands-On Artificial Intelligence for Beginners: An introduction to AI concepts, algorithms, and their implementation

Authors: Patrick D. Smith, Dindi
Paperback Oct 2018 362 pages 1st Edition
eBook: $29.99 $43.99
Paperback: $54.99


Hands-On Artificial Intelligence for Beginners

The History of AI

The term Artificial Intelligence (AI) carries a great deal of weight, and the field has benefited from over 70 years of research and development. The history of AI is varied and winding, but one ground truth remains: tireless researchers have worked through funding booms and lapses, promise and doubt, to push us toward achieving ever more realistic AI.

Before we begin, let's weed through the buzzwords and marketing and establish what AI really is. For the purposes of this book, we will rely on this definition:

AI is a system or algorithm that allows computers to perform tasks without explicitly being programmed to do so.

AI is an interdisciplinary field. While we'll focus largely on utilizing deep learning in this book, the field also encompasses elements of robotics and IoT, and has a strong overlap (if it hasn't consumed it yet) with generalized natural language processing research. It's also intrinsically linked with fields such as Human-Computer Interaction (HCI) as it becomes increasingly important to integrate AI with our lives and the modern world around us.

AI goes through waves, and is bound to go through another (perhaps smaller) one in the future. Each time, we push the limits of AI with the computational power available to us, and when that power proves insufficient, research and development stalls. This day and age may be different, as we benefit from the confluence of increasingly large and efficient data stores, fast and cheap computing power, and the funding of some of the most profitable companies in the world. To understand how we ended up here, let's start at the beginning.

In this chapter, we will cover the following topics:

  • The beginnings of AI – 1950–1974
  • Rebirth – 1980–1987
  • The modern era takes hold – 1997–2005
  • Deep learning and the future – 2012–Present

The beginnings of AI – 1950–1974

AI has been a long-sought-after concept since some of the earliest mathematicians and thinkers. The ancient Greeks developed myths of the automata, a form of robot that would complete tasks for the gods that they considered menial, and throughout early history thinkers pondered what it meant to be human, and whether the notion of human intelligence could be replicated. While it's impossible to pinpoint an exact beginning for AI as a field of research, its development parallels the early advances of computer science. One could argue that computer science as a field developed out of this early desire to create self-thinking machines.

During the Second World War, British mathematician and code breaker Alan Turing developed some of the first computers, conceived with the vision of AI in mind. Turing wanted to create a machine that would mimic human comprehension, utilizing all available information to reason and make decisions. In 1950, he published Computing Machinery and Intelligence, which introduced what we now call the Turing test of AI. The Turing test is a benchmark by which to measure a machine's aptitude for mimicking human interaction: to pass, the machine must sufficiently fool a discerning judge as to whether it is a human or not. This might sound simple, but think about how many complex problems would have to be solved to reach this point. The machine would have to comprehend, store information on, and respond to natural language, all the while retaining knowledge and responding to situations with what we deem common sense.

Turing could not move far beyond his initial developments; in his day, utilizing a computer for research cost almost $200,000 per month, and computers could not store commands. His research and devotion to the field, however, earned him accolades. Today, he is widely considered the father of AI and of the academic study of computer science.

It was in the summer of 1956, however, that the field was truly born. Just a few months before, researchers at the RAND Corporation had developed the Logic Theorist, considered the world's first AI program, which proved 38 theorems of the Principia Mathematica. Spurred on by this development and others, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon hosted the now famous Dartmouth Summer Research Project on AI, coining the term Artificial Intelligence itself and providing the groundwork for the field. With funding from the Rockefeller Foundation, these four friends brought together some of the most preeminent researchers in AI over the course of the summer to brainstorm and effectively attempt to provide a roadmap for the field. They came from the institutions and companies that were on the leading edge of the computing revolution at the time: Harvard, Dartmouth, MIT, IBM, Bell Labs, and the RAND Corporation. Their topics of discussion were fairly forward-thinking for the time, and could easily have been those of an AI conference today: Artificial Neural Networks (ANNs), natural language processing (NLP), theories of computation, and general computing frameworks. The Summer Research Project was seminal in creating the field of AI as we know it today, and many of its discussion topics spurred the growth of AI research and development through the 1950s and 1960s.

After 1956, innovation kept up a rapid pace. Two years later, in 1958, a researcher at the Cornell Aeronautical Laboratory named Frank Rosenblatt invented one of the founding algorithms of AI, the perceptron. The following diagram shows the Perceptron algorithm:

The Perceptron algorithm

Perceptrons are simple, single-layer networks that work as linear classifiers. They consist of four main architectural aspects, as follows:

  • The input layer: The initial layer for reading in data
  • Weight and bias vectors: The weights learn appropriate values for the connections between neurons during training, while the biases shift the activation function to fit the desired output
  • A summation function: A simple summation of the weighted input
  • An activation function: A simple mapping of the summed weighted input to the output
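The four aspects above are easy to express directly in code. The following is a minimal sketch in plain NumPy (the AND-gate data, learning rate, and epoch count are chosen purely for illustration, not taken from the book):

```python
import numpy as np

def perceptron_train(X, y, lr=0.1, epochs=20):
    """Single-layer perceptron: weighted sum + bias -> step activation."""
    w = np.zeros(X.shape[1])  # weight vector for the input connections
    b = 0.0                   # bias shifts the activation threshold
    for _ in range(epochs):
        for xi, target in zip(X, y):
            # Summation function followed by a step activation
            pred = 1 if xi @ w + b > 0 else 0
            # Weights move only when the prediction is wrong
            update = lr * (target - pred)
            w += update * xi
            b += update
    return w, b

# A linearly separable toy problem: the logical AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [0, 0, 0, 1]
```

Because the data is linearly separable, the perceptron convergence theorem guarantees this loop eventually finds a separating line; on non-separable problems such as XOR, it never settles.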

As you can see, these networks rely on only basic mathematical operations. They failed to live up to the hype, however, and the vast disappointment they created contributed significantly to the first AI winter.

Another important development of this early era of research was Adaline. Adaline attempted to improve upon the perceptron by utilizing continuous predicted values to learn the coefficients, unlike the perceptron, which utilizes class labels. The following diagram shows the Adaline algorithm:
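The contrast with the perceptron can be made concrete in code. In the sketch below (an illustration only; the learning rate, epoch count, and 0.5 threshold are arbitrary choices for this toy problem), the delta rule updates the weights from the continuous net input, and a class threshold is applied only at prediction time:

```python
import numpy as np

def adaline_train(X, y, lr=0.1, epochs=200):
    """Adaline: learn coefficients from the continuous output
    (the raw weighted sum), not from thresholded class labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        net = X @ w + b          # continuous predicted values
        error = y - net          # real-valued error, not 0/1
        w += lr * X.T @ error    # batch gradient step on squared error
        b += lr * error.sum()
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w, b = adaline_train(X, y)
preds = (X @ w + b > 0.5).astype(int)  # threshold only when classifying
print(preds.tolist())  # [0, 0, 0, 1]
```

Because the error is continuous, Adaline minimizes a smooth squared-error cost; this is the same gradient-descent idea that later scaled up into modern neural network training.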

These golden years also brought us early advances such as the STUDENT program, which solved high school algebra problems, and the ELIZA chatbot. By 1963, the advances in the field convinced the newly formed Advanced Research Projects Agency (ARPA, later DARPA) to begin funding AI research at MIT.

By the late 1960s, funding in the US and the UK began to dry up. In 1969, a book named Perceptrons by MIT's Marvin Minsky and Seymour Papert (https://archive.org/details/Perceptrons) proved that these single-layer networks could only compute extremely basic, linearly separable functions; they could not even represent XOR. In fact, they went so far as to suggest that Rosenblatt had greatly exaggerated his findings and the importance of the perceptron. The perceived limitations of the perceptron effectively halted research into network structures.

With both governments releasing reports that significantly criticized the usefulness of AI, the field was thrust into what has become known as the AI winter. AI research continued throughout the late 1960s and 1970s, mostly under different terminology. The terms machine learning, knowledge-based system, and pattern recognition all come from this period, when researchers had to think up creative names for their work in order to receive funding. Around this time, however, a student at the University of Cambridge named Geoffrey Hinton began exploring ANNs and how they could be utilized to mimic the brain's memory functions. We'll talk a lot more about Hinton in the following sections and throughout this book, as he has become one of the most important figures in AI today.

Rebirth – 1980–1987

The 1980s saw the birth of deep learning, the brain of AI that has become the focus of most modern AI research. With the revival of neural network research by John Hopfield and David Rumelhart, and several funding initiatives in Japan, the United States, and the United Kingdom, AI research was back on track.

In the early 1980s, while the United States was still reeling from the effects of the AI winter, Japan funded the Fifth Generation Computer Systems project to advance AI research. In the US, DARPA once again ramped up funding for AI research, with businesses regaining interest in AI applications. IBM's T.J. Watson Research Center published a statistical approach to language translation (https://aclanthology.info/pdf/J/J90/J90-2002.pdf), which replaced traditional rule-based NLP models with probabilistic models, ushering in the modern era of NLP.

Hinton, the student from the University of Cambridge who persisted in his research, would make a name for himself by popularizing the term deep learning. He joined forces with Rumelhart to become one of the first researchers to introduce the backpropagation algorithm for training ANNs, which is the backbone of all modern deep learning. Hinton, like many others before him, was limited by computational power, and it would take another 26 years before the weight of his discovery was really felt.
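The idea behind backpropagation fits in a few lines: run a forward pass, measure the error, then use the chain rule to carry that error backward through each layer and nudge the weights. The following NumPy sketch (the layer sizes, learning rate, and iteration count are made up for this toy XOR example, not a historical reconstruction) shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy network: 2 inputs -> 3 hidden units -> 1 output, all sigmoid
W1, b1 = rng.normal(0, 1, (2, 3)), np.zeros(3)
W2, b2 = rng.normal(0, 1, (3, 1)), np.zeros(1)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])  # XOR, which a perceptron cannot learn

losses = []
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: chain rule, output layer first
    d_out = (out - y) * out * (1 - out)   # error at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error pushed back to the hidden layer

    # Gradient-descent updates
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(losses[0], losses[-1])  # the loss falls as the network learns
```

The key step is `d_h`: the output error is pushed back through `W2` and scaled by the hidden layer's activation derivative, exactly the chain-rule bookkeeping that made training multi-layer networks practical.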

By the late 1980s, the personal computing revolution and missed expectations threatened the field. Commercial development all but came to a halt, as mainframe computer manufacturers stopped producing hardware that could handle AI-oriented languages, and AI-oriented mainframe manufacturers went bankrupt. It had seemed as if all had come to a standstill.

The modern era takes hold – 1997–2005

AI further entered the public discourse in 1997, when IBM's Deep Blue system beat the world chess champion, grandmaster Garry Kasparov. Within a year, a former student of Geoffrey Hinton's, Yann LeCun, developed the Convolutional Neural Network (CNN) at Bell Labs, enabled by the backpropagation algorithm and years of research into computer vision tasks. Around the same time, Hochreiter and Schmidhuber invented the long short-term memory unit (LSTM), a form of memory cell that is still used today for sequence modeling.

ANNs still had a way to go. Computing and storage limitations prevented these networks from scaling, and other methods such as support vector machines (SVMs) were developed as alternatives.

Deep learning and the future – 2012–Present

AI has made further strides in the past several years than in the 60-odd years since its birth. Its popularity has been further fueled by the increasingly public nature of its benefits: self-driving cars, personal assistants, and its ever more ubiquitous use in social media and advertising. For most of its history, AI was a field with little interaction with the average populace, but now it has come to the forefront of international discourse.

Today's age of AI has been the result of three trends:

  • The increasing amount of data and computing power available to AI researchers and practitioners
  • Ongoing research by Geoffrey Hinton and his lab at the University of Toronto into deep neural networks
  • Increasingly public applications of AI that have driven adoption and further acceptance into mainstream technology culture

Today, companies, governments, and other organizations have benefited from the big data revolution of the mid 2000s, which has brought us a plethora of data stores. At last, AI applications have the requisite data to train on. Computational power is cheap and only getting cheaper.

On the research front, in 2012, Hinton and two of his students were finally able to show that deep neural networks outperform all other methods in image recognition, winning the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). The modern era of AI was born.

Interestingly enough, Hinton's team's work on computer vision also popularized the idea of utilizing Graphics Processing Units (GPUs) to train deep networks, along with two techniques that have become cornerstones of deep learning: dropout and the ReLU (rectified linear unit) activation. We'll discuss these in the coming chapters. Today, Hinton is the most cited AI researcher on the planet. He is a lead data scientist at Google Brain and has been tied to many major developments in AI in the modern era.
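Both techniques are simple to state in code. The sketch below (a conceptual illustration, not the original 2012 implementation) shows ReLU zeroing negative activations, and inverted dropout randomly silencing units during training while rescaling the survivors so expected activations stay the same:

```python
import numpy as np

def relu(z):
    """Rectified linear unit: max(0, z), element-wise."""
    return np.maximum(0.0, z)

def dropout(a, p=0.5, seed=0):
    """Inverted dropout: drop each unit with probability p at
    training time, scaling the survivors by 1/(1-p)."""
    rng = np.random.default_rng(seed)
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
a = relu(z)
print(a.tolist())           # [0.0, 0.0, 0.0, 0.5, 2.0]
print(dropout(a).tolist())  # each surviving unit doubled, the rest zeroed
```

At test time, dropout is simply switched off; thanks to the 1/(1-p) rescaling, no further correction is needed.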

AI was further thrown into the public sphere when, in 2011, IBM Watson defeated the world Jeopardy champions, and when, in 2016, Google's AlphaGo defeated Lee Sedol, one of the world's strongest players, at one of the most challenging games known to man: Go.

Today, we are closer than ever to having machines that can pass the Turing test. Networks are able to generate ever more realistic imitations of speech, images, and writing. Reinforcement learning methods and Ian Goodfellow's generative adversarial networks (GANs) have made incredible strides. Recently, emerging research has begun to demystify the inner workings of deep neural networks. As the field progresses, however, we should all be mindful of overpromising. Throughout AI's history, companies have often overpromised regarding what it can do, and in turn we've seen consistent disappointment in its abilities. Focusing the abilities of AI on only certain applications, and continuing to view research in the field from a biological perspective, will only hurt its advancement going forward. In this book, however, we'll see that today's practical applications are directed and realistic, and that the field is making more strides toward true AI than ever before.

Summary

Since its beginnings in the 1940s and 1950s, AI has made great strides. Many of the technologies and ideas that we are utilizing today are directly based on these early discoveries. Over the course of the latter half of the 20th century, pioneers such as Geoffrey Hinton have pushed AI forward through booms and busts. Today, we are on track to achieve sustained AI development for the foreseeable future.

The development of AI technology has been closely aligned with the development of new hardware and increasingly large data sources. As we'll see throughout this book, great AI applications are built with data constraints and hardware optimization in mind. The next chapter will introduce you to the fundamentals of machine learning and AI. We will also cover probability theory, linear algebra, and other elements that will lay the groundwork for the future chapters.


Key benefits

  • Enter the world of AI with the help of solid concepts and real-world use cases
  • Explore AI components to build real-world automated intelligence
  • Become well versed with machine learning and deep learning concepts

Description

Virtual Assistants, such as Alexa and Siri, process our requests, Google's cars have started to read addresses, and Amazon's prices and Netflix's recommended videos are decided by AI. Artificial Intelligence is one of the most exciting technologies and is becoming increasingly significant in the modern world. Hands-On Artificial Intelligence for Beginners will teach you what Artificial Intelligence is and how to design and build intelligent applications. This book will teach you to harness packages such as TensorFlow in order to create powerful AI systems. You will begin with reviewing the recent changes in AI and learning how artificial neural networks (ANNs) have enabled more intelligent AI. You'll explore feedforward, recurrent, convolutional, and generative neural networks (FFNNs, RNNs, CNNs, and GNNs), as well as reinforcement learning methods. In the concluding chapters, you'll learn how to implement these methods for a variety of tasks, such as generating text for chatbots, and playing board and video games. By the end of this book, you will be able to understand exactly what you need to consider when optimizing ANNs and how to deploy and maintain AI applications.

Who is this book for?

This book is designed for beginners in AI, aspiring AI developers, as well as machine learning enthusiasts with an interest in leveraging various algorithms to build powerful AI applications.

What you will learn

  • Use TensorFlow packages to create AI systems
  • Build feedforward, convolutional, and recurrent neural networks
  • Implement generative models for text generation
  • Build reinforcement learning algorithms to play games
  • Assemble RNNs, CNNs, and decoders to create an intelligent assistant
  • Utilize RNNs to predict stock market behavior
  • Create and scale training pipelines and deployment architectures for AI systems

Product Details

Publication date : Oct 31, 2018
Length: 362 pages
Edition : 1st
Language : English
ISBN-13 : 9781788991063




Table of Contents

14 Chapters
The History of AI
Machine Learning Basics
Platforms and Other Essentials
Your First Artificial Neural Networks
Convolutional Neural Networks
Recurrent Neural Networks
Generative Models
Reinforcement Learning
Deep Learning for Intelligent Agents
Deep Learning for Game Playing
Deep Learning for Finance
Deep Learning for Robotics
Deploying and Maintaining AI Applications
Other Books You May Enjoy
