Cognitive Computing with IBM Watson

You're reading from Cognitive Computing with IBM Watson: Build smart applications using artificial intelligence as a service

Product type: Paperback
Published in: Apr 2019
Publisher: Packt
ISBN-13: 9781788478298
Length: 256 pages
Edition: 1st
Authors (2): Tanmay Bakshi, Robert High
Table of Contents (11 chapters)

Preface
1. Background, Transition, and the Future of Computing (FREE CHAPTER)
2. Can Machines Converse Like Humans?
3. Computer Vision
4. This Is How Computers Speak
5. Expecting Empathy from Dumb Computers
6. Language - How Watson Deals with NL
7. Structuring Unstructured Content Through Watson
8. Putting It All Together with Watson
9. Future - Cognitive Computing and You
10. Another Book You May Enjoy

Transitioning from conventional to cognitive computing

Currently, the world of computing is undergoing a massive shift toward machine learning technology. This shift has become a necessity due to the massive rise in the volume and complexity of data, and the availability of ever-greater computing power.

This new computing paradigm is all about finding patterns in data so complex that the problems built on it were long deemed unsolvable by computers: problems that are trivial to humans, even children, such as understanding natural language and playing games like chess and Go. A new kind of algorithm was needed, one that understands data the way a biological neural network does. This new approach to computing is known as cognitive computing.

IBM realized the potential of machine learning even before it went mainstream and created Watson, a set of tools that we, as developers, can use to incorporate cognitive computing into our applications without implementing that technology manually.

Limitations of conventional computing

Traditionally, computers have been good at one thing: mathematical logic. They're amazing at processing mathematical operations at a rate many orders of magnitude faster than any human could ever be. However, that in itself is a huge limitation, as computers have been designed in such a way that they can't work with data unless we can express, as a set of mathematical operations, an algorithm that actually understands that data.

Therefore, tasks that humans find simple, such as understanding natural language, visual information, and auditory information, are practically impossible for computers to perform. Why? Well, let's take a look at the sentence I shot an elephant in my pyjamas.

What does that sentence mean? At first glance, you'd probably say it means that a person, clad in their pyjamas, is taking a photograph of an elephant. However, the sentence is ambiguous; it raises questions such as Is the elephant wearing the pyjamas? and Is the human hunting the elephant? There are many different ways we could interpret it.
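To make the ambiguity concrete, we can count how many parse trees the sentence has under a toy grammar. The grammar and parse-counting sketch below are illustrative assumptions (they are not from Watson or any production NLP system); the grammar is the classic two-way prepositional-attachment version of this example:

```python
# Toy context-free grammar: uppercase keys are non-terminals,
# anything not in GRAMMAR is a literal word.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "PP":  [["P", "NP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"], ["I"]],
    "VP":  [["V", "NP"], ["VP", "PP"]],
    "Det": [["an"], ["my"]],
    "N":   [["elephant"], ["pyjamas"]],
    "V":   [["shot"]],
    "P":   [["in"]],
}

SENT = "I shot an elephant in my pyjamas".split()

def count_sym(sym, i, j, memo):
    """Number of parse trees deriving the words SENT[i:j] from sym."""
    if sym not in GRAMMAR:                        # terminal word
        return 1 if j - i == 1 and SENT[i] == sym else 0
    key = (sym, i, j)
    if key not in memo:
        memo[key] = sum(count_seq(rhs, i, j, memo) for rhs in GRAMMAR[sym])
    return memo[key]

def count_seq(symbols, i, j, memo):
    """Number of ways a sequence of symbols can cover SENT[i:j]."""
    if not symbols:
        return 1 if i == j else 0
    head, rest = symbols[0], symbols[1:]
    total = 0
    # every symbol consumes at least one word (the grammar has no empty rules)
    for k in range(i + 1, j - len(rest) + 1):
        n = count_sym(head, i, k, memo)
        if n:
            total += n * count_seq(rest, k, j, memo)
    return total

print(count_sym("S", 0, len(SENT), memo={}))  # → 2
```

The two parses correspond to in my pyjamas attaching either to the verb (the shooter is in pyjamas) or to the noun (the elephant is). Nothing in the grammar alone can choose between them; resolving that requires exactly the kind of context discussed here.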

However, if we take into account the fact that the speaker is Tom, that Tom is a photographer, that pyjamas are usually associated with humans, and that elephants, and animals in general, don't usually wear clothes, then we can understand the sentence the way it's meant to be understood.

The contextual resolution that goes into understanding the sentence is something that comes naturally to us humans. Natural language is something we're built to be great at understanding; it's quite literally encoded within our Forkhead box protein P2 (FOXP2) gene. It's an innate ability of ours.

There's evidence that natural language is encoded within our genes, even down to the way it's structured. Even though different languages were developed from scratch by different cultures in complete isolation from one another, they share the same very basic underlying structure, such as nouns, verbs, and adjectives.

But there's a problem: there's a (sometimes unclear) difference between knowledge and understanding. For example, we know how to ride a bike, but we don't necessarily understand how we ride a bike. All of the balancing, gyroscopic movement, and tracking is a very complex algorithm that our brain runs without our even realizing it. If we were to ask someone to write down all the mathematical operations that go into riding a bike, it would be next to impossible for them to do so, unless they're a physicist. You can find out more about this algorithm, the distinction between knowledge and understanding, how the human mind adapts, and more, in this video by SmarterEveryDay on YouTube: https://www.youtube.com/watch?v=MFzDaBzBlL0.

Similarly, we know how to understand natural language, but we don't fully understand the extremely complex algorithm that goes into understanding it.

Since we don't understand that algorithm, we cannot express it mathematically; hence, computers cannot understand natural language data until we provide them with the algorithms to do so.

Similar logic applies to visual data, auditory data, and practically any other kind of information that we, as humans, are naturally good at recognizing but are simply unable to create algorithms for.

There are also some cases in which neither humans nor computers can work well with the data. In the majority of cases, this is high-diversity tabular data with many features. A great example is fraud detection data, in which we have lots of features: location, price, category of purchase, and time of day, just to name a few. At the same time, there is a lot of diversity. Someone could buy a plane ticket just once a year for a vacation, and it wouldn't be a fraudulent purchase, as it was made by the owner of the card with a clear intention.

Because of the high diversity, the high feature count, and the fact that it's better to be safe than sorry when it comes to fraud detection, there are numerous points at which a user could get frustrated while working with such a system. A real-life example: I was trying to order an iPhone on launch day. As this was a very rushed ordeal, I tried to add my card to Apple Pay beforehand. Since I was using a different verification method than the default, my card provider's algorithm thought someone was committing fraud and locked down my account. Fortunately, I still ended up getting the phone on launch day, using another card.

In other cases, these systems fail altogether, especially against social engineering tricks, such as connecting with other humans on a personal level and psychologically manipulating them into trusting us, in order to get into people's accounts.

Solving conventional computing's problems

To solve these problems with conventional computing, we use machine learning (ML) technology.

However, we need to keep in mind one distinction: the difference between machine learning and artificial intelligence (AI).

By the very barebones definition, AI is a term for replicating organic, or natural, intelligence (that is, the human mind) within a computer. Up until now, this has been an impossible feat due to numerous technical and physical limitations.

However, the term AI is usually confused with many other kinds of systems. Usually, the term is used for any computer system that displays the ability to do something that we thought required human intelligence.

For example, IBM's Deep Blue is the machine that played chess against the world champion, Garry Kasparov, in 1997, and won. This is not artificial intelligence, as it doesn't understand how to play chess, nor does it learn how to play the game. Rather, humans hardcoded the rules of chess, and the algorithm plays like this:

  • For this current chess board, what are all the possible moves I could make?
  • For all of those boards, what are all the moves my opponent could make?
  • For all of those boards, what are all the possible moves that I could make?

It does this over and over, until it has a tree of almost every chess board possible in the game. Then, it chooses the move that, in the end, has the lowest likelihood of losing and the highest likelihood of winning for the computer.
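Deep Blue's real search was far more sophisticated (alpha-beta pruning, handcrafted evaluation functions, custom hardware), but the exhaustive look-ahead described above can be sketched on a much smaller game. The following minimal Python sketch applies the same minimax reasoning to a hypothetical stand-in game, not chess: players alternately take 1 to 3 stones, and whoever takes the last stone wins.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones):
    """+1 if the player to move can force a win, -1 otherwise."""
    if stones == 0:
        return -1  # the opponent just took the last stone, so we lost
    # try every legal move; the opponent's best outcome is negated for us
    return max(-best_outcome(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """The move that maximizes our guaranteed outcome."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: -best_outcome(stones - take))

print(best_outcome(21), best_move(21))  # → 1 1
```

Just as with the chess tree, no learning is involved: the rules are hardcoded, and the program simply enumerates every future position. The game's known theory (a pile that's a multiple of 4 is a loss for the player to move) is rediscovered purely by brute force.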

You could call this a rule-based system, and it's a stark contrast to what AI truly is.

On the other hand, a specific type of AI, ML, gets much closer to what we think of as AI. We like to define it as creating mathematical models that transform input data into predictions. Imagine being able to represent, mathematically, the method by which you determine whether a set of pixels contains a cat or a dog!

In essence, instead of us humans trying our best to quantify different concepts as mathematical algorithms, the machine can do it for us. In theory, it's a set of mathematics that can adapt to approximate any other mathematical function when given enough time, energy, and data.
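As a drastically simplified sketch of that idea, the following pure-Python example learns the two parameters of a line, w*x + b, from example data by gradient descent instead of having them hardcoded. The data and learning rate here are made up purely for illustration:

```python
# Training examples generated from the hidden rule y = 3x + 1;
# the program is never told the rule, only the examples.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3.0 * x + 1.0 for x in xs]

w, b, lr = 0.0, 0.0, 0.02   # initial guess and learning rate
for _ in range(5000):
    # gradients of the mean squared error with respect to w and b
    gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    gb = sum(2 * (w * x + b - y)     for x, y in zip(xs, ys)) / len(xs)
    # nudge both parameters downhill
    w, b = w - lr * gw, b - lr * gb

print(round(w, 2), round(b, 2))  # → 3.0 1.0
```

The same loop, scaled up to millions of parameters and nonlinear models, is the essence of modern machine learning: the model adapts its mathematics to fit the data, rather than a human writing the rule down.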

A perfect example of machine learning in action is IBM's DeepQA algorithm, which powered Watson when it played Jeopardy! and won against two of the best human competitors the game show has seen, Ken Jennings and Brad Rutter. Jeopardy! is a game with puns, riddles, and wordplay in each clue, such as This trusted friend was the first non-dairy powdered creamer.

If we were to analyze this from a naive perspective, we'd conclude that the word friend, which is usually associated with humans, simply cannot be related to a creamer, which has the attributes first, powdered, and non-dairy. However, if you understand the wordplay behind it, you'll realize the answer is What is Coffee-mate?, since a mate is a trusted friend, and Coffee-mate was the first non-dairy powdered creamer.

Therefore, machine learning is essentially a set of algorithms that, when combined with other systems, such as rule-based systems, could theoretically help us simulate the human mind within a computer. Whether or not we'll ever get there is another discussion altogether, considering the physical limitations of the hardware and architecture of computers themselves. However, we believe that not only will we not reach this stage, but also that it's something we wouldn't want to do in the first place.
