Cognitive Computing with IBM Watson
Build smart applications using artificial intelligence as a service

Product type: Paperback
Published in: Apr 2019
Publisher: Packt
ISBN-13: 9781788478298
Length: 256 pages
Edition: 1st Edition
Authors (2): Tanmay Bakshi, Robert High
Table of Contents

Preface
1. Background, Transition, and the Future of Computing
2. Can Machines Converse Like Humans?
3. Computer Vision
4. This Is How Computers Speak
5. Expecting Empathy from Dumb Computers
6. Language - How Watson Deals with NL
7. Structuring Unstructured Content Through Watson
8. Putting It All Together with Watson
9. Future - Cognitive Computing and You
10. Another Book You May Enjoy

Workings of machine learning

ML is itself an umbrella term: it can be implemented in many different ways, including K-means clustering, logistic regression, linear regression, support vector machines, and many more. In this book, we'll mainly focus on one type of machine learning: artificial neural networks (ANNs).

ANNs, or neural networks for short, are a set of techniques, some of which are referred to as deep learning. They are a type of machine learning algorithm that is, at a very high level, inspired by the structure of our biological nervous systems. By high level, we mean that the algorithms are nowhere near the same; as a matter of fact, we barely understand how our nervous system learns in the first place. Even the part that was inspired by our nervous system, its structure, is still primitive. While your brain may have hundreds of different kinds of neurons arranged in a web with over 100 trillion synapses, ANNs, so far, have only a handful of different kinds of neurons arranged in a layered formation, with, at most, a few hundred million artificial synapses.
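To make the layered formation concrete, here is a minimal sketch of a feedforward network's forward pass in plain NumPy. The layer sizes, random weights, and sigmoid activation are illustrative choices of ours, not anything specific to Watson or the systems discussed later:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into the range (0, 1), a common
    # choice for an artificial neuron's activation function.
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w1, b1, w2, b2):
    # Each hidden "neuron" sums its weighted inputs (the artificial
    # synapses) and applies the activation; the output layer does the same.
    hidden = sigmoid(x @ w1 + b1)
    return sigmoid(hidden @ w2 + b2)

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)  # 3 inputs -> 4 hidden neurons
w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # 4 hidden -> 1 output neuron

y = forward(np.array([[0.5, -1.0, 2.0]]), w1, b1, w2, b2)
print(y.shape)  # a single output activation between 0 and 1
```

Training would then consist of adjusting the weights `w1` and `w2` so that the outputs match the data, which is exactly what the learning methods described next accomplish.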

Machine learning algorithms, including ANNs, learn in the following two ways:

  • Supervised learning: This method of learning allows the machine to learn by example. The computer is shown numerous input-output pairs, and it learns how to map input to output, even for inputs it has never seen before. Since supervised learning systems require both input and output data to learn mappings, collecting data for them is typically more difficult. If you'd like to train a supervised learning system to detect cats and dogs in photos, you'd need massive, hand-labeled datasets of cat and dog images to train the algorithm on.
  • Unsupervised learning: This method of learning allows the machine to learn entirely on its own. It's shown only a set of data, and it tries to learn representations that fit that data; it can then represent new data that it has never seen before. Because only input data is required, data collection for unsupervised learning is typically easier. You'll see some examples toward the end of the book.

You can also combine these methods into a semi-supervised machine learning approach, depending on the individual use case.
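The two learning modes above can be sketched with toy data. The least-squares line fit (supervised) and the naive two-cluster update (unsupervised) below are illustrative stand-ins of our own choosing, not the algorithms any production system uses:

```python
import numpy as np

# Supervised: labeled input-output pairs (e.g. hours studied -> exam score).
# The machine learns the mapping and generalizes to an unseen input.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([52.0, 61.0, 70.0, 79.0])
slope, intercept = np.polyfit(x, y, 1)   # learn the input->output mapping
prediction = slope * 5.0 + intercept     # predict for an input never seen

# Unsupervised: inputs only, no labels. The machine finds structure
# (here, two clusters) in the data by itself.
points = np.array([0.1, 0.2, 0.15, 5.0, 5.2, 4.9])
centers = np.array([points.min(), points.max()])
for _ in range(10):  # naive 2-means updates
    labels = np.abs(points[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([points[labels == k].mean() for k in (0, 1)])

print(round(prediction, 1), centers.round(2))
```

Note how the supervised half needed every input paired with a known answer, while the unsupervised half discovered the two groups from the raw inputs alone, which is why its data is cheaper to collect.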

Machine learning and its uses 

Machine learning technology surrounds our everyday lives, even when we don't realize it. The following are a few examples of how ML makes our everyday lives easier:

  • Netflix: Whenever you watch a certain show on Netflix, it's constantly learning about you, your profile, and the types of shows you like to watch. Out of its database of available movies and shows, it can recommend certain ones that it practically knows that you're going to like.
  • Amazon: As soon as you view, search for, or buy a product, Amazon's open source DSSTNE AI is tracking you, and it will try to recommend new products that you may want to buy. It won't just recommend similar products in the same category or by the same brand; it gets down to intricate details when suggesting products, such as what others bought after viewing this product and the specifications of those products.
  • Siri: Nowadays, Apple's Siri isn't just a personal assistant; it analyzes practically everything you do on your phone to make your life more efficient. It recommends apps that you may want to launch right on the lock screen, Face ID enables 3D facial recognition in an instant on the Neural Engine (a mobile neural network ASIC), and Siri shortcuts predict applications that you may want to open, or other media that you may want to take a look at.
  • Tesla Autopilot: When you get on the highway in your Tesla, your hands are probably no longer on the steering wheel, because you let Autopilot take over. Using AI, your car is able to drive itself, for example by maintaining a preset distance between your car and the car ahead, with a consistency that human drivers struggle to match.
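As a rough illustration of the "people who viewed this also bought that" idea behind the Netflix and Amazon examples above, here is a toy item-item recommender. The interaction matrix and the cosine-similarity approach are illustrative assumptions of ours, not how DSSTNE or Netflix's recommender actually works:

```python
import numpy as np

# Rows = users, columns = items; 1 means the user interacted with the item.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# Cosine similarity between item columns: items bought by the same
# users end up with high similarity scores.
norms = np.linalg.norm(interactions, axis=0)
sim = (interactions.T @ interactions) / np.outer(norms, norms)
np.fill_diagonal(sim, 0.0)  # never recommend an item to itself

viewed_item = 0
recommended = int(sim[viewed_item].argmax())  # most similar item to item 0
print(recommended)
```

Real systems learn far richer representations of users and items, but the core intuition is the same: overlap in past behavior predicts future interest.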

Cons of machine learning 

The big bad machine is taking over! This is simply untrue. In fact, this is why IBM doesn't talk about this tech as artificial intelligence but rather as augmented intelligence. It's a method of computing that extends our cognitive ability, and enhances our reasoning capabilities, whereas artificial intelligence sounds a lot more like a true, simulated intelligence.

Whenever the term AI is used in this book, we're referring to augmented intelligence, unless otherwise stated.

There are two reasons why the majority of people believe that machine learning is here to take over humanity: bias, and a lack of understanding.

The bare-bones principles of AI existed long before most of us were even born. However, even as those principles emerged, and before we truly understood what AI can and can't do, people started writing books and producing movies about computers taking over (for example, The Terminator, and HAL 9000 in 2001: A Space Odyssey). This is the bias piece: it's hard for people to put these images out of their minds before they look at the reality of the technology, that is, what machines can and cannot do from an architectural standpoint in the first place.

Also, from the surface, AI looks like a very complex technology. All the mathematics and algorithms behind it look like a magical black box to most people. Because of this lack of understanding, people succumb to the aforementioned bias.

The primary fear that the general public has of AI is certainly the singularity: the point of no return, wherein AI becomes self-aware, conscious in a way, and so intelligent that it transcends to a level at which we can't understand what it does or why it does it. However, with the current fundamentals of computing itself, this result is simply impossible. Let's see why with the following example.

Even as humans, we technically aren't conscious; it's only an illusion created by the very complex way our brain processes, saves, and refers back to information. Take this example: we all think that we process information by perceiving it. We look at an object and consciously perceive it, and that perception allows us or our consciousness to process it. However, this isn't true.

Let's say that we have a blind person with us. We ask them, "Are you blind?", and of course they'd say "Yes", since they can consciously perceive that they can't see. So far, this fits the hypothesis that most people have, as stated previously.

However, let's say we have a blind person with Anton-Babinski syndrome, and we ask them, "Are you blind?" They affirm that they can see. Then we ask them, "How many fingers am I holding up?", and they reply with a random number. We ask them why they replied with that number, and they confabulate a response. Seems weird, doesn't it?

The question that arises is this: if the person can't see, why don't they consciously realize that they're blind? There are several theories, the prevailing one being that the visual input center of the brain isn't telling the rest of the brain anything at all. It isn't even telling the brain that there is no visual input! Because of this, the rest of the neural network in the brain gets confused. This suggests that there's a separation, a clear distinction, between the part of the brain that processes information and the part that consciously perceives that information, or, at least, forms the illusion of perception.

You can learn more about Anton-Babinski syndrome at the following link: https://en.wikipedia.org/wiki/Anton%E2%80%93Babinski_syndrome.

And here's a link to a YouTube video from Vsauce that talks about consciousness and what it truly is: (https://www.youtube.com/watch?v=qjfaoe847qQ).

And, of course, the entire Vsauce LEANBACK: (https://www.youtube.com/watch?v=JoR0bMohcNo&list=PLE3048008DAA29B0A).

There's even more evidence hinting that consciousness isn't truly what we think it is: the theory of mind.

You may have heard of Koko the gorilla. She was trained in sign language so that she could communicate with humans. However, researchers noticed something very interesting about Koko and other animals trained to communicate with humans: they don't ask questions.

This is mostly because animals don't have a theory of mind. While they may be self-aware, they aren't aware of their awareness; they aren't meta-cognizant. They don't realize that others also have a separate awareness and mind. This is an ability that, so far, we've only seen in humans.

In fact, some very young children, under four years old, don't display this theory of mind. It's usually tested with the Sally-Anne test, which goes a little something like this:

  1. The child is shown two dolls. Their names are Sally and Anne.
  2. Sally and Anne are in a room. Sally has a basket, and Anne has a box.
  3. Sally has a marble, and she puts it in the basket.
  4. Sally goes for a walk outside.
  5. Anne takes the marble from Sally's basket, and puts it in her own box.
  6. Sally comes back from her walk, and she wants her marble. Where would Sally look for it?

If the child answers with the box, then they don't have that theory of mind. They don't realize that Sally and Anne (the dolls in this case) have separate minds, points of view. If they answer with the basket, then they realize that Sally doesn't know that Anne moved the marble from the basket to the box; they have a theory of mind.

When you put all of this together, it really starts to seem that consciousness, in the way that we think about it, really doesn't exist. It only exists as an extremely complex illusion put together by various factors, including memory and sense of time, language, self-awareness, and infinitely recursive meta-cognition, which is basically thinking about the thought itself, in an infinite loop.

To add on top of that, we don't understand how our brains piece together such complex illusions in the first place. We also have to realize that any problems we face with classical computing, due to the very fundamentals of computing itself, apply here as well. We're dealing with math, not fundamental quantum information. Math is a human construct, built to understand, formally recognize, agree upon, and communicate the rules of the universe we live in. Realizing this, if we were to write down every single mathematical operation behind an ANN and, over the course of decades, work through the results manually on paper, would you consider the paper, the pen, or the calculator conscious? We'd say not! So then, why would we consider an accelerated version of this, on a computer, conscious, or capable of self-awareness?

There is one completely rational fear of machine learning, though: that humans themselves will train computers to do negative things. This is true, and it will happen. There is no way to regulate the usage of this technology; it's just math and algorithms, and if you ban its usage, someone will simply implement it from scratch and use their own implementation. It's like banning the usage and purchase of guns, swords, or fire: people will build their own. Building a gun may be very difficult, but building AI is relatively easy, thanks to the vast amount of source code, research papers, and more that have already been published on the internet.

However, we have to trust that, as with all other technologies humans have developed, ML will be used for good, for bad, and to prevent people from using it for bad. People will use ML to create cyber threats that disguise themselves from antivirus software, but AI systems can, in turn, detect those cyber threats by using ML.

We've seen that people have used, and will continue to use, ML to create fake videos of people doing whatever they want them to. For example, start-ups like Lyrebird create fake audio, and other start-ups create fake videos of Barack Obama saying anything they want him to say. However, there are still very subtle patterns that let us detect whether a video is real or fake: patterns that humans and conventional algorithms simply cannot detect, but that ML technology can.
