The big bad machine is taking over! This is simply untrue. In fact, this is why IBM doesn't talk about this tech as artificial intelligence but rather as augmented intelligence. It's a method of computing that extends our cognitive ability, and enhances our reasoning capabilities, whereas artificial intelligence sounds a lot more like a true, simulated intelligence.
Whenever the term AI is used in this book, we're referring to augmented intelligence, unless otherwise stated.
There are two reasons why the majority of people believe that machine learning is here to take over humanity: bias, and a lack of understanding.
The bare-bones principles of AI existed long before most of us were even born. However, even as those principles came about, and before we truly understood what AI can and can't do, people started writing books and producing movies about computers taking over (for example, The Terminator, HAL, and more). This is the bias piece: a preconception that's hard for people to set aside before they look at the reality of the technology, namely what machines can and cannot do from an architectural standpoint in the first place.
From the surface, AI also looks like a very complex technology. All the mathematics and algorithms behind it look like a magical black box to most people. Because of this lack of understanding, people succumb to the aforementioned bias.
The primary fear that the general public has of AI is certainly the singularity: the point of no return, wherein AI becomes self-aware, conscious in a way, and so intelligent that it transcends to another level at which we can't understand what it does or why it does it. However, with the current fundamentals of computing itself, this result is simply impossible. Let's see why with the following example.
Even as humans, we technically aren't conscious; it's only an illusion created by the very complex way our brain processes, saves, and refers back to information. Take this example: we all think that we process information by perceiving it. We look at an object and consciously perceive it, and that perception allows us or our consciousness to process it. However, this isn't true.
Let's say that we have a blind person with us. We ask them, "Are you blind?" and of course they'd say yes, since they can consciously perceive that they can't see. So far, this fits the hypothesis that most people have, as stated previously.
However, let's say we have a blind person with Anton-Babinski syndrome, and we ask them, "Are you blind?" They affirm that they can see. Then we ask them, "How many fingers am I holding up?" and they reply with a random number. When we ask them why they replied with that number, they confabulate a response. Seems weird, doesn't it?
The question that arises is this: if conscious perception works the way we assume it does, why doesn't the person realize that they're blind? There are some theories, the prevailing one stating that the visual input center of the brain isn't telling the rest of the brain anything at all. It's not even telling the brain that there is no visual input! Because of this, the rest of the neural network in the brain gets confused. This suggests that there's a separation, a clear distinction, between the part of the brain that deals with the processing of information and the part that deals with the conscious perception of that information, or at least forms that illusion of perception.
You can learn more about Anton-Babinski syndrome at the following link: (https://en.wikipedia.org/wiki/Anton%E2%80%93Babinski_syndrome).
And here's a link to a YouTube video from Vsauce that talks about consciousness and what it truly is: (https://www.youtube.com/watch?v=qjfaoe847qQ).
And, of course, the entire Vsauce LEANBACK: (https://www.youtube.com/watch?v=JoR0bMohcNo&list=PLE3048008DAA29B0A).
There's even more evidence hinting that consciousness isn't truly what we think it is: the theory of mind.
You may have heard of Koko the Gorilla. She was trained in sign language so that she could communicate with humans. However, researchers noticed something very interesting in Koko and other animals that were trained to communicate with humans: they don't ask questions.
This is mostly because animals don't have a theory of mind. While they may be self-aware, they aren't aware of that awareness: they aren't meta-cognizant. They don't realize that others also have a separate awareness and mind. This is an ability that, so far, we've only seen in humans.
In fact, some very young humans, those under four years old, don't display this theory of mind. It's usually tested with the Sally-Anne test, which goes a little something like this:
- The child is shown two dolls. Their names are Sally and Anne.
- Sally and Anne are in a room. Sally has a basket, and Anne has a box.
- Sally has a marble, and she puts it in the basket.
- Sally goes for a walk outside.
- Anne takes the marble from Sally's basket, and puts it in her own box.
- Sally comes back from her walk, and she wants her marble. Where would Sally look for it?
If the child answers with the box, then they don't have a theory of mind; they don't realize that Sally and Anne (the dolls, in this case) have separate minds and points of view. If they answer with the basket, then they realize that Sally doesn't know that Anne moved the marble from the basket to the box; they have a theory of mind.
When you put all of this together, it starts to seem that consciousness, in the way that we think about it, really doesn't exist. It only exists as an extremely complex illusion put together by various factors, including memory and a sense of time, language, self-awareness, and infinitely recursive meta-cognition, which is basically thinking about the thought itself, in an infinite loop.
On top of that, we don't understand how our brains are able to piece together such complex illusions in the first place. We also have to realize that any problems we face with classical computing, due to the very fundamentals of computing itself, will apply here as well. We're dealing with math, not fundamental quantum information. Math is a human construct, built to understand, formally recognize, agree upon, and communicate the rules of the universe we live in. Realizing this, if we were to write down every single mathematical operation behind an ANN and, over the course of decades, work through the results manually on paper, would you consider the paper, the pen, or the calculator conscious? We'd say not! So then, why would we consider an accelerated version of this, on a computer, conscious, or capable of self-awareness?
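To make the pen-and-paper argument concrete, here is a minimal sketch of a single ANN layer written in plain Python, with no libraries, so that every step is visible as an ordinary multiplication or addition. The layer shape, weights, and inputs are made-up values chosen purely for illustration:

```python
def relu(x):
    # Rectified linear unit: replace negative values with zero.
    return x if x > 0 else 0.0

def forward(inputs, weights, biases):
    """One dense layer: each neuron's output is a weighted sum plus a bias,
    passed through an activation function. Nothing but multiply and add."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = bias
        for x, w in zip(inputs, neuron_weights):
            total += x * w           # multiply and accumulate
        outputs.append(relu(total))  # then apply the activation
    return outputs

# Two inputs feeding a layer of two neurons (arbitrary example values).
inputs = [1.0, 2.0]
weights = [[0.5, -1.0],   # weights into neuron 1
           [0.25, 0.75]]  # weights into neuron 2
biases = [0.1, -0.2]

print(forward(inputs, weights, biases))
```

Every line of this could be carried out by hand with pen, paper, and a calculator; the computer only does the same arithmetic faster.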
There is one completely rational fear of machine learning, though: that humans themselves will train computers to do harmful things. This is true, and it will happen. There is no way to regulate the usage of this technology. It's a set of math, an algorithm, and if you ban its usage, someone will just implement it from scratch and use their own implementation. It's like banning the usage and purchase of guns, swords, or fire: people will simply build their own. It's just that building a gun may be very difficult, whereas building AI is comparatively easy, thanks to the vast amount of source code, research papers, and more that has already been published on the internet.
However, we have to trust that, as with all other technologies that humans have developed, ML will be used for good, for bad, and to prevent others from using it for bad. People will use ML to create cyber threats that disguise themselves from antivirus software, but AI systems can, in turn, detect those cyber threats by using ML.
We've seen that people have used, and will continue to use, ML to create fake videos of people doing whatever they want them to. For example, start-ups like Lyrebird create fake audio, and other start-ups create fake videos of Barack Obama saying anything they want him to say. However, there are still very subtle patterns that let us detect whether a video is real or fake: patterns that humans and conventional algorithms simply cannot detect, but ML technology can.