Cognitive Computing with IBM Watson: Build smart applications using artificial intelligence as a service

By Robert High and Tanmay Bakshi

Background, Transition, and the Future of Computing

Welcome to the world of Cognitive Computing with IBM Watson. We'll start the book by learning the answers to the following questions:

  • What is AI and why do we need AI? Why can't we just use regular, traditional technologies?
  • What are some examples of transitioning from regular technology to new, AI-based technology?
  • Are there some disadvantages to AI technology, and can it be used in a negative fashion?
  • How can I get started developing with IBM Cloud?
  • What do I need in terms of hardware and software to work through this book?

This book will also take us through some of the ways in which machine learning technology itself can be implemented for similar use cases. It assumes you're already somewhat tech-savvy and familiar with application development and programming. We'll be working through implementations in Python, because the Watson Developer Cloud provides language-specific SDKs to access the Watson REST APIs, so the coding experience stays largely consistent, even across languages.

In this chapter, we will discuss the following topics:

  • Transitioning from conventional to cognitive computing
  • Limitations of conventional computing
  • Solving conventional computing problems
  • Workings of machine learning
  • Cons of machine learning
  • Introduction to IBM Watson
  • Hardware and software requirements

Transitioning from conventional to cognitive computing

Currently, the world of computing is undergoing a massive shift toward a new plane altogether: machine learning technology. This shift has become a necessity due to the massive rise in data, its growing complexity, and the availability of ever more computing power.

This new computing paradigm is all about finding patterns in data so complex that its problems were so far deemed unsolvable by computers: problems that are trivial to humans, even children, such as understanding natural language and playing games such as chess and Go. A new kind of algorithm was needed to understand data the way a biological neural network does. This new approach to computing is known as cognitive computing.

IBM realized the potential of machine learning even before it went mainstream, and created Watson, a set of tools that we, the developers, can use in our applications to incorporate cognitive computing without having to implement that technology manually.

Limitations of conventional computing

Traditionally, computers have been good at one thing, and that is mathematical logic. They're amazing at processing mathematical operations at a rate many orders of magnitude faster than any human could ever be, or will ever be, able to. However, that in itself is a huge problem: computers are designed in such a way that they can't work with data unless we can express an algorithm for understanding that data as a set of mathematical operations.

Therefore, tasks that humans find simple, such as understanding natural language or visual and auditory information, are practically impossible for computers to perform. Why? Well, let's take a look at the sentence I shot an elephant in my pyjamas.

What does that sentence mean? Well, if you were to think about it, you'd say that it means a person, clad in their pyjamas, is taking a photograph of an elephant. However, the sentence is ambiguous; we might ask questions such as, Is the elephant wearing the pyjamas?, and Is the human hunting the elephant? There are many different ways in which we could interpret it.

However, if we take into account that the speaker is Tom, that Tom is a photographer, that pyjamas are usually associated with humans, and that elephants and animals in general don't usually wear clothes, then we can understand the sentence the way it's meant to be understood.

The contextual resolution that goes on behind understanding the sentence is something that comes naturally to us humans. Natural language is something we're built to be great at understanding; it's quite literally encoded within our Forkhead box protein P2 (FOXP2) gene. It's an innate ability of ours.

There's evidence that natural language is encoded within our genes, right down to the way it's structured. Even when different languages were developed from scratch by different cultures in complete isolation from one another, they share the same very basic underlying structure, such as nouns, verbs, and adjectives.

But there's a problem: there's a (sometimes unclear) difference between knowledge and understanding. For example, when we ride a bike, we know how to ride a bike, but we don't necessarily understand how we ride a bike. All of the balancing, the gyroscopic movement, and the tracking make up a very complex algorithm that our brain runs, without us even realizing it, whenever we ride a bike. If we were to ask someone to write down all the mathematical operations that go on behind riding a bike, it would be next to impossible for them to do so, unless they're a physicist. You can find out more about this algorithm, the distinction between knowledge and understanding, how the human mind adapts, and more, in this video by SmarterEveryDay on YouTube: https://www.youtube.com/watch?v=MFzDaBzBlL0.

Similarly, we know how to understand natural language, but we don't completely understand the extremely complex algorithm that goes behind understanding it.

Since we don't understand that complex algorithm, we cannot express it mathematically; hence, computers cannot understand natural language data until we provide them with the algorithms to do so.

Similar logic applies to visual data and auditory data, or practically any other kind of information that we, as humans, are naturally good at recognizing, but are simply unable to create algorithms for.

There are also some cases in which neither humans nor computers work well with data. In the majority of cases, this is high-diversity tabular data with many features. A great example of this kind of data is fraud detection data, in which we have lots of features: location, price, category of purchase, and time of day, just to name a few. At the same time, however, there is a lot of diversity. Someone could buy a plane ticket once a year for a vacation; it wouldn't be a fraudulent purchase, as it was made by the owner of the card with a clear intention.
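To make this brittleness concrete, here's a minimal sketch of what a hardcoded fraud rule might look like. The field names and thresholds are invented purely for illustration; no real provider's logic is this simple:

# A hypothetical hardcoded fraud rule: flag any purchase that trips at least
# two "unusual" signals. Field names and thresholds are invented for
# illustration only.

def is_suspicious(purchase, typical_spend, home_country):
    too_expensive = purchase["price"] > 10 * typical_spend
    unusual_place = purchase["country"] != home_country
    unusual_category = purchase["category"] not in {"groceries", "fuel", "dining"}
    # Better safe than sorry: any two signals together block the card.
    return sum([too_expensive, unusual_place, unusual_category]) >= 2

# A legitimate once-a-year vacation booking...
ticket = {"price": 1800.0, "country": "JP", "category": "air travel"}

# ...gets flagged, because static rules can't model intent or context.
print(is_suspicious(ticket, typical_spend=85.0, home_country="NZ"))  # True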

Because of the high diversity, the high feature count, and the fact that it's better to be safe than sorry when it comes to fraud detection, there are numerous points at which a user could get frustrated while working with such a system. A real-life example: when I was trying to order an iPhone on launch day, it was a very rushed ordeal, so I tried to add my card to Apple Pay beforehand. Since I was adding my card to Apple Pay with a different verification method than the default, my card provider's algorithm thought someone was committing fraud and locked down my account. Fortunately, I still ended up getting the phone on launch day, using another card.

In other cases, these systems fail altogether, especially against social engineering tricks, such as connecting with people on a personal level and psychologically manipulating them into trusting us, in order to get into their accounts.

Solving conventional computing's problems

To solve these problems of conventional computing, we use machine learning (ML) technology.

However, we need to remember one distinction between machine learning and artificial intelligence (AI).

By the very bare-bones definition, AI is a term for replicating organic, or natural, intelligence (that is, the human mind) within a computer. Up until now, this has been an impossible feat due to numerous technical and physical limitations.

However, the term AI is usually confused with many other kinds of systems. Usually, the term is used for any computer system that displays the ability to do something that we thought required human intelligence.

For example, IBM's Deep Blue is the machine that played chess against the world champion, Garry Kasparov, in 1997, and won. This is not artificial intelligence, as it doesn't understand how to play chess; nor does it learn how to play the game. Rather, humans hardcode the rules of chess, and the algorithm plays like this:

  • For this current chess board, what are all the possible moves I could make?
  • For all of those boards, what are all the moves my opponent could make?
  • For all of those boards, what are all the possible moves that I could make?

It does this over and over, until it has a tree of almost every chess board possible in the game. Then, it chooses the move that, in the end, has the lowest likelihood of losing and the highest likelihood of winning for the computer.

You can call this a rule-based system, and it's a stark contrast to what AI truly is.
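To see how mechanical that search really is, here's a toy minimax sketch in Python. It plays a deliberately tiny, made-up game (take one or two stones from a pile; whoever takes the last stone wins) rather than chess, and it only illustrates exhaustive game-tree search, not Deep Blue's actual program, which added pruning, handcrafted evaluation functions, and dedicated hardware:

def minimax(pile, our_turn):
    """Exhaustively search a tiny game: take 1 or 2 stones, last stone wins.
    Returns +1 if we can force a win from this position, -1 otherwise."""
    if pile == 0:
        # No stones left: whoever just moved took the last stone and won.
        return -1 if our_turn else +1
    results = [minimax(pile - m, not our_turn) for m in (1, 2) if m <= pile]
    # On our turn we pick the best outcome for us; the opponent picks the worst.
    return max(results) if our_turn else min(results)

# With 4 stones and us to move, taking 1 leaves 3, a losing position for the
# opponent, so perfect play wins. The search confirms it:
print(minimax(4, our_turn=True))  # +1

Note that nothing here learns: change the rules of the game, and a human has to rewrite the code.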

On the other hand, a specific type of AI, ML, gets much closer to what we think of as AI. We like to define it as creating mathematical models that transform input data into predictions. Imagine being able to represent, mathematically, the method by which you determine whether a set of pixels contains a cat or a dog!

In essence, instead of us humans trying our best to quantify different concepts as mathematical algorithms, the machine can do it for us. In theory, it's a set of math that can adapt to fit any other mathematical function, when given enough time, energy, and data.

A perfect example of machine learning in action is IBM's DeepQA algorithm, which powered Watson when it played and won Jeopardy! against the two best human competitors on the show, Ken Jennings and Brad Rutter. Jeopardy! is a game with puns, riddles, and wordplay in each clue; clues such as This trusted friend was the first non-dairy powdered creamer.

If we were to analyze this from a naive perspective, we'd realize that the word friend, which is usually associated with humans, simply cannot be related to a creamer, which has the attributes the first, powdered, and non-dairy. However, if you were to understand the wordplay behind it, you'd realize the answer is What is coffee mate?, since mate means trusted friend, and coffee mate was the first non-dairy powdered creamer.

Therefore, machine learning is essentially a set of algorithms that, when combined with other systems, such as rule-based systems, could theoretically help us simulate the human mind within a computer. Whether or not we'll get there is another discussion altogether, considering the physical limitations of the hardware and architecture of computers themselves. However, we believe that not only will we not reach that stage, but also that it's something we wouldn't want to do in the first place.

Workings of machine learning

ML is still an umbrella term; there are many different ways in which we can implement it, namely K-means clustering, logistic regression, linear regression, support vector machines, and many more. In this book, we'll mainly focus on one type of machine learning: artificial neural networks (ANNs).

ANNs, or neural networks for short, are a set of techniques, some of which can be referred to as deep learning. They're a type of machine learning algorithm that is, at a very high level, inspired by the structure of our biological nervous systems. By high level, we mean that the algorithms are nowhere near the same. As a matter of fact, we barely understand how our nervous system learns in the first place. Even the part that was inspired by our nervous system, its structure, is still primitive: while your brain may have hundreds of different kinds of neurons arranged in a kind of web with over 100 trillion synapses, ANNs, so far, have only a handful of different kinds of neurons arranged in a layered formation, and have, at most, a few hundred million artificial synapses.
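To make that layered formation concrete, here's a minimal sketch of a two-layer feed-forward network, assuming only NumPy is installed. The weights are random and untrained, so the output is meaningless; the point is just the structure, where each layer computes weighted sums followed by a nonlinearity:

import numpy as np

# A minimal two-layer feed-forward network: layers of artificial "neurons",
# each computing a weighted sum of its inputs followed by a nonlinearity.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 4 input features -> 3 hidden neurons -> 1 output neuron
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def forward(x):
    hidden = sigmoid(x @ W1 + b1)   # each hidden neuron "fires" on its inputs
    return sigmoid(hidden @ W2 + b2)

x = np.array([0.2, -1.0, 0.5, 0.1])
print(forward(x))  # an untrained prediction between 0 and 1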

Machine learning algorithms, including ANNs, learn in the following two ways:

  • Supervised learning: This method of learning allows the machine to learn by example. The computer is shown numerous input-output pairs, and it learns how to map input to output, even if it has never seen a certain input before. Since supervised learning systems require both inputs and outputs to learn the mapping, it's typically more difficult to collect data for these systems. If you'd like to train a supervised learning system to detect cats and dogs in photos, you need a massive, hand-labeled dataset of images of cats and dogs to train the algorithm on.
  • Unsupervised learning: This method of learning allows the machine to learn entirely on its own. It's only shown a certain set of data, and it tries to learn representations that fit that data, so it can then represent new data that it has never seen before. Because only input data is required, data collection for unsupervised learning is typically easier. You'll see some examples toward the end of the book.

You can also combine these methods into a semi-supervised machine learning method, depending on the individual use case.
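As a rough illustration of the difference between the two main methods, here's a short sketch assuming scikit-learn is installed, using a made-up toy dataset. The supervised model needs the answers (y) up front; the unsupervised one is given only the inputs:

# A minimal contrast of the two learning styles using scikit-learn.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]

# Supervised: we must also supply the answers (labels) for every sample.
y = [0, 0, 1, 1]
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.85, 0.75]]))  # -> [1], an input it has never seen

# Unsupervised: no labels at all; the algorithm finds structure on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # which cluster each sample was assigned to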

Machine learning and its uses 

Machine learning technology is all around our everyday lives, even when we don't realize it. The following are a few examples of how ML makes our everyday lives easier:

  • Netflix: Whenever you watch a certain show on Netflix, it's constantly learning about you, your profile, and the types of shows you like to watch. Out of its database of available movies and shows, it can recommend certain ones that it practically knows you're going to like.
  • Amazon: Right as you view, search for, or buy a product, Amazon's open source DSSTNE AI is tracking you, and will try to recommend new products that you may want to buy. Plus, it won't just recommend similar products that are in the same category or by the same brand, but it'll get down to the intricate details in suggesting those products to you, such as what others bought after viewing this product, and the specifications of those products.
  • Siri: Nowadays, Apple's Siri isn't just a personal assistant; it analyzes practically everything you do on your phone to make your life more efficient. It recommends apps that you may want to launch right on the lock screen, Face ID performs 3D facial recognition in an instant on the Neural Engine (a mobile neural network ASIC), and Siri Shortcuts predicts applications that you may want to open, or other media that you may want to take a look at.
  • Tesla Autopilot: When you get on the highway in your Tesla, your hands are probably no longer on the steering wheel, because you've let Autopilot take over. Using AI, your car is able to drive itself more safely than a human could, for instance by maintaining a specific preset distance between your car and the one ahead.

Cons of machine learning 

The big bad machine is taking over! This is simply untrue. In fact, this is why IBM doesn't talk about this tech as artificial intelligence, but rather as augmented intelligence: a method of computing that extends our cognitive ability and enhances our reasoning capabilities, whereas artificial intelligence sounds a lot more like a true, simulated intelligence.

Whenever the term AI is used in this book, we're referring to augmented intelligence, unless otherwise stated.

There are two reasons why so many people believe that machine learning is here to take over humanity: bias, and a lack of understanding.

The bare-bones principles of AI have existed since long before most of us were born. However, even as those principles came about, and before we truly understood what AI can and can't do, people started writing books and producing movies about computers taking over (for example, The Terminator, 2001's HAL 9000, and more). This is the bias piece: these stories are hard for people to put out of their minds before they look at the reality of the technology, that is, what machines can and cannot do from an architectural standpoint in the first place.

Also, on the surface, AI looks like a very complex technology. All the mathematics and algorithms that go on behind it look like a magical black box to most people. Because of this lack of understanding, people succumb to the aforementioned bias.

The primary fear that the general public has of AI is certainly the singularity: the point of no return, wherein AI becomes self-aware, conscious in a way, and so intelligent that it transcends to a level at which we can't understand what it does or why it does it. However, with the current fundamentals of computing itself, this outcome is simply impossible. Let's see why with the following example.

Even as humans, we technically aren't conscious; consciousness is only an illusion created by the very complex way our brain processes, saves, and refers back to information. Take this example: we all think that we process information by perceiving it. We look at an object and consciously perceive it, and that perception allows us, or our consciousness, to process it. However, this isn't true.

Let's say that we have a blind person with us. We ask them, Are you blind?, and of course they'd say Yes, since they can consciously perceive that they can't see. So far, this fits the hypothesis that most people have, as stated previously.

However, let's say we have a blind person with Anton-Babinski syndrome, and we ask them, Are you blind?; they affirm that they can see. Then we ask them, How many fingers am I holding up?, and they reply with a random number. We ask them why they replied with that number, and they confabulate a response. Seems weird, doesn't it?

The question that arises is this: if the person can't see, why can't they consciously realize that they're blind? There are some theories, the prevailing one stating that the visual input center of the brain isn't telling the rest of the brain anything at all. It's not even telling the brain that there is no visual input! Because of this, the rest of the neural network in the brain gets confused. This suggests that there's a separation, a clear distinction, between the part of the brain that deals with the processing of information and the part that deals with the conscious perception of that information, or at least forms that illusion of perception.

You can learn more about Anton-Babinski syndrome at the following link: https://en.wikipedia.org/wiki/Anton%E2%80%93Babinski_syndrome

Here's a link to a YouTube video from Vsauce that talks about consciousness and what it truly is: https://www.youtube.com/watch?v=qjfaoe847qQ

And, of course, the entire Vsauce LEANBACK: https://www.youtube.com/watch?v=JoR0bMohcNo&list=PLE3048008DAA29B0A

There's even more evidence hinting that consciousness isn't truly what we think it is: the theory of mind.

You may have heard of Koko the gorilla. She was trained in sign language so that she could communicate with humans. However, researchers noticed something very interesting about Koko and other animals that were trained to communicate with humans: they don't ask questions.

This is mostly because animals don't have a theory of mind. While they may be self-aware, they aren't aware of that awareness: they aren't meta-cognizant. They don't realize that others also have a separate awareness and mind. This is an ability that, so far, we've only seen in humans.

In fact, very young humans, under about four years of age, don't display this theory of mind. It's typically tested with the Sally-Anne test, which goes a little something like this:

  1. The child is shown two dolls. Their names are Sally and Anne.
  2. Sally and Anne are in a room. Sally has a basket, and Anne has a box.
  3. Sally has a marble, and she puts it in the basket.
  4. Sally goes for a walk outside.
  5. Anne takes the marble from Sally's basket, and puts it in her own box.
  6. Sally comes back from her walk, and she wants her marble. Where would Sally look for it?

If the child answers with the box, then they don't have a theory of mind: they don't realize that Sally and Anne (the dolls, in this case) have separate minds and points of view. If they answer with the basket, then they realize that Sally doesn't know that Anne moved the marble from the basket to the box; they have a theory of mind.

When you put all of this together, it really starts to seem that consciousness, in the way that we think about it, doesn't exist. It exists only as an extremely complex illusion put together by various factors, including memory and a sense of time, language, self-awareness, and infinitely recursive meta-cognition, which is basically thinking about the thought itself, in an infinite loop.

To add on top of that, we don't understand how our brains are able to piece together such complex illusions in the first place. We also have to realize that any problems we face with classical computing, due to the very fundamentals of computing itself, will apply here as well. We're dealing with math, not fundamental quantum information. Math is a human construct, built to understand, formally recognize, agree upon, and communicate the rules of the universe we live in. Realizing this, if we were to write down every single mathematical operation behind an ANN and, over the course of decades, work through the results manually on paper, would you consider the paper, the pen, or the calculator conscious? We'd say not! So then, why would we consider an accelerated version of this, on a computer, to be conscious, or capable of self-awareness?

There is one completely rational fear of machine learning, though: that humans themselves will train computers to do negative things. This is true, and it will happen. There is no way to regulate the usage of this technology. It's a set of math, an algorithm; if you ban its usage, someone will just implement it from scratch and use their own implementation. It's like banning the usage and purchase of guns, swords, or fire: people will simply build their own. It's just that building a gun may be very difficult, while building AI is relatively easy, thanks to the vast amount of source code, research papers, and more that have already been published on the internet.

However, we have to trust that, as with all the other technologies humans have developed, ML will be used for good, for bad, and to prevent people from using it for bad as well. People will use ML to create cyber threats that disguise themselves from antivirus software, but AI systems can then detect those cyber threats in turn, by using ML.

We've seen that people have used, and will continue to use, ML to create fake videos of people doing whatever they want them to. For example, start-ups like Lyrebird create fake audio, and others have created fake videos of Barack Obama saying anything they want him to say. However, there are still very subtle patterns that let us detect whether a video is real or fake: patterns that humans and conventional algorithms simply cannot detect, but ML technology can.

Introduction to IBM Watson

If what you've read so far piques your interest, then welcome aboard! In this book, you won't be learning the actual, raw algorithms that go behind the tech. Rather, you'll be getting a much simpler introduction to the world of AI: through IBM Watson.

Watson, in this context, is a set of REST APIs, available in the IBM Cloud, that enable you to create cognitive applications without the complex, expensive, and lengthy process of developing cognitive technology from scratch. But there's more!
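To give you a feel for what a set of REST APIs means in practice, here's a hedged sketch of calling one such service, Natural Language Understanding, over plain HTTP. The URL, API key, and version date are placeholders; you'd copy the real values from your own service credentials in IBM Cloud:

import requests

# Hypothetical credentials: copy the real URL and apikey from your own
# service's credentials page in IBM Cloud; these values are placeholders.
API_KEY = "your-apikey-here"
URL = "https://gateway.watsonplatform.net/natural-language-understanding/api"

response = requests.post(
    f"{URL}/v1/analyze",
    params={"version": "2018-11-16"},  # an illustrative version date
    auth=("apikey", API_KEY),          # Watson services accept IAM API keys
    json={
        "text": "I shot an elephant in my pyjamas.",
        "features": {"keywords": {}, "sentiment": {}},
    },
)
print(response.json())  # keywords and sentiment extracted by Watson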

Let's begin!

Hardware and software requirements

Now, let's talk about how you can set up your environment to work with ML technology.

One of the key benefits of using a cloud-based service such as Watson is that you don't need to own any of the powerful hardware that usually sits behind deep learning, or machine learning, systems. Everything's done in the cloud for you, and you're billed based on how much of, or how long, you use the services and machines for.

Therefore, there are no strict hardware requirements.

This book deals mainly with Python 3.7.2, so it's preferable to have a POSIX-compliant (Unix-like) OS, but Windows will also work, preferably with the Windows Subsystem for Linux (WSL).
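As a quick sanity check of your environment, the following sketch verifies the interpreter version and that the Watson SDK for Python can be imported. The package name ibm-watson is an assumption based on the SDK's current PyPI listing; earlier releases were published as watson-developer-cloud:

import sys

# The examples in this book target Python 3.7.2; any 3.7+ should behave the same.
assert sys.version_info >= (3, 7), "Python 3.7 or later is required"

# The Watson SDK is installed separately, for example: pip install ibm-watson
try:
    import ibm_watson  # noqa: F401  (we only check that the import works)
    print("Watson SDK is installed and importable.")
except ImportError:
    print("Install the SDK first: pip install ibm-watson")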

Signing up for IBM Cloud

Now that you're ready, let's sign up for IBM Cloud. To begin with, you don't have to pay for IBM Cloud, or even provide your credit card information. Using the IBM Cloud Lite tier, you can use most services for free!

While it is quite self-explanatory, here's a list of steps to sign up for IBM Cloud:

  1. Head over to https://www.ibm.com/cloud
  2. Hit the Sign up for IBM Cloud button
  3. Fill in all the required information
  4. Verify your email address by clicking the link sent to your email
  5. Once you're in IBM Cloud, give a name to your brand new space and organization

An organization is a set of spaces relevant to a certain company or entity, and a space is a set of services or applications relevant to a project.

Summary 

There we go: once you've created an IBM Cloud account, you should be ready for the next steps. After completing this chapter, you should understand what machine learning is and how it can be used to tap into the gold mines of structured and unstructured data that were, until now, deemed useless. We have also learned about the limitations of conventional computing and of machine learning, gained a basic understanding of what IBM Watson is and what the necessary hardware and software requirements are, and learned how to sign up for IBM Cloud.

In the next chapter, we will learn how to apply machine learning through IBM Watson.
