My friend, the robot: Artificial Intelligence needs Emotional Intelligence

  • 8 min read
  • 21 Feb 2018

Tommy’s a brilliant young man who loves programming. He’s so occupied with computers that he hardly has any time for friends. Tommy programs a very intelligent robot called Polly, using Artificial Intelligence, so that he has someone to talk to. One day, Tommy gets hurt really badly about something and needs someone to talk to. He rushes home to talk to Polly and pours out his emotions to her. To his disappointment, Polly starts giving him advice like she does for anything else. She doesn’t understand that he needs someone to “feel” what he’s feeling rather than lecture him on what he should or shouldn’t be doing. He naturally feels disconnected from Polly.

My Friend doesn’t get me

Have you ever wondered what it would be like to have a robot as a friend? I’m thinking something along the lines of Siri. Siri’s pretty good at holding conversations and is quick-witted too. But Siri can’t understand your feelings or emotions, nor can “she” feel anything herself. Are we missing that “personality” in the artificial beings we’re creating?

The same goes for chatbots: we gain convenience, but we lose the emotional aspect, especially at times when expressive communication matters most.

Do we really need it?

I remember watching the Terminator, where Arnie asks John, “Why do you cry?” John finds it difficult to explain why humans cry. The fact is, though, that the machine actually understood something was wrong with the human, thanks to the visual cues associated with crying. We’ve also seen instances of robots or AI analysing sentiment through text processing. But how accurate is this? How would a machine know when a human is actually using sarcasm? What if John was faking it and could cry at the drop of a hat, or he just happened to be chopping onions? That’s food for thought.

On the other hand, you might wonder: do we really want our machines to start analysing our emotions? What if they take advantage of our emotional state?

Well, that’s a bit of a far-fetched thought. What we need to understand is that robots must gauge at least some of our emotions to enhance the experience of interacting with them. There are several wonderful applications for such a technology. For instance, marketing organisations could use applications that detect users’ facial expressions when they look at a new commercial to gauge their “interest”. It could also be used by law enforcement as a replacement for the polygraph. Another interesting use case would be helping individuals affected by autism better understand the emotions of others. The combination of AI and EI could find a tonne of applications, right from cars that can sense if the driver is tired or sleepy and prevent an accident by pulling over, to a fridge that can detect if you’re stressed and lock itself to prevent you from binge eating!

Recent Developments in Emotional Intelligence

Over the past few years, there have been several developments in building systems that understand emotions. Pepper, a Japanese robot, for instance, can recognise feelings such as joy, sadness and anger, and respond by playing you a song. A couple of years ago, Microsoft released a tool, the Emotion API, that could break down a person’s emotions based only on their picture. Physiologists, neurologists and psychologists have collaborated with engineers to find measurable indicators of human emotion that computers can be taught to look out for.

There are projects that have attempted to decode facial expressions, the pitch of our voices, biometric data such as heart rate and even our body language and muscle movements.

Bronwyn van der Merwe, General Manager of Fjord in the Asia Pacific region, revealed that big companies like Amazon, Google and Microsoft are hiring comedians and scriptwriters to bring out the human-like aspect of AI by infusing personality into their technologies. Jerry, Ellen, Chris, Russell... are you all listening?

How it works

Almost 40% of our emotions are conveyed through tone of voice, and the rest is read through the facial expressions and gestures we make. An enormous amount of data collected from media content and other sources is used as training data for algorithms to learn human facial expressions and speech. One type of learning used is Active Learning, or human-assisted machine learning. This is a kind of supervised learning in which the learning algorithm can interactively query a human to obtain labels for new data points. In many situations unlabeled data is plentiful but manually labeling it is expensive; in such a scenario, the learning algorithm asks the user to label only the examples it selects. Since the algorithm chooses its own examples, the number of labeled examples needed to learn a concept is often lower than in conventional supervised learning.
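
To make that concrete, here is a minimal sketch of pool-based active learning with uncertainty sampling, using scikit-learn. The emotion-classification setup is hypothetical: the three emotion classes, the 16-dimensional features, and the random data standing in for extracted facial or audio descriptors are all assumptions for illustration.

```python
# Minimal sketch: pool-based active learning with uncertainty sampling.
# Features and labels are random stand-ins for a hypothetical 3-class emotion task.
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(model, X_pool, n_queries=10):
    """Pick the unlabeled examples the model is least confident about."""
    probs = model.predict_proba(X_pool)
    uncertainty = 1.0 - probs.max(axis=1)          # low top-class probability = uncertain
    return np.argsort(uncertainty)[-n_queries:]    # indices to send to a human annotator

# Start with a small labeled seed set and a large unlabeled pool.
rng = np.random.default_rng(0)
X_seed = rng.normal(size=(50, 16))
y_seed = rng.integers(0, 3, 50)                    # 3 hypothetical emotion classes
X_pool = rng.normal(size=(5000, 16))

model = LogisticRegression(max_iter=1000).fit(X_seed, y_seed)
query_idx = uncertainty_sampling(model, X_pool)
# In a real loop, a human labels X_pool[query_idx], those labels are appended to
# the seed set, and the model is retrained — repeating until the labeling budget runs out.
```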

Another approach is to use Transfer Learning, a method that focuses on storing the knowledge gained while solving one problem and applying it to a different but related problem. For example, knowledge gained while learning to recognize fruits could apply when trying to recognize vegetables. In emotion recognition, this could mean analysing video for facial expressions and then transferring that learning to label emotions in speech.
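
As a rough illustration, here is one common form of transfer learning: reusing an image network pretrained on ImageNet and retraining only its final layer for a new task. The seven-class facial-emotion setup, the choice of ResNet-18, and every hyperparameter are assumptions for this sketch, not details from the projects mentioned above.

```python
# Minimal transfer learning sketch with PyTorch/torchvision: freeze a pretrained
# backbone and train only a new head for a hypothetical 7-class emotion task.
import torch
import torch.nn as nn
from torchvision import models

NUM_EMOTIONS = 7                                   # e.g. anger, joy, sadness, ... (assumed)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():                   # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_EMOTIONS)  # new task-specific head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch; a real pipeline would iterate over a
# DataLoader of labeled face images instead.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_EMOTIONS, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```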

What’s under the hood of these machines?

Powerful robots that are capable of understanding emotions would most certainly be running Neural Nets under the hood. Complementing the power of these Neural Nets is beefy hardware like the Nvidia Titan X GPU and Intel’s Nervana chip. Last year at NIPS, amongst controversial body shots and loads of humour-filled interactions, Kory Mathewson and Piotr Mirowski entertained audiences with A.L.Ex and Pyggy, two AI robots that have played alongside humans in over 30 shows. These robots introduce audiences to the “comedy of speech recognition errors” by blabbering away to each other as well as to humans. Built around a Recurrent Neural Network trained on dialogue from thousands of films, A.L.Ex communicates with human performers, audience participants, and spectators through speech recognition, voice synthesis, and video projection.

A.L.Ex is written in Torch and Lua, has a vocabulary of 50,000 words extracted from 102,916 movies, and is built on an RNN with a Long Short-Term Memory (LSTM) architecture and 512-dimensional layers.
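
For readers curious what such a model looks like in code, here is a rough PyTorch sketch of a word-level LSTM language model with the quoted 50,000-word vocabulary and 512-dimensional layers. The original system is written in Torch/Lua; this Python version is only illustrative, and everything beyond those two quoted numbers (layer count, embedding size, greedy decoding) is an assumption.

```python
# Illustrative word-level LSTM language model in the spirit of the system described above.
import torch
import torch.nn as nn

class DialogueLSTM(nn.Module):
    def __init__(self, vocab_size=50_000, hidden_size=512, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)   # predicts the next word

    def forward(self, tokens, state=None):
        x = self.embed(tokens)
        x, state = self.lstm(x, state)
        return self.out(x), state                       # logits over the vocabulary

model = DialogueLSTM()
tokens = torch.randint(0, 50_000, (1, 12))              # a dummy 12-word prompt
logits, _ = model(tokens)
next_word = logits[0, -1].argmax().item()               # greedy next-word prediction
```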

The unconquered challenges today

The way I see it, there are broadly three challenge areas that AI-powered robots face today:

  1. Rationality and emotions: AI robots need to be fed initial logic by humans; without it, they cannot learn on their own. They may never have the level of rationality or the breadth of emotions needed to take decisions the way humans do.
  2. Intuition, strategic thinking and emotions: Machines are incapable of thinking into the future and taking decisions the way humans can. For example, in the not-too-distant future we might have an AI-powered dating application that measures a subscriber’s interest level while they chat with someone. It might rate the interest level lower if the person is in a bad mood for some other reason, without considering the reason behind the emotion or whether it was actually linked to the ongoing conversation.
  3. Spontaneity, empathy and emotions: It may be years before robots are capable of coming up with a plan B the way humans do. Having a contingency plan and implementing it in an emotional crisis is something AI still struggles to accomplish. For example, if you’re angry at something and just want to be left alone, your companion robot might simply follow what you say without understanding your underlying emotion, while an actual human would instantly empathise with your situation and try to be there for you.

Bronwyn van der Merwe said, "As human beings, we have contextual understanding and we have empathy, and right now there isn't a lot of that built into AI. We do believe that in the future, the companies that are going to succeed will be those that can build into their technology that kind of an understanding".

What’s in store for the future

If you ask me, right now we’re on the highway to something really great. Yes, several aspects of whether AI and robots will make our lives easier or disrupt them remain unclear, but as time passes, science is fitting the pieces of the puzzle together to bring about positive changes in our lives. AI is improving on the emotional front as I write, although there are clearly miles to go. Companies like Affectiva are pioneering emotion recognition technology and are working hard to improve the way AI understands human emotions. Biggies like Microsoft have been working on bringing emotional intelligence into their AI since before 2015 and have come a long way since then. Perhaps, in the next Terminator movie, Arnie might just comfort a weeping Sarah Connor, saying, “Don’t cry, Sarah dear, he’s not worth it”, or something of the sort.

As a parting note and just for funsies, here’s a final question for you, “Can you imagine a point in the future when robots have such high levels of EQ, that some of us might consider choosing them as a partner over humans?”