In Manhattan, nearly 15,000 taxis make around 30 journeys each per day. That’s nearly half a million paid trips. The yellow cabs are part of the never-ending, slow procession of vehicles that churn through the streets of New York. The good news is that, after a century of worsening traffic, congestion may finally ease, at least to a degree.
Researchers at MIT announced this week that they have developed an algorithm to optimise the way taxis find their customers. Their system is reportedly so efficient that it could reduce the required number of cabs in Manhattan (for now, the ones with human drivers) by a third. That’s a non-trivial improvement. The trick, apparently, is to position cabs the way a pool hustler plays the cue ball – lining up the next pick-up to start where the last drop-off ended.
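To make the chaining idea concrete, here is a minimal sketch – an illustration only, not the MIT team’s actual algorithm, which handles far more than straight-line distances: greedily send each new request to the free cab whose last drop-off left it nearest the pick-up.

```python
import math

def distance(a, b):
    """Straight-line distance between two (x, y) points - a crude
    stand-in for real street-network travel time."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def assign_requests(last_dropoffs, pickups):
    """Greedily pair each new pick-up with the free cab whose last
    drop-off left it closest, so cabs flow from one fare to the next."""
    free = dict(last_dropoffs)           # taxi_id -> (x, y) of last drop-off
    assignments = []
    for pickup in pickups:
        if not free:
            break                        # no cabs left for this wave
        nearest = min(free, key=lambda cab: distance(free[cab], pickup))
        assignments.append((nearest, pickup))
        del free[nearest]
    return assignments

# Toy example: three cabs that just dropped off, two new requests.
cabs = {"cab_1": (0.0, 0.0), "cab_2": (5.0, 5.0), "cab_3": (9.0, 1.0)}
print(assign_requests(cabs, [(4.5, 5.5), (8.0, 0.0)]))
# -> [('cab_2', (4.5, 5.5)), ('cab_3', (8.0, 0.0))]
```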
The technology behind the improvement offered by the MIT research team is the same one behind most of the incredible technology news stories of the last three years: Artificial Intelligence. AI is now part of most of the digital interactions we have. It fuels the recommendation engines of YouTube, Spotify and Netflix. It shows you products you might like in Google’s search results and on Amazon’s homepage.
AI is undoubtedly the hot topic of the moment – as you cannot possibly have failed to notice.
Until recently, AI was a long-forgotten scientific curiosity, taken seriously only in sci-fi movies. The technology fell into a ‘winter’ – a period when AI-related projects couldn’t get funding and decision makers had given up on the field – in the late 1980s. Ironically, it was around that time that much of the fundamental work underpinning today’s AI, concepts like neural networks and backpropagation, was codified.
Artificial Intelligence is now enjoying a rebirth. Almost every new idea funded by venture capitalists has AI baked in. The potential excites business owners, especially those in the technology sphere, and scares governments in equal measure: it offers better profits and the threat of mass unemployment as two sides of the same coin.
It is a once-in-a-generation technology improvement, similar to air conditioning, the mass-produced motor car and the smartphone, in that it can be applied to all aspects of the economy at the same time. Just as the iPhone propelled telecommunications technology forward and created billions of dollars of sales for phone companies selling mobile data plans, AI is fueling totally new businesses and making existing operations significantly more efficient.
Behind the fanfare associated with AI, however, lies a simple truth. Today’s AI algorithms use what’s called ‘narrow’ or ‘domain-specific’ intelligence. In simple terms, each current AI implementation is specific to the job it is given. IBM trained its AI system ‘Watson’ to beat human contestants at ‘Jeopardy!’. When Google wanted an AI that could beat a living counterpart at the Chinese board game ‘Go’, it had to create a new AI system. And so on: a new task requires a new AI system.
Now, one of the people behind those original concepts from the 1980s, the ones that underpin today’s AI solutions, is back with an even bigger idea that might push AI forward.
Judea Pearl, chancellor’s professor of computer science and statistics at UCLA and a distinguished visiting professor at the Technion, Israel Institute of Technology, received the Turing Award in 2011 for work he began three decades earlier: the Bayesian mathematical models that gave modern AI its strength. Pearl’s fundamental contribution to computer science was to provide the logic and decision-making framework for computers to operate under uncertainty. Some say it was he who provided the spark that thawed the AI winter. Today, he laments the current state of AI, concerned that the field has evolved very little in the three decades since his theory was presented.
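To get a flavour of the kind of reasoning under uncertainty that Pearl’s work made tractable, consider a toy diagnostic problem. The numbers below are invented for illustration, and real Bayesian networks chain many such updates together:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test), via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A rare condition (1% prevalence) and a decent test still give only a
# ~15% chance the patient is ill - the kind of calculation under
# uncertainty that pre-Pearl, rule-based AI systems handled badly.
print(posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.05))
```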
Pearl likens current AI implementations to simple tools that can tell you what’s likely to come next, based on the recognition of a familiar pattern. For example, a medical AI algorithm might look at X-rays of a human chest and ‘discern’ that the patient has, or does not have, lung cancer, based on patterns it has learnt from its training datasets. The AI in this scenario doesn’t ‘know’ what lung cancer or a tumor is. Importantly, it is a very long way from understanding that smoking can cause the affliction.
What’s needed next, says Pearl, is a critical difference: AIs evolved to the point where they can determine not just what is likely to happen, but what will cause it. It’s a fundamental improvement, of the same magnitude as his earlier contributions.
Causality – what Pearl is proposing – is one of the most basic units of scientific thought and progress. The ability to conduct a repeatable experiment showing that A causes B, in multiple locations, and to have independent peers review the results, is fundamental to establishing truth. In his most recent book, ‘The Book of Why’, Pearl outlines how we can get AI from where it is now to where it can develop an understanding of these causal relationships. He believes the first step is to cement the building blocks of reality – ‘what is a lung’, ‘what is smoke’ – and that we’ll be able to do this within the next ten years.
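To see why pattern-spotting alone can’t answer causal questions, consider a small simulation (my illustration, not Pearl’s): a hidden common cause makes two variables correlate strongly even though one has no effect on the other, and only an intervention – Pearl’s ‘do’ operation – exposes the difference.

```python
import random

random.seed(0)

def observe(n=100_000):
    """Observational data: hidden factor U drives both X and Y;
    X has no causal effect on Y, yet the two correlate strongly."""
    rows = []
    for _ in range(n):
        u = random.random() < 0.5                  # hidden common cause
        x = random.random() < (0.9 if u else 0.1)
        y = random.random() < (0.9 if u else 0.1)  # depends on U only
        rows.append((x, y))
    return rows

def intervene(n=100_000):
    """Interventional data: we *set* X by coin flip - Pearl's do(X) -
    which severs X's link to the hidden cause."""
    rows = []
    for _ in range(n):
        u = random.random() < 0.5
        x = random.random() < 0.5                  # forced, ignoring U
        y = random.random() < (0.9 if u else 0.1)
        rows.append((x, y))
    return rows

def p_y_given(rows, x_val):
    matching = [y for x, y in rows if x == x_val]
    return round(sum(matching) / len(matching), 2)

obs, exp = observe(), intervene()
print("Observed:", p_y_given(obs, True), "vs", p_y_given(obs, False))    # ~0.82 vs ~0.18
print("Intervened:", p_y_given(exp, True), "vs", p_y_given(exp, False))  # ~0.50 vs ~0.50
```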
Geoff Hinton was the mind behind backpropagation, another of the fundamental technologies that have brought AI to the point it is at today. To progress AI, however, he says we might have to start all over again.
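Before turning to his new idea, it’s worth sketching what backpropagation actually does. In this minimal toy example (illustrative only, far simpler than anything in production), a tiny network learns XOR by passing its output error backwards, layer by layer, and nudging every weight downhill:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny network (2 inputs -> 4 hidden units -> 1 output) learning XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the chain rule carries output error back to every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Nudge each weight a little way downhill on the error surface.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```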
Hinton has developed (and articulated in two recent papers) a new way of training AI systems, involving something he calls ‘capsule networks’ – a concept he has been working on for 30 years, in an effort to improve on the backpropagation-based approach he pioneered.
Capsule networks operate in a manner similar to the human brain. When we see an image, our brain breaks it down into its components and processes them in parallel. Some neurons recognise edges through contrast differences. Others look for corners by examining the points at which edges intersect. Capsule networks are similar: several capsules act on a picture at once, identifying, for example, an ear or a nose on an animal, irrespective of the angle from which it is viewed.
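One concrete ingredient from Hinton’s capsule papers gives the flavour: where a conventional neuron outputs a single number, a capsule outputs a vector, whose direction can encode a feature’s pose and whose length encodes the probability that the feature is present. The ‘squash’ non-linearity below is taken from Sabour, Frosst and Hinton’s 2017 paper; the example values are invented:

```python
import numpy as np

def squash(s):
    """Capsule non-linearity from Sabour, Frosst & Hinton (2017):
    preserve the vector's direction (the feature's pose) but squash
    its length into [0, 1) so it can act as a probability of presence."""
    norm_sq = np.dot(s, s)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq)

# Two capsules detecting a 'nose', with pose encoded in the direction.
weak = np.array([0.1, 0.05])    # faint evidence of a nose
strong = np.array([3.0, 1.5])   # strong evidence, same orientation

print(np.linalg.norm(squash(weak)))    # ~0.01 -> feature probably absent
print(np.linalg.norm(squash(strong)))  # ~0.92 -> feature probably present
```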
This is a big deal. CNNs (convolutional neural networks), the AI algorithms most often used in image and video recognition systems, can now recognise images about as well as humans do – but they struggle to recognise an object when the angle it is viewed from changes.
It’s too early to judge whether capsule networks are the key to the next step in the AI revolution, but in many tasks they are already identifying images faster and more accurately than existing approaches allow.
Andrew Ng is a co-founder of Google Brain, the team and project that Google put together in 2011 to explore Artificial Intelligence. He now works for Baidu, China’s most successful search engine – analogous in size and scope to Google in the rest of the world – where he heads up the company’s Silicon Valley AI research facility. Beyond concerns over potential job displacement caused by AI – an issue so significant, he says, that it is perhaps all we should be thinking about when it comes to Artificial Intelligence – he suggests that, in the future, the most progress will be made when AI systems can teach themselves without human involvement.
At the moment, training an AI to do even something that seems simple to us, such as recognising what a cat looks like, is a complicated process. The procedure involves ‘supervised learning’: the system is shown a large number of pictures (when Google did this, it used 10 million images), some of which contain cats, labelled appropriately by humans. Once a sufficient level of ‘education’ has been undertaken, the AI can label cats accurately, most of the time.
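In code, the supervised recipe looks something like the toy sketch below, with made-up feature vectors standing in for photographs and an invented labelling rule standing in for human annotators. Real systems use millions of images and deep networks, but the shape of the process is the same:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Stand-ins for image features: 200 vectors of 10 numbers each.
X = rng.normal(size=(200, 10))
# The human labels: 1 = 'cat', 0 = 'not cat'. Here a hidden rule
# (a large first feature) plays the role of 'actually contains a cat'.
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)

# Supervised learning: fit a model to labelled examples...
model = LogisticRegression().fit(X[:150], y[:150])

# ...then test it on examples it has never seen.
print("accuracy on unseen examples:", model.score(X[150:], y[150:]))
```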
Ng thinks supervision is problematic; he describes the sheer quantity of labelled data required as its Achilles heel. To go beyond current capabilities, says Ng, will require a completely new type of technology – one based on ‘unsupervised learning’, in which machines learn from data that has not been classified by humans.
Progress on unsupervised learning is slow. At both Baidu and Google, engineers are focusing on constrained versions of it, such as training AI systems to learn about a human face and then asking them to create one themselves. The activity requires the AI to develop what we would call an ‘internal representation’ of a face – something any unsupervised learner needs.
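A common way to force a system to build such an internal representation is an autoencoder: a network trained, with no labels at all, to squeeze its input through a narrow bottleneck and then reconstruct it. Here is a minimal PyTorch sketch of the general technique (my illustration, with random vectors standing in for face images – not Baidu’s or Google’s actual systems):

```python
import torch
from torch import nn

torch.manual_seed(0)

# Unlabelled stand-ins for face images: 500 random 64-number vectors.
data = torch.randn(500, 64)

# The encoder squeezes 64 numbers down to 8 - the 'internal
# representation' - and the decoder must rebuild the original from
# those 8 alone.
model = nn.Sequential(
    nn.Linear(64, 8),   # encoder: input -> bottleneck
    nn.ReLU(),
    nn.Linear(8, 64),   # decoder: bottleneck -> reconstruction
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(200):
    reconstruction = model(data)
    loss = loss_fn(reconstruction, data)   # no human labels anywhere
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

print("reconstruction error:", round(loss.item(), 4))
```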
Other avenues to training without supervision include, ingeniously, pitting an AI system against a computer game – an environment in which it receives feedback (through points awarded in the game) for ‘constructive’ activities, but in which it is not taught directly by a human.
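This is reinforcement learning in miniature. A sketch of the idea on a toy ‘game’ – a five-square corridor in which the only feedback is a point for reaching the final square; the names and numbers are invented for illustration:

```python
import random

random.seed(0)

N = 5                                 # a corridor of squares 0..4
GOAL = N - 1                          # reaching square 4 scores one point
q = [[0.0, 0.0] for _ in range(N)]    # value of (left, right) in each square
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore when unsure (or occasionally at random); otherwise exploit.
        explore = random.random() < epsilon or q[state][0] == q[state][1]
        action = random.randrange(2) if explore else int(q[state][1] > q[state][0])
        nxt = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if nxt == GOAL else 0.0          # the game's 'points'
        # Q-learning update: improve the estimate from feedback alone.
        q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
        state = nxt

# Learned values rise as the agent nears the goal: ~[0.73, 0.81, 0.9, 1.0, 0.0]
print([round(max(v), 2) for v in q])
```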
Artificial Intelligence, as it stands, will deliver economy-wide efficiency improvements the likes of which we have not seen in decades. It seems incredible to think that the field is still in its infancy when it can deliver such substantial benefits – reduced traffic congestion, lower carbon emissions and time saved in New York taxis. But it is.
Isaac Asimov, who developed his own concepts of how Artificial Intelligence might be governed with simple rules, said: “Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won't come in.”
Asimov can rest assured. Pearl, Hinton and Ng are each taking revolutionary approaches to elevate AI beyond even the incredible heights it has reached – scrubbing off the field’s assumptions and starting without reference to the concepts that have brought us this far.