
Tech Guides - Artificial Intelligence

170 Articles

Facebook plans to use Bloomsbury AI to fight fake news

Pravin Dhandre
30 Jul 2018
3 min read
“Our investments in AI mean we can now remove more bad content quickly because we don't have to wait until after it's reported. It frees our reviewers to work on cases where human expertise is needed to understand the context or nuance of a situation. In Q1, for example, almost 90% of graphic violence content that we removed or added a warning label to was identified using AI. This shift from reactive to proactive detection is a big change -- and it will make Facebook safer for everyone.” – Mark Zuckerberg, on Facebook’s earnings call on Wednesday this week

To understand the significance of the above statement, we must first look at the past. Last year, social media giant Facebook suffered multiple lawsuits across the UK, Germany, and the US for defamation caused by fake news articles and for spreading misleading information. To make amends, Facebook came up with fake news identification tools; however, it failed to completely tame the effects of bogus news. In fact, the company took a bad hit in advertising revenue, and its social reputation nosedived.

Early this month, Facebook confirmed the acquisition of Bloomsbury AI, a London-based artificial intelligence start-up with over 60 patents acquired to date. Bloomsbury AI focuses on natural language processing, developing machine reading methods that can understand written text across a broad range of domains.

The Artificial Intelligence team at Facebook will be onboarding the complete Bloomsbury AI team and will build highly robust methods to kill the plague of fake news throughout the Facebook platform. The rich expertise carried over by the Bloomsbury AI team will strengthen Facebook's endeavors in natural language processing research and deepen its understanding of natural language and its applications. It appears that the amalgamation will help Facebook develop advanced machine reading, reasoning, and question answering methods, boosting Facebook's NLP engine to judge the legitimacy of claims across a broad range of topics and make intelligent choices, thereby defeating the challenges of fake news and automated bots.

No doubt, Facebook is going to leverage Bloomsbury's Cape service to answer a majority of questions over unstructured text. The duo will play a significant role in parsing content, notably to tackle fake photos and videos too. In addition, it has been said that the new team members will contribute actively to ongoing artificial intelligence projects such as AI hardware chips, AI technology mimicking humans, and many more.

Read more
Facebook is investigating data analytics firm Crimson Hexagon over misuse of data
Google, Microsoft, Twitter, and Facebook team up for Data Transfer Project
Did Facebook just have another security scare?

What is interactive machine learning?

Amey Varangaonkar
23 Jul 2018
4 min read
Machine learning is a useful and effective tool when it comes to building prediction models or extracting useful structure from an avalanche of data. Many ML algorithms are in use today across a variety of real-world use cases. Given a sample dataset, a machine learning model can give predictions only with a certain accuracy, which largely depends on the quality of the training data fed to it. Is there a way to increase the prediction accuracy by somehow involving humans in the process? The answer is yes, and the solution is called 'interactive machine learning'.

Why we need interactive machine learning

As discussed above, a model can give predictions only as good as the quality of the training data fed to it. If the quality of the training data is not good enough, the model might:

Take more time to learn before it gives accurate predictions
Give predictions of very poor quality

This challenge can be overcome by involving humans in the machine learning process. By incorporating human feedback into the training process, a model can be trained faster and more efficiently to give more accurate predictions. In the widely adopted machine learning approaches, including supervised and unsupervised learning, or even active learning for that matter, there is no way to include human feedback in the training process to improve the accuracy of predictions. In the case of supervised learning, for example, the data is already pre-labelled and is used without any actual input from a human during the training process. For this reason alone, the concept of interactive machine learning is seen by many machine learning and AI experts as a breakthrough.

How interactive machine learning works

Machine learning researchers Teng Lee, James Johnson, and Steve Cheng have suggested a novel way to include human input to improve the performance and predictions of a machine learning model. It is called the 'Transparent Boosting Tree' algorithm, an interesting approach to combining the advantages of machine learning and human input in the final decision-making process.

The Transparent Boosting Tree, or TBT for short, is an algorithm that visualizes the model and the prediction details of each step in the machine learning process for the user, takes his/her feedback, and incorporates it into the learning process. The ML model is in charge of updating the weights assigned to the inputs and of filtering the information shown to the user for feedback. Once the feedback is received, it is incorporated by the ML model as part of the learning process, thus improving it. A basic flowchart of the interactive machine learning process appears in their paper, where more in-depth information on how interactive machine learning works can be found.

What can interactive machine learning do for businesses

With the rising popularity and applications of AI across all industry verticals, humans may have a key role to play in the learning process of an algorithm, apart from just coding it. While observing the algorithm's outputs or evaluations in the form of visualizations or plain predictions, humans can suggest ways to improve those predictions by giving feedback in the form of inputs such as labels, corrections, or rankings. A minimal sketch of such a feedback loop appears below.
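To make the loop concrete, here is a minimal human-in-the-loop sketch. It is not the TBT implementation from the paper: it uses scikit-learn's gradient boosting as a stand-in for the boosting tree, surfaces the model's least confident predictions, and folds simulated human labels back into training. The data and the feedback rule are invented purely for illustration.

```python
# A generic human-in-the-loop training sketch (not the paper's TBT code):
# the model surfaces its least confident predictions, a "human" supplies
# corrected labels, and the model is retrained on the enlarged dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.RandomState(0)
X_train = rng.randn(200, 5)
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_pool = rng.randn(500, 5)          # unlabelled examples shown to the user

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

for round_no in range(3):
    proba = model.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)   # low max-probability = unsure
    ask = np.argsort(uncertainty)[-10:]     # 10 examples to review

    # In a real system the user would inspect visualizations of these
    # examples; here their feedback is simulated with the true rule.
    feedback = (X_pool[ask, 0] + X_pool[ask, 1] > 0).astype(int)

    X_train = np.vstack([X_train, X_pool[ask]])
    y_train = np.concatenate([y_train, feedback])
    X_pool = np.delete(X_pool, ask, axis=0)  # don't ask about them again
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    print(f"round {round_no + 1}: training set size = {len(y_train)}")
```

Each round the human labels only the handful of examples the model is least sure about, which is why the feedback shortens training rather than lengthening it.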
This helps the models in two ways:

Prediction accuracy increases
The time taken for the algorithm to learn is shortened considerably

Both advantages can be invaluable to businesses as they look to incorporate AI and machine learning into their processes and seek faster, more accurate predictions. Interactive machine learning is still in its nascent stage, and we can expect more developments in the domain to surface in the coming days. Once production-ready, it will undoubtedly be a game-changer.

Read more
Active Learning: An approach to training machine learning models efficiently
Anatomy of an automated machine learning algorithm (AutoML)
How machine learning as a service is transforming cloud

How Rolls Royce is applying AI and robotics for smart engine maintenance

Sugandha Lahoti
20 Jul 2018
5 min read
Rolls Royce has been working in the civil aviation domain for quite some time now to build what it calls 'intelligent engines'. The IntelligentEngine vision was first announced at the Singapore Airshow in February 2018, built around how robotics could be used to revolutionise the future of engine maintenance. Rolls Royce aims to build engines that are:

Connected, using cloud-based nodes and IoT devices, with other engines of the fleet as well as with customers and operators
Contextually aware of their operations, constraints, and customers, using modern data analysis and big data mining techniques
Comprehending of their own experiences and those of other engines in the fleet, using state-of-the-art machine learning and recommendation algorithms

The company has been demonstrating steady progress and showing off its rapidly developing digital capabilities.

Using tiny SWARM robots for engine maintenance

Its latest inventions are tiny, roach-sized 'SWARM' robots capable of crawling inside airplane engines and fixing them. They look like they've crawled straight out of a Transformers movie. These small robots, almost 10 mm in size, can perform a visual inspection of hard-to-reach airplane engine parts. The devices will be mounted with tiny cameras providing a live video feed, allowing engineers to see what's going on inside an engine without having to take it apart.

The SWARM robots will be deposited in the engine by another invention, the 'snake' robots. Officially called FLARE, these snake robots are flexible enough to travel through an engine, like an endoscope.

Another group of robots, the INSPECT robots, is a network of periscopes permanently embedded within the engine. These bots can inspect engines using periscope cameras to spot and report any maintenance requirements. Current prototypes are much larger than the desired size and not quite ready for intricate repairs; they may be production-ready in about two years.

Reducing flight delays with data analysis

R2 Data Labs, Rolls Royce's data science department, offers technical insight capabilities to the company's Airline Support Teams (ASTs). ASTs generally assess incident reports submitted after disruption events or maintenance work. The Technical Insight platform helps ASTs easily capture, categorize, and collate report data in a single place. The platform builds a bank of high-quality data (almost 10 times the size of the database ASTs had access to previously) and then analyzes it to identify trends and common issues for more insightful analytics.

The Technical Insight platform has so far shown positive results and has been critical to achieving the company's IntelligentEngine vision. According to the company blog, it was able to reduce delays and cancellations in one operator's 757 fleet by 30%, worth £1.5m per year.

The social network for engines

In May 2018, the company launched an engine network app designed to bring all of the engine data under a single roof, much like how Facebook brings all your friends onto a single platform. In this app, all the crucial information about every engine in a fleet is available in a single place. Much like Facebook, each engine has a 'profile', which shows data on how it has been operated, the aircraft it has been paired with, the parts it contains, and how much service life is left in each component. It also has a 'timeline' which tells the complete story of the engine's operational history.
In fact, there is even a 'newsfeed' to display the most important insights from across the fleet. The app also has a built-in recommendation algorithm which suggests future maintenance work for individual engines, based on what it learns from other similar engines in the fleet. As Juan Carlos Cabrejas, Technical Product Manager, R2 Data Labs, writes, "This capability is essential to our IntelligentEngine vision, as it underpins our ability to build a frictionless data ecosystem across our fleets."

Transforming Engine Health Management

Rolls-Royce is taking Engine Health Management (EHM) to a new level of connectivity. Its latest EHM system can measure thousands of parameters and monitor entirely new parts of the engine. Interestingly, the EHM has a 'talk back' feature: an operational center can ask the system to focus on one particular part or parameter of the engine, and the system listens and responds with hundreds of hours of information specifically tailored to that request. Axel Voege, Rolls-Royce's Head of Digital Operations, Germany, says, "By getting that greater level of detail, instantly, our engineering teams can work out a solution much more quickly." This new system will go into service next year, making it the company's most intelligent engine yet.

As IntelligentEngine makes rapid progress, the company sees itself designing, testing, and managing engines entirely through their digital twins in the near future. You can read more about the IntelligentEngine vision, and discover other stories about new products and updates, at the Rolls Royce site.

Read more
Unity announces a new automotive division and two-day Unity AutoTech Summit
Apollo 11 source code: A small step for a woman, and a huge leap for 'software engineering'

Why Twitter (finally!) migrated to Tensorflow

Amey Varangaonkar
18 Jul 2018
3 min read
A new nest in the same old tree. Twitter has finally migrated to TensorFlow as its preferred machine learning framework. While not many are surprised by this move, given TensorFlow's popularity, many have surely asked the question: what took them so long?

Why Twitter migrated to TensorFlow only now

Ever since its inception, Twitter has been using its trademark internal system, DeepBird, which utilizes the power of machine learning and predictive analytics to understand user data, drive engagement, and promote healthier conversations. DeepBird primarily used Lua Torch to power its operations. As support for Lua grew sparse following Torch's move to PyTorch, Twitter decided it was high time to migrate DeepBird to support Python as well, and started exploring its options. Given the rising popularity of TensorFlow, it was probably the easiest choice Twitter has had to make in some time. Per the Stack Overflow Developer Survey 2018, TensorFlow is the framework most loved by developers, with almost 74% of respondents showing their loyalty towards it. With TensorFlow 2.0 around the corner, the framework promises to build on its existing capabilities, adding richer machine learning features with cross-platform support - something Twitter will be eager to get the most out of.

How does TensorFlow help Twitter?

After incorporating TensorFlow into DeepBird, Twitter was quick to share some of the initial results. Some of the features that stand out are:

Higher engineer productivity - with the help of TensorBoard and internal data visualization tools such as Model Repo, it has become a lot easier for Twitter engineers to observe the performance of their models and tweak them to obtain better results.
Easier access to machine learning - TensorFlow simplifies building machine learning models that can be integrated with other technology stacks, thanks to the general-purpose nature of Python.
Better performance - the overall performance of DeepBird v2 was found to be better than that of its Lua Torch-powered predecessor.
Production-ready models - Twitter plans to develop models that can be integrated into the workflow with minimal issues and bugs, as compared to other frameworks such as Lua Torch.

With TensorFlow in place, Twitter users can expect their timelines to be full of relatable, insightful, and high-quality interactions that they can easily be a part of. Tweets will be shown to readers based on their relevance, and TensorFlow will be able to predict how a particular user will react to them.

A large number of heavyweights have already adopted TensorFlow as their machine learning framework of choice - eBay, Google, Uber, Dropbox, and Nvidia being some of the major ones. As the list keeps growing, one can only wonder which major organization will be next.

Read more
TensorFlow 1.9.0-rc0 release announced
Python, Tensorflow, Excel and more – Data professionals reveal their top tools
Distributed TensorFlow: Working with multiple GPUs and servers

What you missed at last week’s ICML 2018 conference

Sugandha Lahoti
18 Jul 2018
6 min read
The 35th International Conference on Machine Learning (ICML) 2018 took place on July 10-15, 2018, in Stockholm, Sweden. ICML is one of the most anticipated conferences for every data scientist and ML practitioner, featuring some of the best ML researchers, who come to talk about their research and discuss new ideas.

It won't be wrong to say that deep learning and its subsets were the showstoppers of this conference, with a large number of research papers and AI professionals implementing them in their methods. These included sessions and paper presentations on Gaussian processes, networks and relational learning, time-series analysis, deep Bayesian non-parametric tracking, generative models, and more. Other deep learning subsets such as representation learning, ranking and preference learning, supervised learning, and transfer and multi-task learning were also heavily featured. The conference consisted of one day of tutorials (July 10), followed by three days of main conference sessions (July 11-13), followed by two days of workshops (July 14-15).

Best talks and seminars of ICML 2018

ICML 2018 featured two informative talks dealing with the applications of artificial intelligence in other domains. Day 1 was inaugurated by an invited talk from Prof. Dawn Song on "AI and Security: Lessons, Challenges and Future Directions". She talked about the impact of AI on computer security, differential privacy techniques, and the synergy between AI, computer security, and blockchain. She also gave an overview of challenges and new techniques to enable privacy-preserving machine learning. Day 3 featured an invited talk by Max Welling on "Intelligence per Kilowatt hour", focusing on the connection between physics and AI. According to Max, in the coming future, companies will find it too expensive to run the energy-hungry ML tools that power their AI engines, or the heat dissipation in edge devices will be too high to be safe. So the next frontier of AI is going to be finding the most energy-efficient combination of hardware and algorithms.

There were also two plenary talks: "Language to Action: towards Interactive Task Learning with Physical Agents" by Joyce Chai, and "Building Machines that Learn and Think Like People" by Josh Tenenbaum.

Best research papers of ICML 2018

Among the many interesting research papers submitted to ICML 2018, here are the winners. "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples" by Anish Athalye, Nicholas Carlini, and David Wagner received a Best Paper award. The paper identifies obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. The authors identify three different types of obfuscated gradients and develop attack techniques to overcome them. "Delayed Impact of Fair Machine Learning" by Lydia T. Liu, Sarah Dean, Esther Rolf, and Max Simchowitz also received a Best Paper award. This paper examines the circumstances under which fairness criteria promote the long-term well-being of disadvantaged groups, measured in terms of a temporal variable of interest. It also introduces a one-step feedback model of decision-making that exposes how decisions change the underlying population over time.
Bonus: the Test of Time award

Day 4 saw Facebook researchers Ronan Collobert and Jason Weston receive the honorary Test of Time award for their 2008 ICML paper, "A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning". The paper proposed a single convolutional neural network that takes a sentence and outputs its language processing predictions: the network can identify part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words, and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. At the time the paper was published, there was almost no neural network research in natural language processing. The paper's use of word embeddings and how they are trained, its use of auxiliary tasks and multitasking, and its use of convolutional neural nets in NLP really inspired the neural networks of today. For instance, Facebook's recent machine translation and summarization tool Fairseq uses CNNs for language, and AllenNLP's ELMo learns improved word embeddings via a neural net language model and applies them to a large number of NLP tasks.

Featured tutorials at ICML 2018

ICML 2018 featured a total of nine tutorials, in three sets of three, all of which took place on Day 1. These included:

Imitation Learning by Yisong Yue and Hoang M Le, a broad overview of imitation learning techniques and their recent applications.
Learning with Temporal Point Processes by Manuel Gomez Rodriguez and Isabel Valera, covering temporal point processes in machine learning from basics to advanced concepts such as marks and dynamical systems with jumps.
Machine Learning in Automated Mechanism Design for Pricing and Auctions by Nina Balcan, Tuomas Sandholm, and Ellen Vitercik, covering automated mechanism design for revenue maximization.
Toward Theoretical Understanding of Deep Learning by Sanjeev Arora, explaining what kind of theory may ultimately arise for deep learning, with examples.
Defining and Designing Fair Algorithms by Sam Corbett-Davies and Sharad Goel, illustrating the problems that lie at the foundation of algorithmic fairness, drawing on ideas from machine learning, economics, and legal theory.
Understanding your Neighbors: Practical Perspectives From Modern Analysis by Sanjoy Dasgupta and Samory Kpotufe, covering new perspectives on k-NN and translating new theoretical insights to a broader audience.
Variational Bayes and Beyond: Bayesian Inference for Big Data by Tamara Broderick, covering modern tools for fast, approximate Bayesian inference at scale.
Machine Learning for Personalised Health by Danielle Belgrave and Konstantina Palla, evaluating the current drivers of machine learning in healthcare and presenting machine learning strategies for personalised health.
Optimization Perspectives on Learning to Control by Benjamin Recht, showing how to learn models of dynamical systems, how to use data to achieve objectives in a timely fashion, how to balance model specification, and more.

Workshops at ICML 2018

Days 5 and 6 of the conference were dedicated entirely to workshops, on topics ranging from AI in health to AI in computational psychology, humanizing AI, and AI for wildlife conservation.
Some other workshops included:

Bridging the Gap between Human and Automated Reasoning
Data Science meets Optimization
Domain Adaptation for Visual Understanding
Eighth International Workshop on Statistical Relational AI
Enabling Reproducibility in Machine Learning
MLTrain@RML
Engineering Multi-Agent Systems
Exploration in Reinforcement Learning
Federated AI for Robotics Workshop (F-Rob-2018)

This is just a brief overview of the ICML conference, in which we have handpicked a select few paper presentations and invited talks. You can see the full schedule, along with the list of selected research papers, at the ICML website.

Read more
7 of the best machine learning conferences for the rest of 2018
Microsoft start AI School to teach Machine Learning and Artificial Intelligence
Google introduces Machine Learning courses for AI beginners

Meet the who's who of Reinforcement learning

Fatema Patrawala
12 Jul 2018
7 min read
Reinforcement learning is a branch of artificial intelligence in which an agent perceives information about the environment in the form of state spaces and action spaces, and acts on the environment, thereby arriving at a new state and receiving a reward as feedback for that action. This received reward is assigned to the new state. Just as we minimize the cost function to train a neural network, the reinforcement learning agent has to maximize the overall reward to find the optimal policy for a particular task. This article is an extract from the book Reinforcement Learning with TensorFlow.

How is reinforcement learning different from supervised and unsupervised learning?

In supervised learning, the training dataset has input features, X, and their corresponding output labels, Y. A model is trained on this dataset; test cases having input features, X', are then given to the model as input, and it predicts Y'. In unsupervised learning, only the input features, X, of the training set are given for training; there are no associated Y values. The goal is to create a model that learns to segregate the data into different clusters by understanding the underlying pattern and thereby classifying the data to find some utility. The model is then used on input features X' to predict their similarity to one of the clusters.

Reinforcement learning is different from both supervised and unsupervised learning. Reinforcement learning can guide an agent on how to act in the real world. The interface is broader than the training vectors of supervised or unsupervised learning: it is the entire environment, which can be the real world or a simulated one. Agents are also trained differently, in that the objective is to reach a goal state, unlike in supervised learning, where the objective is to maximize a likelihood or minimize a cost. Reinforcement learning agents receive feedback automatically, in the form of rewards from the environment, unlike in supervised learning, where labeling requires time-consuming human effort.

One of the bigger advantages of reinforcement learning is that phrasing any task's objective in the form of a goal helps in solving a wide variety of problems. For example, the goal of a video game agent is to win the game by achieving the highest score. This also helps in discovering new approaches to achieving the goal. For example, when AlphaGo became the world champion in Go, it found new, unique ways of winning. A reinforcement learning agent is like a human: humans evolved very slowly, whereas an agent reinforces what it learns very quickly. As far as sensing the environment is concerned, neither humans nor artificial intelligence agents can sense the entire world at once. The perceived environment creates a state, in which agents perform actions and land in a new state, that is, a newly perceived environment different from the earlier one. This creates a state space, which can be finite or infinite.

The largest sector interested in this technology is defense. Can reinforcement learning agents replace soldiers that not only walk, but fight and make important decisions?

Basic terminologies and conventions

The following are the basic terminologies associated with reinforcement learning:

Agent: This we create by programming, such that it is able to sense the environment, perform actions, receive feedback, and try to maximize rewards.
Environment: The world where the agent resides. It can be real or simulated.
State: The perception or configuration of the environment that the agent senses. State spaces can be finite or infinite.
Rewards: Feedback the agent receives after any action it has taken. The goal of the agent is to maximize the overall reward, that is, the immediate and the future reward. Rewards are defined in advance, so they must be designed properly to achieve the goal efficiently.
Actions: Anything that the agent is capable of doing in the given environment. Action spaces can be finite or infinite.
SAR triple: (state, action, reward) is referred to as the SAR triple, represented as (s, a, r).
Episode: Represents one complete run of the whole task.

Every task is a sequence of SAR triples. We start in state S(t), perform action A(t), and thereby receive a reward R(t+1) and land in a new state S(t+1). The current state and action pair gives the reward for the next step. Since S(t) and A(t) result in S(t+1), we also have a new triple of (current state, action, new state), that is, [S(t), A(t), S(t+1)], or (s, a, s'). A minimal code sketch of this loop follows.
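To make the convention concrete, here is a minimal sketch of the agent-environment loop. The five-state chain environment and the random policy are invented for illustration; this is not code from the book:

```python
# A toy episode loop: from state S(t), take action A(t), receive reward
# R(t+1) and land in S(t+1), collecting one SAR triple per step.
import random

N_STATES = 5            # states 0..4; reaching state 4 ends the episode

def step(state, action):
    """Move left (-1) or right (+1) along a chain; reward 1 at the goal."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

state, done, episode = 0, False, []
while not done:
    action = random.choice([-1, 1])          # a random policy, for the sketch
    next_state, reward, done = step(state, action)
    episode.append((state, action, reward))  # the (s, a, r) SAR triple
    state = next_state

total = sum(r for _, _, r in episode)
print(f"episode of {len(episode)} steps, total reward = {total}")
```

A learning agent would replace the random policy with one that uses the collected SAR triples to maximize the overall reward.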
Pioneers and breakthroughs in reinforcement learning

Here are some of the pioneers, industry leaders, and research breakthroughs in the field of deep reinforcement learning.

David Silver

Dr. David Silver, with an h-index of 30, heads the reinforcement learning research team at Google DeepMind and is the lead researcher on AlphaGo. David co-founded Elixir Studios and then completed his PhD in reinforcement learning at the University of Alberta, where he co-introduced the algorithms used in the first master-level 9x9 Go programs. After this, he became a lecturer at University College London. He consulted for DeepMind before joining full-time in 2013. David led the AlphaGo project, which produced the first program to defeat a top professional player in the game of Go.

Pieter Abbeel

Pieter Abbeel is a professor at UC Berkeley and was a research scientist at OpenAI. Pieter completed his PhD in computer science under Andrew Ng. His current research focuses on robotics and machine learning, with a particular focus on deep reinforcement learning, deep imitation learning, deep unsupervised learning, meta-learning, learning-to-learn, and AI safety. Pieter also won the NIPS 2016 Best Paper Award.

Google DeepMind

Google DeepMind is a British artificial intelligence company founded in September 2010 and acquired by Google in 2014. It is an industry leader in the domains of deep reinforcement learning and the neural Turing machine. It made news in 2016 when the AlphaGo program defeated Lee Sedol, a 9-dan Go player. Google DeepMind has channeled its focus into two big sectors, energy and healthcare. Here are some of its projects:

In July 2016, Google DeepMind and Moorfields Eye Hospital announced their collaboration to use eye scans to research early signs of diseases leading to blindness
In August 2016, Google DeepMind announced its collaboration with University College London Hospital to research and develop an algorithm to automatically differentiate between healthy and cancerous tissues in the head and neck areas
Google DeepMind's AI reduced Google's data center cooling bill by 40%

The AlphaGo program

As mentioned under Google DeepMind, AlphaGo is a computer program that first defeated Lee Sedol and then Ke Jie, who at the time was the world No. 1 in Go. In 2017, an improved version, AlphaGo Zero, was launched; it defeated AlphaGo 100 games to 0.
Libratus

Libratus is an artificial intelligence computer program designed to play poker by a team led by Professor Tuomas Sandholm at Carnegie Mellon University. Libratus and its predecessor, Claudico, share the same meaning: balanced. In January 2017, it made history by defeating four of the world's best professional poker players in a marathon 20-day poker competition. Though Libratus focuses on playing poker, its designers have highlighted its ability to learn any game with incomplete information in which opponents engage in deception. As a result, they have proposed that the system could be applied to problems in cybersecurity, business negotiation, or medical planning.

You have enjoyed an excerpt on reinforcement learning and learned about breakthrough research in this field. If you want to leverage the power of reinforcement learning techniques, grab the latest edition of Reinforcement Learning with TensorFlow.

Read more
Top 5 tools for reinforcement learning
How to implement Reinforcement Learning with TensorFlow
How to develop a stock price predictive model using Reinforcement Learning and TensorFlow

How Amazon is reinventing Speech Recognition and Machine Translation with AI

Amey Varangaonkar
04 Jul 2018
4 min read
At the recently held AWS Summit in San Francisco, Amazon announced the general availability of two of its premium offerings: Amazon Transcribe and Amazon Translate. What is special about the two products is that customers can now see the power of artificial intelligence in action and use it to solve their day-to-day problems. These offerings from AWS will make it easier for startups and companies looking to adopt and integrate AI into their existing processes to simplify their core tasks, especially those pertaining to speech and language processing.

Effective speech-to-text conversion with Amazon Transcribe

In the AWS Summit keynote, Amazon Solutions Architect Niranjan Hira expressed his excitement while talking about the features of Amazon Transcribe, the automatic speech recognition service by AWS. The API can be integrated with other tools and services offered by Amazon, such as Amazon S3 and QuickSight.

Amazon Transcribe boasts features like:

Simple API: It is very easy to use the Transcribe API to perform speech-to-text conversion, with minimal need for programming.
Timestamp generation: The speech, when converted to text, also includes a timestamp for every word, so that tracking each word becomes easy and hassle-free.
Variety of use cases: The Transcribe API can be used to generate accurate transcripts of any audio or video file of varied quality. Subtitle generation becomes easier using this API, especially for low-quality audio recordings; customer service calls are a very good example.
Easy-to-read text: Transcribe uses cutting-edge deep learning technology to parse text from speech without errors. With appropriate punctuation and grammar in place, the transcripts are very easy to read and understand.

Machine translation simplified with Amazon Translate

Amazon Translate is a machine translation service offered by Amazon. It makes use of neural networks and advanced deep learning techniques to deliver accurate, high-quality translations. Key features of Amazon Translate include:

Continuous training: The architecture of this service is built in such a way that the neural networks keep learning and improving.
High accuracy: Continuous learning by the translation engines from new and varied datasets results in higher accuracy of machine translations. The machine translation capability offered by this service is almost 30% more efficient than human translation.
Easy integration with other AWS services: With a simple API call, Translate allows you to integrate the service into third-party applications to enable real-time translation capabilities.
Highly scalable: Regardless of the volume, Translate does not compromise the speed and accuracy of machine translation.

You can learn more about Amazon Translate from Yoni Friedman's keynote at the AWS Summit.

With businesses slowly migrating to the cloud, it is clear that all the major cloud vendors - mainly Amazon, Google, and Microsoft - are doing everything they can to establish their dominance. Google recently launched Cloud ML for GCP, which offers machine learning and predictive analytics services to businesses. Microsoft's Azure Cognitive Services offer effective machine translation services as well, and are steadily gaining momentum. With these releases, the onus was on Amazon to respond, and they have done so in style.
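For a sense of how the two services look from code, here is a minimal boto3 sketch. The job name, bucket, and file are hypothetical, and it assumes AWS credentials and a default region are already configured:

```python
# A minimal sketch of calling both services via boto3. The S3 URI and job
# name below are hypothetical placeholders, not real resources.
import boto3

# Start an asynchronous transcription job for an audio file in S3.
transcribe = boto3.client("transcribe")
transcribe.start_transcription_job(
    TranscriptionJobName="support-call-42",                     # hypothetical
    Media={"MediaFileUri": "s3://my-bucket/support-call.mp3"},  # hypothetical
    MediaFormat="mp3",
    LanguageCode="en-US",
)

# Translate a short string synchronously.
translate = boto3.client("translate")
response = translate.translate_text(
    Text="Machine translation is transforming the cloud.",
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(response["TranslatedText"])
```

Transcription runs as an asynchronous job whose transcript is fetched later, while translation returns its result in the API response itself.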
With the Transcribe and Translate APIs, Amazon's goal of making it easier for startups and small-scale businesses to adopt AWS and incorporate AI seems to be on track. These services will also help AWS differentiate its cloud offerings, given that computing and storage resources are provided by its rivals as well.

Read more
Verizon chooses Amazon Web Services (AWS) as its preferred cloud provider
Tensor Processing Unit (TPU) 3.0: Google's answer to cloud-ready Artificial Intelligence
Amazon is selling facial recognition technology to police

Machine learning APIs for Google Cloud Platform

Amey Varangaonkar
28 Jun 2018
7 min read
Google Cloud Platform (GCP) is considered one of the Big 3 cloud platforms, alongside Microsoft Azure and AWS. GCP is a widely used cloud solution supporting AI capabilities to design and develop smart models that turn your data into insights at a cheap, affordable cost. The following excerpt is taken from the book 'Cloud Analytics with Google Cloud Platform', authored by Sanket Thodge. GCP offers many machine learning APIs, of which we take a look at the three most popular:

Cloud Speech API

A powerful API from GCP! It enables the user to convert speech to text using a neural network model, and recognizes over 100 languages from around the world. It can also filter out unwanted noise and content from the text, under various types of environments. It supports context-aware recognition and works on any device, any platform, anywhere, including IoT. Its features include Automatic Speech Recognition (ASR), global vocabulary, streaming recognition, word hints, real-time audio support, noise robustness, inappropriate content filtering, and support for integration with other GCP APIs. In short, the architecture of the Cloud Speech API enables speech-to-text conversion by ML.

The components used by the Speech API are:

REST API or Google Remote Procedure Call (gRPC) API
Google Cloud Client Library
JSON API
Python
Cloud Datalab
Cloud Data Storage
Cloud Endpoints

The applications of the model include:

Voice user interfaces
Domotic appliance control
Preparation of structured documents
Aircraft / direct voice outputs
Speech-to-text processing
Telecommunication

Usage is free for up to 60 minutes per month; beyond that, it is charged at $0.006 per 15 seconds. Now that we have covered the concepts and applications of the model, here are some use cases where we can implement it:

Solving crimes with voice recognition: AGNITIO, a voice biometrics specialist, partnered with Morpho (Safran) to bring voice ID technology into its multimodal suite of criminal identification products.
Buying products and services with the sound of your voice: Another popular and mainstream application of biometrics in general is mobile payments; voice recognition has also made its way into this highly competitive arena.
A hands-free AI assistant that knows who you are: Almost any mobile phone nowadays carries voice recognition software in the form of AI machine learning algorithms.

Cloud Translation API

Natural language processing (NLP) is a part of artificial intelligence that focuses on machine translation (MT), which has been the main focus of the NLP community for many years. MT deals with translating text in a source language into text in a target language. The Cloud Translation API provides an interface to translate an input string from one language into a targeted language; it is highly responsive, scalable, and dynamic in nature. The API enables translation among 100+ languages and also supports accurate, automatic language detection. It can read a web page's contents and translate them into another language; the input need not be text extracted from a document. The Translation API supports features such as programmatic access, text translation, language detection, continuous updates, an adjustable quota, and affordable pricing. In other words, the Cloud Translation API is an adaptive machine translation algorithm.
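As a quick illustration, here is a minimal sketch of calling the Translation API from Python. This is an assumption rather than the book's own example: it requires the google-cloud-translate client library and application default credentials, and it uses the v2 client surface:

```python
# A minimal Cloud Translation API sketch (v2 client; assumes the
# google-cloud-translate library is installed and credentials configured).
from google.cloud import translate_v2 as translate

client = translate.Client()

result = client.translate(
    "Cloud analytics turns data into insight.",
    target_language="hi",   # source language is detected automatically
)
print(result["detectedSourceLanguage"], "->", result["translatedText"])
```

The response carries both the translation and the automatically detected source language, matching the detection feature described above.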
The components used by this model are:

REST API
Cloud Datalab
Cloud Data Storage
Python, Ruby client libraries
Cloud Endpoints

The most important application of the model is the conversion of a regional language into a foreign language. The cost of text translation and language detection is $20 per 1 million characters.

Use cases

Now that we have covered the concepts and applications of the API, let's look at two use cases where it has been successfully implemented: rule-based machine translation, and local tissue response to injury and trauma. We will discuss each in the following sections.

Rule-based machine translation

The steps to implement rule-based machine translation successfully are as follows:

Input text
Parsing
Tokenization
Compare the rules to extract the meaning of prepositional phrases
Map words of the input language to words of the target language
Frame the sentence in the target language

Local tissue response to injury and trauma

We can draw an analogy to the machine translation process from the response of local tissue to injury and trauma. The human body follows a process similar to machine translation when dealing with injuries, which we can roughly describe as follows:

Hemorrhaging from lesioned vessels and blood clotting
Blood-borne physiological components, leaking from the usually closed sanguineous compartment, are recognized as foreign material by the surrounding tissue, since they are not tissue-specific
Inflammatory response mediated by macrophages (and more rarely by foreign-body giant cells)
Resorption of the blood clot
Ingrowth of blood vessels and fibroblasts, and the formation of granulation tissue
Deposition of an unspecific but biocompatible type of repair (scar) tissue by fibroblasts

Cloud Vision API

The Cloud Vision API is a powerful image analytics tool. It enables users to understand the content of an image, and helps in finding various attributes or categories of an image, such as labels, web, text, document, properties, and safe search, returning the results for the image as JSON. In the labels field, there are many sub-categories, such as text, line, font, area, graphics, screenshots, and points. The web field includes details such as how much of the image is graphics, the percentage of text, the percentage of empty area versus area covered by text, and whether the image is partially or fully matched on the web. The document field consists of blocks of the image with detailed descriptions, and the properties field visualizes the colors used in the image. Unwanted or inappropriate content is flagged by safe search. The main features of this API are label detection, explicit content detection, logo and landmark detection, face detection, and web detection; to extract text, the API uses Optical Character Recognition (OCR), with support for many languages. It does not support face recognition. In essence, the API extracts quantitative information from images, taking an image as input and producing numbers and text as output.

The components used in the API are:

Client Library
REST API
RPC API
OCR language support
Cloud Storage
Cloud Endpoints

Applications of the API include:

Industrial robotics
Cartography
Geology
Forensics and military
Medicine and healthcare

Cost: free of charge for the first 1,000 units per month; after that, pay as you go.
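Before moving to the use cases, here is a minimal label detection sketch. Again, this is an illustrative assumption rather than the book's code: it requires the google-cloud-vision client library and configured credentials, photo.jpg is a hypothetical local file, and the client surface varies slightly across library versions:

```python
# A minimal Cloud Vision label detection sketch (newer library versions
# use vision.Image; older ones use vision.types.Image). "photo.jpg" is a
# hypothetical local file.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```

Each returned annotation pairs a label with a confidence score, which is the quantitative output described above.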
Use cases

This technique can be successfully implemented in:

Image detection using an Android or iOS mobile device
Retinal image analysis (ophthalmology)

We will discuss each of these use cases in the following topics.

Image detection using an Android or iOS mobile device

The Cloud Vision API can be successfully used to detect images from a smartphone. The steps to do this are simple:

Input the image
Run the Cloud Vision API
Execute the methods for detection of face, label, text, web, and document properties
Generate the response in the form of a phrase or string
Populate the image details as a text view

Retinal image analysis - ophthalmology

Similarly, the API can be used to analyze retinal images. The steps to implement this are as follows:

Input the images of an eye
Estimate the retinal biomarkers
Process the image to remove the affected portion without losing necessary information
Identify the location of specific structures
Identify the boundaries of the object
Find similar regions in two or more images
Quantify the image with the damaged retinal portion

You can learn a lot more about the machine learning capabilities of GCP in the official documentation. If you found the above excerpt useful, make sure you check out the book 'Cloud Analytics with Google Cloud Platform' for more information on why GCP is a top cloud solution for machine learning and AI.

Read more
Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)
How machine learning as a service is transforming cloud
Google announce the largest overhaul of their Cloud Speech-to-Text

The New AI Cold War Between China and the USA

Neil Aitken
28 Jun 2018
6 min read
The Cold War between the United States and Russia ended in 1991. However, considering the 'behind the scenes' behavior of the world's two current superpowers, China and the USA, another might just be beginning. This time around, many believe that the real battle doesn't relate to the trade deficit between the two countries, despite news stories detailing the escalation of trade tariffs. Over the next decade and a half, the real battle between China and the USA will take place in the technology arena, specifically in the area of Artificial Intelligence, or AI.

China's not shy about its AI ambitions

China has made its goals clear when it comes to AI: it has publicly announced its plan to be the world leader in Artificial Intelligence by 2030. The country has learned a hard lesson, missing out on previous tech booms, notably the race for internet supremacy early this century. Now, it is taking a far more proactive stance. The AI market is estimated to be worth $150 billion per year by 2030, slightly over a decade from now, and China has made very clear public statements that the country wants it all. The US, in contrast, has a number of private companies striving to carve out a leadership position in AI, but no holistic policy. Quite the contrary, in fact: Trump's government says, "There is no need for an AI moonshot, and minimizing government interference is the best way to make sure the technology flourishes."

What makes China so dangerous as an AI threat?

China's background and current circumstances give it a set of valuable strategic advantages when it comes to AI. AI solutions are based, primarily, on two things. First, and of critical importance, is the amount of data available to 'train' an AI algorithm and the relative ease or difficulty of obtaining access to it. Second is the algorithm which sorts the data, looking for the patterns and insights, derived from research, that are used to optimize the AI tools which interpret it. China leads the world on both fronts.

China has more data: China's population is four times larger than the US's, giving it a massive data advantage. China has a total of 730 million daily internet users and 704 million smartphone mobile internet users, and each of those connected individuals uses their phone, laptop, or tablet online each day. Those digital interactions leave logs of location, time, action performed, and many other variables. In sum, China's huge population is constantly generating valuable data which can be mined for value.

Chinese regulations give public and private agencies easier access to this data: Few countries have exemplary records when it comes to human rights. Both Australia and the US, for example, have been rebuked by the UN for their treatment of immigrants in recent years. Questions have been asked of China too. Some suggest that China's centralized government, and its allegedly somewhat shady history when it comes to human rights, mean it can provide internet companies with more data, more easily, than their private equivalents in the US could dream of. Chinese cybersecurity laws require companies doing business in the country to store their data locally. The government has placed one state representative on the board of each of the major tech companies, giving it direct, unfettered central government influence over the strategic direction and intent of those companies, especially when it comes to coordinating the distribution of the data they obtain. In the US, by contrast, data leakage is one of the most prominent news stories of 2018.
Given Facebook's presentation to Congress around the Facebook/Cambridge Analytica data sharing scandal, it would be hard to claim that US companies have access to data beyond what each company gathers on its own while competing to evolve AI solutions fastest.

It's more secretive: China protects its advantage by limiting other countries' access to its findings and information related to AI. At the same time, China takes advantage of the open publication of cutting-edge ideas generated by scientists elsewhere in the world.

How China is doubling down on its natural advantage in AI solution development

A number of metrics show China's growing advantage in the area. China is investing more money in AI and leading the world in the number of university-led research papers on AI being published. China overtook the US in AI funds allocation in 2015 and has been increasing investment in the area since (source: Wall Street Journal). China now performs more research into AI than the US, as measured by the number of published, peer-reviewed scientific papers (source: HBR).

Why 'network effects' will decide the ultimate winner in the AI arms race

You won't see evidence of a Cold War in the behavior of world leaders. The handshakes are firm and the visits are cordial; everybody smiles when they meet at the G8. However, a look behind the curtain clearly shows a 21st-century arms race underway, led by AI-related investments in both countries. Network effects ensure that there is often only one winner in a fight for technological supremacy. Whoever has the best product for a given application wins the most users, and the data obtained from those users' interactions with the tool is used to hone its performance, creating a virtuous circle. The result is evident in almost every sphere of tech: network effects explain why most people use only Google, why there's only one Facebook, and how Netflix has overtaken cable TV in the US as the primary source of video entertainment. Ultimately, there is likely to be only one winner in the war surrounding AI, too.

From a military perspective, the advantage China has in its starting point for AI solution development could be the deciding factor. As we've seen, China has more people, with more devices, generating more data. That is likely to help the country develop workable AI solutions faster. It ingests the hard-won advances that US data scientists develop and share, but does not share its own. Finally, it simply outspends and out-researches the US, investing more in AI than any other country. China's coordinated approach outpaces the US's market-based solution with every step. The country with the best AI solutions for each application will gain a 'winner takes all' advantage, and the winning hand in the $300 billion game of AI market ownership.

Read more
We must change how we think about AI, urge AI founding fathers
Does AI deserve to be so Overhyped?
Alarming ways governments are using surveillance tech to watch you

7 Popular Applications of Artificial Intelligence in Healthcare

Guest Contributor
26 Jun 2018
5 min read
With the advent of automation, artificial intelligence (AI), and machine learning, we hear about their applications regularly in news across industries. This has been especially true for healthcare, where various hospitals, health insurance companies, healthcare units, and so on have been impacted by AI in more substantial and concrete ways than other industries. In recent years, healthcare startups and life science organizations have ventured into artificial intelligence technology, making it one of the areas most heavily invested in by VCs. Various organizations with ties to healthcare are leveraging advances in artificial intelligence algorithms for remote patient monitoring, medical imaging, and diagnostics, and implementing newly developed, sophisticated methods and applications into the system. Let's explore some of the most popular AI applications which have revamped the healthcare industry.

Proper maintenance and management of medical records

Assembling, analyzing, and maintaining medical information and records is one of the most common applications of AI. With the coming of digital automation, robots are being used for collecting and tracing data for proper data management and analysis, bringing down manual labor to a considerable extent.

Computerized medical consultation and treatment paths

Medical consultation apps like DocsApp allow a user to talk to experienced specialist doctors by chat or call, directly from their phone, in a private and secure manner. Users can report their symptoms in the app, which ensures they are connected to the right specialist physician as per their medical history. This has been made possible by AI systems. AI also aids in treatment design: analyzing data, making notes and reports from a patient's file, and thereby helping to choose the right customized treatment as per the patient's medical history.

Eliminating monotonous manual labor

Various medical tasks, like analyzing X-ray reports, test reports, and CT scans, can be executed by robots and other mechanical devices more accurately. Radiology is one discipline wherein human supervision and control have dropped to a substantial level due to the extensive use of AI.

Aiding drug manufacture and creation

Generally, billions of dollars are spent on developing pharmaceuticals through clinical trials, and it takes almost a decade or two to manufacture a life-saving drug. But now, with the arrival of AI, the entire drug creation procedure has been simplified and has become fairly economical as well. Even in the recent outbreak of the Ebola virus, AI was used for drug discovery, to redesign solutions, and to scan existing medicines that might eradicate the plague.

Regular health monitoring

In the current era of digitization, there are wearable health trackers, like Garmin and Fitbit, which can monitor your heart rate and activity levels. These devices help users keep a close check on their health by setting up an exercise plan or reminding them to stay hydrated. All this information can also be shared with your physician, through AI systems, to track your current health status.

Early and accurate detection of medical disorders

AI helps in spotting carcinogenic and cardiovascular disorders at an early stage, and also aids in predicting health issues that people are likely to contract for hereditary or genetic reasons.
Enhancing medical diagnosis and medication management

Medical diagnosis and medication management are the ultimate data-based problems in the healthcare industry. IBM's Watson, a deep learning system, has simplified medical investigation and is being applied to oncology, specifically for cancer diagnosis. Previously, human doctors would collect patient data, research it, and conduct clinical trials; with AI, the manual effort has reduced considerably. For medication management, certain apps have been developed to monitor the medicines taken by a patient: the cellphone camera, in conjunction with AI technology, checks whether patients are taking their medication as prescribed. This also helps in detecting serious medical problems and in tracking patients' adherence to medication and participants' behavior in scientific trials.

To conclude, we are gradually embarking on a new era of cognitive technology powered by AI-based systems. In the coming years, we can expect AI to transform every area of the healthcare industry that it touches. Experts are constantly looking for ways and means to organize the existing structure and power up healthcare on the basis of new AI technology. The ultimate goals are to improve the patient experience, build better public health management, and reduce costs by automating manual labor.

Author Bio

Maria Thomas is the Content Marketing Manager and Product Specialist at GreyCampus, with eight years of rich experience in professional certification courses like PMI-Project Management Professional, PMI-ACP, PRINCE2, ITIL (Information Technology Infrastructure Library), Big Data, Cloud, Digital Marketing, and Six Sigma.

Read more
Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions
How IBM Watson is paving the road for Healthcare 3.0

Computer vision is growing quickly. Here's why.

Aaron Lazar
12 Jun 2018
6 min read
Computer Vision is one of those technologies that has grown in leaps and bounds over the past few years. If you look back 10 years, that wasn't the case, as CV was more a topic of academic interest. Now, however, computer vision is clearly a driver and beneficiary of the renowned Artificial Intelligence. Through this article, we'll understand the factors that have sparked the rise of Computer Vision.

A billion $ market

You heard it right! Computer Vision is a billion dollar market, thanks to the likes of Intel, Amazon and Netflix investing heavily in the technology's development. And from the way events are unfolding, the market is expected to hit a record $17 billion by 2023. That's a compound annual growth rate of over 7% from 2018 to 2023, and it is a joint figure for both the hardware and software components related to Computer Vision.

Under the spotlight

Let's talk a bit about a few companies that are already taking advantage of Computer Vision, and are benefiting from it.

Intel

Several large organisations are investing heavily in Computer Vision. Last year, we saw Intel invest $15 Billion towards acquiring Mobileye, an Israeli autonomous driving startup. Intel published findings stating that the autonomous vehicle market itself would rise to $7 Trillion by 2050. The autonomous vehicle industry will be one of the largest implementers of computer vision technology: these vehicles will use Computer Vision to "see" their surroundings and communicate with other vehicles.

Netflix

Netflix, on the other hand, is using Computer Vision for more creative purposes. With the rise of Netflix's original content, the company is investing in Computer Vision to harvest static image frames directly from the source videos to provide a flexible source of raw artwork, which is used for digital merchandising. For example, within a single episode of Stranger Things there are nearly 86k static video frames that would have had to be analysed by human teams to identify the most appropriate stills to be featured. This meant first going through each of those 86k images, then understanding what worked for viewers of the previous episode, and then applying that learning in the selection of future images. Need I estimate how long that would have taken? Now, Computer Vision performs this task seamlessly, with much higher accuracy than humans.

Pinterest

Pinterest, the popular social networking application, sees millions of images, GIFs and other visuals shared every day. In 2017, they released a feature called Lens that allows users to use their phone's camera to search for similar-looking decor, food and clothing in the real world. Users can simply point their camera at an item and Pinterest will show them similar styles and ideas. Recent reports reveal that Pinterest's revenue has grown by a staggering 58%!

National surveillance with CCTV

The world's biggest AI startup, SenseTime, provides China with the world's largest and most sophisticated CCTV network. With over 170 Mn CCTV cameras, government authorities and police departments are able to identify people seamlessly; officers do this by wearing smart glasses that have facial recognition capabilities. Bring this technology to Dubai and you've got a supercop in a supercar! The nationwide surveillance project, named Skynet, began as early as 2005, although recent advances in AI have given it a boost. Reading through discussions like these is real fun.
People used to quip that such "fancy" machines were only for the screen. If only they knew that such machines would be a reality just a few years from then. Clearly, computer vision is one of the most highly valued commercial applications of machine learning, and when integrated with AI, it's an offer only a few can resist!

Star acquisitions that matter

Several acquisitions have taken place in the field of Computer Vision in the past two years alone, the most notable of them being Intel's acquisition of Movidius, to the tune of $400 Mn. Here are some of the others that have happened since 2016:

Twitter acquires Magic Pony Technology for $150 Mn
Snap Inc acquires Obvious Engineering for $47 Mn
Salesforce acquires MetaMind for $32.8 Mn
Google acquires Eyefluence for $21.6 Mn

This shows the potential of the computer vision market and how big players are in the race to dive deep into the technology.

Three little things driving computer vision

I would say there are 3 clear growth factors contributing to the rise of Computer Vision: Deep Learning, advancements in hardware, and the growth of datasets.

Deep Learning

The advancements in the field of Deep Learning are bound to boost Computer Vision. Deep Learning algorithms are capable of processing tonnes of images, much more accurately than humans. Take feature extraction, for example. The primary pain point with feature extraction is that you have to choose which features to look for in a given image. This becomes cumbersome, almost impossible, when the number of classes you are trying to define starts to grow: there are so many features that you have to deal with a plethora of parameters that have to be fine-tuned. Deep Learning simplifies this process for you.

Advancements in hardware

With new hardware like GPUs capable of processing petabytes of data, algorithms can run faster and more efficiently. This has led to advancements in real-time processing and vision capabilities. Pioneering hardware manufacturers like NVIDIA and Intel are in a race to create more powerful and capable hardware to support deep learning capabilities for Computer Vision.

Growth of the datasets

Training Deep Learning algorithms isn't a daunting task anymore. There are plenty of open source datasets that you can choose from to train your algorithms; the more the data, the better the training and accuracy. Here are some of the most notable datasets for computer vision:

ImageNet, with 15 million images, is a massive dataset
Open Images has 9 million images
Microsoft Common Objects in Context (COCO) has around 330K images
CALTECH-101 has approximately 9,000 images

Where tha money at?

The job market for Computer Vision is on the rise too, with Computer Vision featuring at #3 on the list of top jobs in 2018, according to Indeed. Organisations are looking for Computer Vision Engineers who are well versed in writing efficient algorithms for handling large amounts of data.

Source: Indeed.com

So is it the right time to invest in, or perhaps learn, Computer Vision? You bet it is! It's clear that Computer Vision is a rapidly growing market and will see sustained growth for the next few years. If you're just planning to start out, or even if you're already competent in using tools for Computer Vision, here are some resources to help you skill up with popular CV tools and techniques:

Introducing Intel's OpenVINO computer vision toolkit for edge computing
Top 10 Tools for Computer Vision
Computer Vision with Keras, Part 1

7 of the best machine learning conferences for the rest of 2018

Richard Gall
12 Jun 2018
8 min read
We're just about half way through the year - scary, huh? But there's still time to attend a huge range of incredible machine learning conferences in 2018. Given that in this year's Skill Up survey developers working in every field told us that they're interested in learning machine learning, it will certainly be worth your while (and money). We fully expect this year's machine learning conference circuit to capture the attention of those beyond the analytics world.

The best machine learning conferences in 2018

But which machine learning conferences should you attend for the rest of the year? There's a lot out there, and they're not always that cheap. Let's take a look at 7 of the best machine learning conferences for the rest of this year.

AI Summit London

When and where? June 12-14 2018, Kensington Palace and ExCel Center, London, UK.
What is it? AI Summit is all about AI and business - it's as much for business leaders and entrepreneurs as it is for academics and data scientists. The summit covers a lot of ground, from pharmaceuticals to finance to marketing, but the main idea is to explore the incredible ways Artificial Intelligence is being applied to a huge range of problems.
Who is speaking? According to the event's website, there are more than 400 speakers at the summit. The keynote speakers include a number of impressive CEOs, including Patrick Hunger, CEO of Saxo Bank, and Helen Vaid, Global Chief Customer Officer of Pizza Hut.
Who's it for? This machine learning conference is primarily for anyone who would like to consider themselves a thought leader. Don't let that put you off though; with a huge number of speakers from across the business world, it is a great opportunity to see what the future of AI might look like.

ML Conference, Munich

When and where? June 18-20, 2018, Sheraton Munich Arabella Park Hotel, Munich, Germany.
What is it? Munich's ML Conference is also about the applications of machine learning in the business world. But it's a little more practical-minded than AI Summit - it's more about how to actually start using machine learning from a technological standpoint.
Who is speaking? Speakers at ML Conference are researchers and machine learning practitioners. Alison Lowndes from NVIDIA will be speaking, likely offering some useful insight on how NVIDIA is helping make deep learning accessible to businesses; Christian Petters, solutions architect at AWS, will also be speaking on the important area of machine learning in the cloud.
Who's it for? This is a good conference for anyone starting to become acquainted with machine learning. Obviously data practitioners will be the core audience here, but sysadmins and app developers starting to explore machine learning would also benefit from this sort of machine learning conference.

O'Reilly AI Conference, San Francisco

When and where? September 5-7 2018, Hilton Union Square, San Francisco, CA.
What is it? According to O'Reilly's page for the event, this conference is being run to counter those conferences built around academic AI research. It's geared (surprise, surprise) towards the needs of businesses. Of course, there's a little bit of aggrandizing marketing spin there, but the idea is fundamentally a good one: it's all about exploring how cutting-edge AI research can be used by businesses. It's somewhere between the two above - practical enough to be of interest to engineers, but with enough blue-sky scope to satisfy the thought leaders.
Who is speaking? O'Reilly have some great speakers here.
There's someone else making an appearance for NVIDIA - Gaurav Agarwal, who's heading up the company's automated vehicles project. There's also Sarah Bird from Facebook, who will likely have some interesting things to say about how her organization is planning to evolve its approach to AI over the years to come.
Who is it for? This is for those working at the intersection of business and technology. Data scientists and analysts grappling with strategic business questions, and CTOs and CMOs beginning to think seriously about how AI can change their organization, will all find something here.

O'Reilly Strata Data Conference, New York

When and where? September 12-13, 2018, Javits Center, New York, NY.
What is it? O'Reilly's Strata Data Conference is slightly more Big Data focused than its AI Conference. Yes, it will look at AI and deep learning, but it's going to tackle those areas from a big data perspective first and foremost. It's more established than the AI Summit (it actually started back in 2012 as Strata + Hadoop World), so there's a chance it will have a slightly more conservative vibe. That could be a good or bad thing, of course.
Who is speaking? This is one of the biggest Big Data conferences on the planet. As you'd expect, the speakers are from some of the biggest organizations in the world, from Cloudera to Google and AWS. There's a load of names we could pick out, but the one we're most excited about is Varant Zanoyan from Airbnb, who will be talking about Zipline, Airbnb's new data management platform for machine learning.
Who's it for? This is a conference for anyone serious about big data. There's going to be a considerable amount of technical detail here, so you'll probably want to be well acquainted with what's happening in the big data world.

ODSC Europe 2018, London

When and where? September 19-22, Novotel West, London, UK.
What is it? The Open Data Science Conference is very much all about the open source communities that are helping push data science, machine learning and AI forward. There's certainly a business focus, but the event is as much about collaboration and ideas. They're keen to stress how mixed the crowd is at the event: from data scientists to web developers, academics and business leaders, ODSC is all about inclusivity. It's also got a clear practical bent. Everyone will want different things from the conference, but learning is key here.
Who is speaking? ODSC haven't yet listed speakers on their website, simply stating that "our speakers include some of the core contributors to many open source tools, libraries, and languages". This indicates the direction of the event - community driven, and all about the software behind it.
Who's it for? More than any of the other machine learning conferences listed here, this is probably the one that really is for everyone. Yes, it might be more technical than theoretical, but it's designed to bring people into projects. Speakers want to get people excited, whether they're an academic, app developer or CTO.

MLConf SF, San Francisco

When and where? November 14 2018, Hotel Nikko, San Francisco, CA.
What is it? MLConf has a lot in common with ODSC. The focus is on community and inclusivity rather than being overtly corporate. However, it is very much geared towards cutting-edge research from people working in industry and academia - this means it has a little more of a specialist angle than ODSC.
Who is speaking? At the time of writing, MLConf are on the lookout for speakers.
If you're interested, submit an abstract - guidelines can be found here. However, the event does have Uber's Senior Data Science Manager Franziska Bell scheduled to speak, which is sure to be an interesting discussion on the organization's current thinking and the challenges that come with the huge amounts of data at its disposal.
Who's it for? This is an event for machine learning practitioners and students. Level of expertise isn't strictly an issue - an inexperienced data analyst could get a lot from this. With some key figures from the tech industry, there will certainly be something for those in leadership and managerial positions too.

AI Expo, Santa Clara

When and where? November 28-29, 2018, Santa Clara Convention Center, Santa Clara, CA.
What is it? Santa Clara's AI Expo is one of the biggest machine learning conferences around. With four different streams - AI technologies, AI and the consumer, AI in the enterprise, and data analytics for AI and IoT - the event organizers are really trying to make their coverage comprehensive.
Who is speaking? The event's website boasts 75+ speakers. The most interesting include Elena Grewal, Airbnb's Head of Data Science, Matt Carroll, who leads developer relations at Google Assistant, and LinkedIn's Senior Director of Data Science, Xin Fu.
Who is it for? With so much on offer, this has wide appeal: from marketers to data analysts, there's likely to be something for everyone. However, with so much going on, you do need to know what you want to get out of an event like this - so be clear on what AI means to you and what you want to learn.

Did we miss an important machine learning conference? Are you attending any of these this year? Let us know in the comments - we'd love to hear from you.

5 JavaScript machine learning libraries you need to know

Pravin Dhandre
08 Jun 2018
3 min read
Technologies like machine learning, predictive analytics, natural language processing and artificial intelligence are among the most trending and innovative technologies of the 21st century. Whether it is enterprise software or a simple photo editing application, they are all backed and rooted in machine learning technology, making them smart enough to be a friend to humans. Until now, the tools and frameworks capable of running machine learning were mostly developed in languages like Python, R and Java. Recently, however, the web ecosystem has taken machine learning into its fold and is transforming web applications with it. Today in this article, we will look at the most useful and popular libraries for performing machine learning in your browser, without the need for extra software, compilers, installations or GPUs.

TensorFlow.js

GitHub: 7.5k+ stars

With the growing popularity of TensorFlow among machine learning and deep learning enthusiasts, Google recently released TensorFlow.js, the JavaScript version of TensorFlow. With this library, JavaScript developers can train and deploy their machine learning models in the browser faster and without much hassle. The library is speedy, flexible, scalable and a great way to get a practical taste of machine learning (a minimal sketch appears at the end of this article). With TensorFlow.js, importing existing models and retraining pretrained models is a piece of cake. To check out examples of TensorFlow.js, visit the GitHub repository.

ConvNetJS

GitHub: 9k+ stars

ConvNetJS provides a neural network implementation in JavaScript, with numerous demos available in its GitHub repository. The framework has a good number of active followers among programmers and coders. The library supports various neural network modules and popular machine learning techniques like classification and regression. Developers who are interested in getting reinforcement learning into the browser, or in training complex convolutional networks, can visit the ConvNetJS official page.

Brain.js

GitHub: 8k+ stars

Brain.js is another addition to the web development ecosystem that brings smart features into the browser with just a few lines of code. Using Brain.js, one can easily create simple neural networks and develop smart functionality in browser applications without much complexity. It is already preferred by web developers for client-side applications like in-browser games, ad placement, or character recognition. You can check out its GitHub repository to see a complete demonstration of approximating the XOR function using Brain.js (a minimal version also appears at the end of this article).

Synaptic

GitHub: 6k+ stars

Synaptic is a well-liked machine learning library for training recurrent neural networks, as it has a built-in architecture-free generalized algorithm. A few of the built-in architectures include multilayer perceptrons, LSTM networks and Hopfield networks. With Synaptic, you can develop various in-browser applications such as Paint an Image, Learn Image Filters, Self-Organizing Map or Reading from Wikipedia.

Neurojs

GitHub: 4k+ stars

Another recently developed framework, especially for reinforcement learning tasks in your browser, is neurojs. It mainly focuses on Q-learning, but can be used for any type of neural network based task, whether it is building a browser game or an autonomous driving application. Some of the exciting features this library has to offer are a full-stack neural network implementation, extended support for reinforcement learning tasks, import/export of weight configurations and many more.
To see the complete list of features, visit the GitHub page.

How should web developers learn machine learning?
NVIDIA open sources NVVL, library for machine learning training
Build a foodie bot with JavaScript
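To give you a feel for how little code these libraries demand, here are two minimal sketches. The first follows the basic TensorFlow.js workflow (define a model, train it, predict); the single-layer configuration and the toy dataset here are illustrative assumptions on our part, not settings taken from any particular demo.

```javascript
// A minimal TensorFlow.js sketch: fit y = 2x - 1, then predict for a new input.
// Assumes the @tensorflow/tfjs package is installed (npm install @tensorflow/tfjs).
import * as tf from '@tensorflow/tfjs';

async function run() {
  // One dense layer with a single unit is enough for a linear relationship.
  const model = tf.sequential();
  model.add(tf.layers.dense({units: 1, inputShape: [1]}));
  model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});

  // Toy training data following y = 2x - 1.
  const xs = tf.tensor2d([-1, 0, 1, 2, 3, 4], [6, 1]);
  const ys = tf.tensor2d([-3, -1, 1, 3, 5, 7], [6, 1]);

  await model.fit(xs, ys, {epochs: 250});

  // Should print a value close to 9 for the input 5.
  model.predict(tf.tensor2d([5], [1, 1])).print();
}

run();
```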
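And here is the XOR approximation mentioned in the Brain.js section above, reduced to its essentials; the network options are left at the library's defaults, so the hidden-layer sizing is chosen for you.

```javascript
// A minimal Brain.js sketch: approximate XOR with a small feedforward network.
// Assumes the brain.js package is installed (npm install brain.js).
const brain = require('brain.js');

const net = new brain.NeuralNetwork();

// The XOR truth table serves as the entire training set.
net.train([
  {input: [0, 0], output: [0]},
  {input: [0, 1], output: [1]},
  {input: [1, 0], output: [1]},
  {input: [1, 1], output: [0]},
]);

// Prints the network's output for [0, 1], which should be close to 1.
console.log(net.run([0, 1]));
```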

Top languages for Artificial Intelligence development

Natasha Mathur
05 Jun 2018
11 min read
Artificial Intelligence is one of the hottest technologies around right now. From work colleagues to your boss, chances are that most people (yourself included) wish to create the next big AI project. Artificial Intelligence is a vast field, and with so many languages to choose from, it can get a bit difficult to pick the one that will bring the most value to your project. For anyone wanting to dive into the AI space, the initial stage of choosing the right language can really decelerate the development process. Moreover, making the right choice of language for Artificial Intelligence development depends on your skills and needs. Following are the top 5 programming languages for Artificial Intelligence development:

1. Python

Python, hands down, is the number one programming language when it comes to Artificial Intelligence development. Not only is it one of the most popular languages in the fields of data science, machine learning, and Artificial Intelligence in general, it is also popular among game developers, web developers, cybersecurity professionals and others. It offers a ton of libraries and frameworks in Machine Learning and Deep Learning that are extremely powerful and essential for AI development, such as TensorFlow, Theano, Keras, and scikit-learn. Python is the go-to language for AI development for most people, novices and experts alike.

Pros
It's quite easy to learn due to its simple syntax, which helps in implementing AI algorithms in a quick and easy manner.
Development is faster in Python as compared to Java, C++ or Ruby.
It is a multi-paradigm programming language, supporting object-oriented, functional and procedure-oriented styles.
Python has a ton of libraries and tools to offer. Libraries such as scikit-learn, NumPy, and CNTK are hugely popular.
It is a portable language and can be used on multiple operating systems, namely Windows, Mac OS, Linux, and Unix.

Cons
Integrating AI systems with non-Python infrastructure can be painful. For example, for an infrastructure built around Java, it would be advisable to build deep learning models in Java rather than Python.

If you are a data scientist, a machine learning developer or just a domain expert like a bioinformatician who hasn't yet learned a programming language, Python is your best bet. It is easy to learn, translates equations and logic well into a few lines of code, and has a rich development ecosystem.

2. C++

C++ comes second on the list of top programming languages for Artificial Intelligence development. There are cases where C++ supersedes Python, even though it is not the most common language for AI development. For instance, when working in an embedded environment where you don't want the overhead of a Java Virtual Machine or a Python interpreter, C++ is a perfect choice. C++ also has some popular libraries and frameworks in AI, machine learning and deep learning, namely mlpack, Shark, OpenNN, Caffe, and Dlib.

Pros
Execution in C++ is very fast, which is why it can be the go-to language for AI projects that are time-sensitive.
It offers substantial use of algorithms.
It uses statistical AI techniques quite effectively.
Data hiding and inheritance make it possible to reuse existing code during the development process.
It is also suitable for machine learning and neural networks.

Cons
It follows a bottom-up approach, which makes it very complex for large-scale projects.
If you are a game developer, you've already dabbled with C++ in some form or the other. Given the popularity of C++ among developers, it goes without saying that if you choose C++, it can definitely kickstart your AI development process to build smarter, more interactive games.

3. Java

Java is a close contender to C++. From Machine Learning to Natural Language Processing, Java comes with a plethora of libraries for all aspects of Artificial Intelligence development. Java has all the infrastructure that you need to create your next big AI project. Some popular Java libraries and frameworks are Deeplearning4j, Weka, and Java-ML.

Pros
Java follows the Write Once, Run Anywhere (WORA) principle. It is a time-efficient language, as it can be run on any platform without recompilation every time, thanks to Virtual Machine technology.
Java works well for search algorithms, neural networks, and NLP.
It is a multi-paradigm language, i.e. it supports object-oriented, procedure-oriented and functional styles.
It is easy to debug.

Cons
As mentioned, Java has a complex and verbose code structure, which can be a bit time-consuming as it increases development time.

If you are into development of software, web, mobile or anywhere in between, you've worked with Java at some point; probably you still are. Most commercial apps have Java baked into them. The familiarity and robustness that Java has to offer is a good reason to pick Java when working with AI development. This is especially relevant if you want to enter well-established domains, like banking, that are historically built on top of Java-based systems.

4. Scala

Just like Java, Scala belongs to the JVM family. Scala is a fairly new language in the AI space, but it's finding quite a bit of recognition recently in many corporations and startups. It has a lot to offer in terms of convenience, which is why developers enjoy working with it. Also, tools and libraries like ScalaNLP and Deeplearning4j make the AI development process a bit easier with Scala. Let's have a look at the features that make it a good choice for AI development.

Pros
It's good for projects that need scalability.
It combines the strengths of the functional and imperative programming models to act as a powerful tool, helping build highly concurrent applications while reaping the benefits of an OO approach at the same time.
It provides good concurrency support, which helps with projects involving real-time parallelized analytics.
Scala has a good open source community when it comes to statistical learning, information theory and Artificial Intelligence in general.

Cons
Scala falls short when it comes to machine learning libraries.
Scala has concepts such as implicits and type classes, which might not be familiar to programmers coming from the object-oriented world.
The learning curve in Scala is steep.

Even though Scala lacks machine learning libraries, its scalability and concurrency support make it a good option for AI development. With more companies such as IBM and Lightbend collaborating to use Scala for building AI applications, it's no secret that Scala's use in AI development is in demand now and will continue to be in the future.

5. R

R is a language that's been catching up in the AI race recently. Primarily used for academic research, R was written by statisticians, and it provides basic data management that makes analysis tasks really easy.
It’s not as pricey as statistical software namely Matlab or SAS, which makes it a great substitute for this software and a golden child of data science. Pros R comes with plenty packages that help boost its performance. There are packages available for pre-modeling, modeling and post modeling stages in data analysis. R is very efficient in tasks such as continuous regression, model validation, and data visualization. R being a statistical language offers very robust statistical model packages for data analysis such as caret, ggplot, dplyr, lattice, etc which can help boost the AI development process. Major tasks can be done with little code developed in an interactive environment which makes it easy for the developers to try out new ideas and verify them with varied graphics functions that come with R. Cons R’s major drawback is its inconsistency due to third-party algorithms. Development speed is quite slow when it comes to R as you have to learn new ways for data modeling. You also have to make predictions every time when using a new algorithm. R is one of those skills that’s mainly demanded by recruiters in data science and machine learning. Overall, R is a very clever language. It is freely available, runs on server as well as common hardware. R can help amp up your AI development process to a great extent. Other languages worth mentioning There are three other languages that deserve a mention in this article: Go, Lisp and Prolog. Let’s have a look at what makes these a good choice for AI development. Go Go has been receiving a lot of attention recently. There might not be as many projects available in AI development using Go as for now but the language is on its path to continuous growth these days. For instance, AlphaGo, is a first computer program in Go that was able to defeat the world champion human Go player, proves how powerful the language is in terms of features that it can offer. Pros You don’t have to call out to libraries, you can make use of Go’s existing machine learning libraries. It doesn’t consist of classes. It only consists of packages which make the code cleaner and clear. It doesn’t support inheritance which makes it easy to modify the code in Go. Cons There aren’t many solid libraries for core AI development tasks. With Go, it is possible to pull off core ML and some reinforcement learning tasks as well, despite the lack of libraries. But given other versatile features of Go, the future looks bright for this language with it finding more applications in AI development. Lisp Lisp is one of the oldest languages for AI development and as such gets an honorary mention. It is a very popular language in AI academic research and is equally effective in the AI development process as well. However, it is not such a usual choice among the developers of recent times. Also, most modern libraries in machine learning, deep learning, and AI are written in popular languages such as C++, Python, etc. But I wouldn’t write off Lisp yet. It still has an immense capacity to build some really innovative AI projects, if take the time to learn it. Pros Its flexible and extendable nature enables fast prototyping, thereby, providing developers with the needed freedom to quickly test out ideas and theories. Since it was custom built for AI, its symbolic information processing capability is above par. It is suitable for machine learning and inductive learning based projects. Recompilation of functions alongside the running program is possible which saves time. 
Cons
Since it is an old language, not a lot of developers are well-versed in it. Also, new software and hardware have to be configured to accommodate Lisp.

Given the vintage nature of Lisp in the AI world, it is quite interesting to see how things work in Lisp for AI development. The most famous example of a Lisp-based AI project is DART (Dynamic Analysis and Replanning Tool), used by the U.S. military.

Prolog

Finally, we have Prolog, another old language primarily associated with AI development and symbolic computation.

Pros
It is a declarative language where everything is dictated by rules and facts.
It supports mechanisms such as tree-based data structuring, automatic backtracking, nondeterminism and pattern matching, which are helpful for AI development and make it quite a powerful language in this space.
Its varied features are quite helpful in creating AI projects for different fields such as medicine, voice control, and networking.
It is flexible in nature and is used extensively for theorem proving, natural language processing, non-numerical programming, and AI in general.

Cons
There is a high level of difficulty when it comes to learning Prolog as compared to other languages.

On the other hand, implementations of symbolic computation in other languages can take tens of pages of indigestible code, while the same algorithms implemented in Prolog result in a clear and concise program that easily fits on one page.

So those are the top programming languages for Artificial Intelligence development. Choosing the right language ultimately depends on the nature of your project. If you want an easy-to-learn language, go for Python, but if you are working on a project where speed and performance are most critical, pick C++. If you are a creature of habit, Java is a good choice. If you are a thrill-seeker who wants to learn a new and different language, choose Scala, R or Go, and if you are feeling particularly adventurous, explore the quaint old worlds of Lisp or Prolog.

Why is Python so good for AI and Machine Learning? 5 Python Experts Explain
Top 6 Java Machine Learning/Deep Learning frameworks you can't miss
15 Useful Python Libraries to make your Data Science tasks Easier

5 ways Machine Learning is transforming digital marketing

Amey Varangaonkar
04 Jun 2018
7 min read
The enterprise interest in Artificial Intelligence is surging. In an era of cut-throat competition, where it's either do or die, businesses have realized the transformative value of AI in gaining an upper hand over their rivals. Given its direct contribution to business revenue, it comes as no surprise that marketing has become one of the major application areas of machine learning. Per Capgemini:

84% of marketing organizations are implementing Artificial Intelligence in 2018, in some capacity.
3 out of 4 organizations implementing AI techniques have managed to increase the sales of their products and services by 10% or more.

In this article, we look at 5 innovative ways in which machine learning is being used to enhance digital marketing.

Efficient lead generation and customer acquisition

One of the major keys to driving business revenue is getting more customers on board who will buy your products or services repeatedly. Machine learning comes in handy for identifying potential leads and converting those leads into customers. With the help of pattern recognition techniques, it is possible to understand a particular lead's behavioral and purchase trends. Through predictive analytics, it is then possible to predict whether a particular lead will buy the product or not. That lead can then be put into the marketing sales funnel for targeted marketing campaigns, which may ultimately result in a purchase.

A cautionary note here: with the GDPR (General Data Protection Regulation) in place across the EU (European Union), there are restrictions on the manner in which AI algorithms can be used to make automated decisions based on consumer data. This makes it imperative for businesses to strictly follow the regulation and operate within its purview, or they could face heavy penalties. As long as businesses respect privacy and follow basic human decency, such as asking for permission to use a person's data or informing them about how their data will be used, marketers can reap the benefits of data-driven marketing like never before. It all boils down to applying common sense while handling personal data, as one GDPR expert put it. But we all know how uncommon that sense is!

Customer churn prediction is now possible

'Customer churn rate' is a popular marketing term referring to the number of customers who opt out of a particular service offered by a company over a given time period. The churn time is calculated based on the customer's last interaction with the service or website. It is crucial to track the churn rate, as it is a clear indicator of the progress - or the lack of it - that a business is making. Predicting the customer churn rate is difficult - especially for e-commerce businesses selling a product - but it is not impossible, thanks to machine learning. By understanding historical data and a user's past website usage patterns, machine learning techniques can help a business identify the customers who are most likely to churn soon, and when that is expected to happen. Appropriate measures can then be taken to retain such customers - special offers and discounts, timely follow-up emails, and so on - without any human intervention. American entertainment giant Netflix makes perfect use of churn prediction to keep its churn rate at just 9%, lower than that of any other subscription streaming service out there today. Not just that, it also manages to market its services to drive more customer subscriptions.
Dynamic pricing made easy

In today's competitive world, products need to be priced optimally. It has become imperative that companies define extremely competitive and relevant pricing for their products, or else customers might not buy them. On top of this, there are fluctuations in the demand and supply of the product, which can affect its pricing strategy. With the use of machine learning algorithms, it is now possible to forecast price elasticity by considering various factors, such as the channel on which the product is sold, the sales period, the product's positioning strategy, and customer demand.

For example, e-commerce giants Amazon and eBay tweak their product prices on a daily basis. Their pricing algorithms take into account factors such as the product's popularity among customers, the maximum discount that can be offered, and how often the customer has purchased from the website. This strategy of dynamic pricing is now being adopted by almost all the big retail companies, even in their physical stores. There is specialized software available that leverages machine learning techniques to set dynamic prices for products. Competera is one such pricing platform, which transforms retail through ongoing, timely, and error-free pricing for category revenue growth and improvements in customer loyalty tiers. To know more about how dynamic pricing actually works, check out this Competitoor article.

Customer segmentation and radical personalization

Every individual is different, with unique preferences, likes and dislikes. With machine learning, marketers can segment users into different buyer groups based on a variety of factors, such as their product preferences, social media activities, their Google search history and much more. For instance, there are machine learning techniques that can segment users based on who loves to blog about food, who loves to travel, or even which show they are most likely to watch on Netflix! The website can then recommend or market products to these customers accordingly. Affinio is one such platform used for segmenting customers based on their interests.

Content and campaign personalization is another widely recognized use case of machine learning for marketing. Machine learning algorithms are used to build recommendation systems that take into consideration the user's online behavior and website usage to analyse and recommend products that he/she is likely to buy. A prime example of this is Google's remarketing strategy, which tries to reconnect with customers who leave the website without buying anything, by showing them relevant ads across different devices. The best part about recommendation systems is that they are able to recommend two completely different products to two customers with different usage patterns. Incorporating them within a website has turned out to be a valuable strategy for increasing customer loyalty and overall lifetime value.

Improving customer experience

Gone are the days when a customer who visited a website had to use the 'Contact Me' form in case of any query, and an executive would get back with the answer. These days, chatbots are integrated into almost every e-commerce website to answer ad-hoc customer queries, and even to suggest products that fit a customer's criteria.
There are live-chat features included in these chatbots as well, which allow customers to interact with the chatbot and understand a product's features before they buy. For example, IBM Watson has a really cool feature called the Tone Analyzer. It parses the feedback given by the customer and identifies the tone of the feedback - whether it's angry, resentful, disappointed, or happy. It is then possible to take appropriate measures to ensure that a disgruntled customer is satisfied, or to appreciate a customer's positive feedback - whatever the case may be.

Marketing will only get better with machine learning

Highly accurate machine learning algorithms, better processing capabilities and cloud-based solutions are now making it possible for companies to get the most out of AI for their marketing needs. Many companies have already adopted machine learning to boost their marketing strategy, with major players such as Google and Facebook leading the way. Safe to say, many more companies - especially small and medium-sized businesses - are expected to follow suit in the near future.

Read more
How machine learning as a service is transforming cloud
Microsoft Open Sources ML.NET, a cross-platform machine learning framework
Active Learning: An approach to training machine learning models efficiently