How-To Tutorials - Artificial Intelligence

84 Articles

2018 new year resolutions to thrive in the Algorithmic World - Part 2 of 3

Savia Lobo
04 Jan 2018
7 min read
In our first resolution, we talked about learning the building blocks of data science, i.e. developing your technical skills. In this second resolution, we walk you through steps to stay relevant in your field and to dodge jobs that have a high possibility of getting automated in the near future.

2nd Resolution: Stay relevant in your field even as job automation is on the rise (Time investment: half an hour every day, 2 hours on weekends)

Once you have got your fundamentals right, it is important to stay relevant through continuous learning and reskilling. In addition to honing your technical skills, you must also deepen your domain expertise and keep adding to your portfolio of soft skills, not just to stay ahead of the human competition but also to thrive in an automated job market. We list below some simple ways to do all of this in a systematic manner. All it requires is a commitment of half an hour to one hour of your time daily for your professional development.

1. Commit to and execute a daily learning-practice-participation ritual

Here are some ways to stay relevant. Follow data science blogs and podcasts relevant to your area of interest. Some of our favorites: Data Science 101, on the journey of a data scientist; The Data Skeptic, for a healthy dose of scientific skepticism; Data Stories, for data visualization; This Week in Machine Learning & AI, for informative discussions with prominent people in the data science and machine learning community; and Linear Digressions, a podcast co-hosted by a data scientist and a software engineer attempting to make data science accessible. You could also follow individual bloggers and vloggers in this space, such as Siraj Raval, Sebastian Raschka, Denny Britz, Rodney Brooks, Corinna Cortes, and Erin LeDell.

Newsletters are a great way to stay up to date and to get a macro-level perspective without having to spend an awful lot of time researching many different subtopics yourself. So subscribe to useful newsletters on data science; you can subscribe to our newsletter here. It is a good idea to subscribe to multiple newsletters on your topic of interest to get a balanced and comprehensive view. Try to choose newsletters that have distinct perspectives, are published regularly, and are written by people passionate about the topic.

Twitter gives a whole new meaning to 'breaking news'. It is also a great place to follow contemporary discussions on topics of interest, where participation is open to all. When done right, it can be a gold mine for insights and learning, though it can feel overwhelming when treated purely as a broadcast marketing tool. Follow your role models in data science on Twitter, or follow us @PacktDataHub for curated content from key data science influencers and our own updates about the world of data science. You could also click here to keep track of the 737 Twitter accounts most followed by members of the NIPS 2017 community.

Quora, Reddit, Medium, and Stack Overflow are great places to learn about topics in depth when you have a specific question in mind or a narrow focus area, and they help you get multiple informed opinions. In other words, when you choose a topic worth learning, these are great places to start. Follow them up by reading books on the topic and the seminal papers to gain a robust technical appreciation. Finally, create a GitHub account and participate in Kaggle competitions: nothing sticks as well as learning by doing.
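To make the "learning by doing" point concrete, here is a minimal, hypothetical sketch of a first Kaggle-style baseline. It assumes a train.csv with numeric features and a binary "target" column, which you would replace with the actual competition data.

```python
# A minimal "learning by doing" workflow: load a Kaggle-style CSV,
# build a quick baseline model, and check how well it does.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical competition data: numeric features plus a binary "target" column.
df = pd.read_csv("train.csv")
X = df.drop(columns=["target"]).select_dtypes("number").fillna(0)
y = df["target"]

X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=42
)

baseline = LogisticRegression(max_iter=1000)
baseline.fit(X_train, y_train)
print("Validation accuracy:", accuracy_score(y_valid, baseline.predict(X_valid)))
```

Iterating on a crude baseline like this, one feature or idea at a time, usually teaches more than reading alone.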
You can also browse Data Helpers, a site voluntarily set up by Angela Bassa, where data science practitioners offer to help newcomers with their questions about entering the field and anything else.

2. Identify your strengths and interests to realign your career trajectory

OK, now that you have got your daily learning routine in place, it is time to think a little more strategically about your career trajectory, your goals, and eventually the kind of work you want to be doing. This means getting out of jobs that can be automated, developing skills that augment or complement AI-driven tasks, and finding your niche and developing deep domain expertise that AI will find hard to automate in the near future. Here are some ideas to start thinking about the above.

The first step is to assess your current job role and understand how likely it is to be automated. If you are in a job that has well-defined routines and rules to follow, it is quite likely to go the AI job-apocalypse route, e.g. data entry, customer support that follows scripts, invoice processing, or template-based software testing and development. Even 'creative' jobs such as content summarization, news aggregation, and template-based photo or video editing fall in this category. In the world of data professionals, tasks like data cleaning, database optimization, feature generation, and even model building (gasp!) could head the same way given the right incentives. Choose today to transition out of jobs that may not exist in the next 10 years.

Then, instead of hitting the panic button, invest in redefining your skills in a way that will be helpful in the long run. If you are a data professional, skills such as data interpretation, data-driven storytelling, data pipeline architecture and engineering, feature engineering, and others that require a high level of human judgment are least likely to be replicated by machines anytime soon. By mastering skills that complement AI-driven tasks and jobs, you should be able to present yourself as an attractive option to potential employers in a highly competitive job market.

In addition to reskilling, try to find your niche and dive deep. By niche we mean, if you are a data scientist, a specific technical aspect of your field that interests you. It could be anything from computer vision to NLP, a class of algorithms such as neural nets, a type of problem that machine learning solves such as recommender or classification systems, or even a specific phase of a data science project such as data visualization or data pipeline engineering. Master your niche while keeping up with what's happening in other related areas.

Next, understand where your strengths lie: in other words, what your expertise is, and which industry or domain you understand well or have amassed experience in. For instance, NLP, a subset of machine learning, can be applied to customer reviews to mine useful insights, perform sentiment analysis, and build recommendation systems in conjunction with predictive modeling, among other things. In order to build an NLP model that mines insights from customer feedback, we must have some idea of what we are looking for, and your domain expertise can be of great value here.
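As a toy illustration of the reviews-to-sentiment idea above, here is a minimal sketch using a simple bag-of-words classifier. The review texts and labels are invented; a real project would start from a labelled corpus chosen with exactly the kind of domain knowledge discussed next.

```python
# Toy illustration: mining sentiment from customer reviews with a simple
# TF-IDF + logistic regression pipeline. Real projects need a labelled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "Loved the clear explanations and worked examples",
    "Shipping was slow and the cover arrived damaged",
    "Great value for money, would recommend",
    "Too many typos, hard to follow",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

# Domain expertise decides which keywords and aspects actually matter here.
print(model.predict(["The examples were great but delivery was slow"]))
```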
If you are in the publishing business, you would know what keywords matter most in reviews and, more importantly, why they matter and how to convert the findings into actionable insights - aspects that your model, or even a machine learning engineer outside your industry, may not understand or appreciate. Take the case of Brendan Frey and the team of researchers at Deep Genomics as a real-world example. They applied AI and machine learning (their niche expertise) to build a neural network that identifies pathological mutations in genes (their domain expertise). Their knowledge of how genes are created, how they work, what a mutation looks like, and so on helped them choose the features and hyperparameters for their model. Similarly, you can pick any of your niche skills and apply them in whichever field you find interesting and worthwhile. Based on your domain knowledge and area of expertise, that could range from sorting a person into a Hogwarts house because you are a Harry Potter fan, to identifying patients with a high likelihood of developing diabetes because you have a background in biotechnology. This brings us to the next resolution, where we cover how your work will come to define you and why it matters that you choose your projects well.

2018 new year resolutions to thrive in an Algorithmic World - Part 1 of 3

Sugandha Lahoti
03 Jan 2018
6 min read
We often think of data science and machine learning as skills essential to a niche group of researchers, data scientists, and developers. But the world as we know it today revolves around data and algorithms, just as it used to revolve around programming a decade ago. As data science and algorithms get integrated into all aspects of businesses across industries, data science, like Microsoft Excel, will become ubiquitous and will serve as a handy tool that makes you better at your job, no matter what your job is. Knowing data science is key to having a bright career in this algoconomy (algorithm-driven economy). If you are big on new year resolutions, make yourself a promise to carve your place in the algorithm-powered world by becoming data science savvy. Follow these three resolutions to set yourself up for a bright data-driven career.

Get the foundations right: Start with the building blocks of data science, i.e. developing your technical skills. Stay relevant: Keep yourself updated on the latest developments in your field and periodically invest in reskilling and upskilling. Be mindful of your impact: Finally, always remember that your work has real-world implications. Choose your projects wisely, and your project goals, hypotheses, and contributors with even more care.

In this three-part series, we expand on how data professionals could go about achieving these three resolutions. But the principles behind the ideas are easily transferable to anyone in any job. Think of them as algorithms that can help you achieve your desired professional outcome! You simply need to engineer the features and fine-tune the hyperparameters specific to your industry and job role.

1st Resolution: Learn the building blocks of data science

If you are interested in starting a career in data science, or in one that involves data, here is a simple learning roadmap for you to develop your technical skills. Start off by learning a data-friendly programming language, one that you find easy and interesting. Next, brush up your statistics skills - nothing fancy, your high school math and stats will do nicely. Then learn about algorithms: what they do, what questions they answer, how many types there are, and how to write one. Finally, you can put all that learning into practice by building models on top of your choice of machine learning framework. Now let's see how you can accomplish each of these tasks.

1. Learn Python or another popular data-friendly programming language you find interesting (Learning period: 1 week - 2 months)

If you see yourself as a data scientist in the near future, knowing a programming language is one of the first things to check off your list. We suggest you learn a data-friendly programming language like Python or R. Python is a popular choice because of its strong, fast, and easy computational capabilities for the data science workflow. Moreover, because of its large and active community, the likelihood of finding someone in your team or your organization who knows Python is quite high, which is an added advantage. "Python has become the most popular programming language for data science because it allows us to forget about the tedious parts of programming and offers us an environment where we can quickly jot down our ideas and put concepts directly into action." - Sebastian Raschka. We suggest learning the basics from the book Learn Python in 7 Days by Mohit and Bhaskar N. Das.
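As a small taste of what "data-friendly" means in practice, here is a hypothetical snippet that loads, summarizes, and aggregates a tiny dataset in a few lines; the sales figures are invented purely for illustration.

```python
# A small taste of why Python is considered data-friendly: a few lines of
# pandas/NumPy are enough to summarize and slice a dataset.
import numpy as np
import pandas as pd

# Hypothetical sales data; replace with any CSV you have lying around.
sales = pd.DataFrame({
    "region": ["north", "south", "north", "east"],
    "revenue": [1200.0, 950.0, 1430.0, 780.0],
})

print(sales.describe())                          # count, mean, std, quartiles
print(sales.groupby("region")["revenue"].sum())  # revenue per region
print("Std deviation:", np.std(sales["revenue"], ddof=1))
```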
Then you can move on to learning Python specifically for data science with Python Data Science Essentials by Alberto Boschetti. Additionally, you can learn R, which is a highly useful language when it comes to statistics and data. For learning R, we recommend R Data Science Essentials by Raja B. Koushik. You can learn more about how Python and R stand against each other in the data science domain here. Although R and Python are the most popular choices for new developers and aspiring data scientists, you can also use Java for data science, if that is your cup of tea. Scala is another alternative.

2. Brush up on statistics (Learning period: 1 week - 3 weeks)

While you are training your programming muscle, we recommend that you brush up on basic mathematics (probability and statistics). Remember, you already know enough to get started with data science from your high school days; you just need to refresh your memory with a little practice. A good place to start is understanding concepts like standard deviation, probability, mean, mode, variance, and kurtosis, among others. Your normal high-school books should be enough to get started; however, a more in-depth understanding is required to leverage the full power of data science. We recommend the book Statistics for Data Science by James D. Miller for this.

3. Learn what machine learning algorithms do and which ones to learn (Learning period: 1 month - 3 months)

Machine learning is a powerful tool for making predictions based on huge amounts of data. According to a recent study, ML algorithms are expected to replace a quarter of jobs across the world in the next ten years, in fields like transport, manufacturing, architecture, healthcare, and many others. So the next step in your data science journey is learning about machine learning algorithms. New algorithms pop up almost every day; we've collated a list of the top ten algorithms you should learn to effectively design reliable and robust ML systems. But fear not, you don't need to know all of them to get started. Start with some basic algorithms that are widely used in real-world applications, like linear regression, naive Bayes, and decision trees.

4. Learn TensorFlow, Keras, or another popular machine learning framework (Learning period: 1 month - 3 months)

After you have familiarized yourself with some of the machine learning algorithms, it is time to put that learning into practice by building models based on those algorithms. While there are many cloud-based machine learning options with click-based model-building features, the best way to learn a skill is to get your hands dirty. There is a growing range of frameworks that make it easy to build complex models while allowing for high degrees of customization, and here is a list of the top 10 deep learning frameworks at your disposal to choose from. Our favorite pick is TensorFlow: it is Python-based, backed by Google, has very good documentation, and there are tons of tutorials and videos available on the internet to guide you. You can find a comprehensive list of books for learning TensorFlow here. We also recommend learning Keras, which is a good option if you have some knowledge of Python programming and want to get started with deep learning. Try the book Deep Learning with Keras, by Antonio Gulli and Sujit Pal, to get you started. If you find learning from multiple sources daunting, just learn from Sebastian Raschka's Python Machine Learning book.
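To show what the "build a model on a framework" step can look like, here is a minimal Keras sketch trained on synthetic data. The layer sizes, epoch count, and the made-up target are arbitrary choices for illustration, not a recommended architecture.

```python
# A minimal Keras sketch of the "build a model on a framework" step,
# trained on synthetic data so it runs without any downloads.
import numpy as np
from tensorflow import keras

X = np.random.rand(500, 10)                  # 500 samples, 10 features
y = (X.sum(axis=1) > 5).astype("float32")    # a made-up binary target

model = keras.Sequential([
    keras.layers.Input(shape=(10,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print(model.evaluate(X, y, verbose=0))       # [loss, accuracy]
```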
Once you have got your fundamentals right, it is important to stay relevant through continuous learning and reskilling. Check out part 2, where we explore how you can go about doing this in a systematic and time-efficient manner. In part 3, we look at ways you can own your work and become aware of its outcomes.

2017 Generative Adversarial Networks (GANs) Research Milestones

Savia Lobo
30 Dec 2017
9 min read
Generative Adversarial Networks (GANs), introduced by Ian Goodfellow, are the next big revolution in the field of deep learning. Why? Because of their ability to perform semi-supervised learning, where the vast majority of the data is unlabelled. GANs can efficiently carry out image generation as well as tasks such as converting sketches to images, converting satellite images to maps, and many others. They are capable of generating realistic images in varied circumstances; for instance, given some text written in a particular handwriting as input, a generative model can produce more text in a similar handwriting. What sets generative models apart is that, unlike discriminative models, they model the joint probability distribution of the data, which lets them generate new, likely samples. In short, generative models such as GANs go a step beyond discriminative models. Let's explore some of the research papers that are contributing to further advancements in GANs.

CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

This paper introduces CycleGANs, a class of Generative Adversarial Networks that carry out image-to-image translation: capturing special characteristics of one image collection and figuring out how these characteristics could be translated into another image collection, all in the absence of any paired training examples. The CycleGAN method can be applied to a variety of tasks such as collection style transfer, object transfiguration, season transfer, and photo enhancement.

CycleGAN architecture (Source: GitHub)

CycleGANs build on the advantages of the pix2pix architecture. The key advantage of the CycleGAN model is that it lets you point the model at two discrete, unpaired collections of images. For example, one image collection, say group A, would consist of photos of landscapes in summer, whereas group B would include photos of landscapes in winter. The CycleGAN model can learn to translate images between these two aesthetics without the need to merge tightly correlated matches into a single X/Y training image.

(Source: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks)

The way CycleGANs are able to learn such good translations without explicit X/Y training images is by introducing the idea of a full translation cycle to determine how good the entire translation system is, thus improving both generators at the same time.

(Source: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks)

Currently, the applications of CycleGANs can be seen in image-to-image and video translation; for example, in animal transfiguration, turning portrait faces into doll faces, and so on. Further ahead, implementations in audio, text, and other modalities could potentially help us generate new data for training.
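A rough sketch of the cycle-consistency idea mentioned above is shown below. The two "generators" are dummy functions standing in for real networks, so the code only illustrates how the round-trip reconstruction error is measured.

```python
# A rough sketch (NumPy, dummy "generators") of cycle consistency:
# translate A -> B -> back to A and penalize how far we drift from the original.
import numpy as np

def G_ab(x):   # stand-in for the generator that maps domain A to domain B
    return x + 0.1

def G_ba(x):   # stand-in for the generator that maps domain B back to A
    return x - 0.08

real_a = np.random.rand(4, 64, 64, 3)        # a tiny batch of "summer" images
reconstructed_a = G_ba(G_ab(real_a))         # A -> B -> A round trip

# L1 cycle-consistency loss: small only if the round trip preserves the image.
cycle_loss = np.mean(np.abs(reconstructed_a - real_a))
print("cycle loss:", cycle_loss)
```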
Although the CycleGAN method has compelling results, it also has some limitations. Geometric changes within an image are not fully successful (for instance, the cat-to-dog transformation showed only minor success). This could be caused by the generator architecture choices, which are tailored for good performance on appearance changes; handling more varied and extreme transformations, especially geometric ones, remains an important problem. Failures can also be caused by the distribution characteristics of the training datasets. For instance, in the horse-to-zebra transfiguration, the model got confused because it was trained on the wild horse and zebra synsets of ImageNet, which do not contain images of a person riding a horse or zebra. These and other limitations are described in the research paper. To read more about CycleGANs in detail, visit the link here.

Wasserstein GAN

This paper introduces Wasserstein GANs (WGANs) and shows how they overcome drawbacks of the original GAN. Although GANs have shown dramatic success in realistic image generation, training them is not easy: the process is slow and unstable. The WGAN paper shows empirically that WGANs cure much of this training problem. The Wasserstein distance, also known as the Earth Mover's (EM) distance, is a measure of distance between two probability distributions. The basic idea in WGAN is to replace the loss function so that there always exists a non-zero gradient, which can be done by using the Wasserstein distance between the generator distribution and the data distribution. Training a WGAN does not require carefully balancing the training of the discriminator and the generator, nor does it require a particular design of the network architecture. One of the most fascinating practical benefits of WGANs is the ability to continuously estimate the EM distance by training the discriminator to an optimal level. The resulting learning curves are useful for debugging and hyperparameter searches, and they correlate well with the observed sample quality and the improved stability of the optimization process. Thus, Wasserstein GANs are an alternative to traditional GAN training with features such as improved stability of learning, elimination of problems like mode collapse, and meaningful learning curves for debugging and hyperparameter searches. Furthermore, the paper shows that the corresponding optimization problem is sound, and it provides extensive theoretical work highlighting the deep connections to other distances between distributions.

The Wasserstein GAN has also been used to train language translation systems where no parallel data exists between the word embeddings of the two languages; for example, WGANs have been used to learn English-Russian and English-Chinese language mappings.

Limitations of WGANs: they can suffer from unstable training when a momentum-based optimizer or a high learning rate is used; they can converge slowly after weight clipping, especially when the clipping window is too large; and they suffer from the vanishing gradient problem when the clipping window is too small. To gain a detailed understanding of WGANs, have a look at the research paper here.
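Below is a bare-bones sketch of the WGAN training signal described above: the critic score difference that replaces the usual GAN loss, plus the weight clipping used in the original paper. The scores are random placeholders rather than outputs of a real critic network.

```python
# A bare-bones sketch of the WGAN training signal: the critic maximizes
# mean(D(real)) - mean(D(fake)), and its weights are clipped to a small range.
import numpy as np

def critic_loss(d_real, d_fake):
    # The critic wants this difference to be large, so we minimize its negative.
    return -(np.mean(d_real) - np.mean(d_fake))

def generator_loss(d_fake):
    # The generator tries to make the critic score its samples highly.
    return -np.mean(d_fake)

def clip_weights(weights, c=0.01):
    # Weight clipping keeps the critic (roughly) Lipschitz, as in the paper.
    return [np.clip(w, -c, c) for w in weights]

d_real = np.random.randn(32)   # placeholder critic scores on real images
d_fake = np.random.randn(32)   # placeholder critic scores on generated images
print(critic_loss(d_real, d_fake), generator_loss(d_fake))
```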
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

This paper describes InfoGAN, an information-theoretic extension of the GAN that can learn disentangled representations in a completely unsupervised manner. In a traditional GAN, the learned representation is entangled, i.e. encoded in a complex manner within the data space; if the representation is disentangled instead, it becomes much easier to interpret and to apply to downstream tasks. InfoGAN solves this entanglement problem. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, extracts poses of objects correctly irrespective of the lighting conditions in 3D rendered images, and separates background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hairstyles, the presence or absence of eyeglasses, and emotions on the CelebA face dataset, and it does not require any kind of supervision. The only other unsupervised method that learns disentangled representations is hossRBM, a higher-order extension of the spike-and-slab restricted Boltzmann machine which disentangles emotion from identity on the Toronto Face Dataset. However, hossRBM can only disentangle discrete latent factors, and its computation cost grows exponentially in the number of factors, whereas InfoGAN can disentangle both discrete and continuous latent factors, scales to complicated datasets, and typically requires no more training time than a regular GAN. In the experiments in the paper, InfoGAN is first compared with prior approaches on relatively clean datasets; it is then shown to learn interpretable representations on complex datasets, where no previous unsupervised approach is known to learn representations of comparable quality. Thus, InfoGAN is completely unsupervised, learns interpretable and disentangled representations on challenging datasets, adds only negligible computation cost on top of a GAN, and is easy to train. The core idea of using mutual information to induce representations can also be applied to other methods such as VAEs (Variational AutoEncoders) in the future. Other future possibilities with InfoGAN include learning hierarchical latent representations, improving semi-supervised learning with better codes, and using InfoGAN as a high-dimensional data discovery tool. To know more about this research paper in detail, visit the link given here.

Progressive growing of GANs for improved Quality, Stability, and Variation

This paper describes a brand new method for training Generative Adversarial Networks. The basic idea is to train both the generator and the discriminator progressively: starting from a low resolution and adding new layers so that the model produces images with progressively finer details as training proceeds. Such a method speeds up training and also stabilizes it to a great extent, which in turn produces images of unprecedented quality - for instance, a higher-quality version of the CelebA image dataset with output resolutions of up to 1024x1024 pixels.

(Source: https://arxiv.org/pdf/1710.10196.pdf)

When new layers are added to the networks, they fade in smoothly. This helps to avoid sudden shocks to the already well-trained, smaller-resolution layers. Progressive training also has various other benefits: the generation of smaller images is substantially more stable because there is less class information and fewer modes, and by increasing the resolution little by little we are continuously asking a much simpler question compared to the end goal of discovering a mapping from latent vectors to, say, 1024x1024 images. Progressive growing of GANs also reduces training time: most of the iterations are done at lower resolutions, and results of comparable quality are obtained up to 2-6 times faster, depending on the resolution of the final output.
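The layer fade-in described above can be summarized in a few lines. The sketch below uses NumPy arrays in place of real network outputs and a fixed alpha, purely to illustrate how the old and new resolution paths are blended during a transition.

```python
# A small sketch of the "fade in new layers smoothly" idea: during a
# transition phase, the output blends the old low-resolution path (upsampled)
# with the new high-resolution path, controlled by alpha.
import numpy as np

def upsample(x):
    # Nearest-neighbour 2x upsampling of an (H, W, C) image.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def faded_output(old_path_img, new_path_img, alpha):
    # alpha grows from 0 to 1 over the transition, handing control to the new layer.
    return (1.0 - alpha) * upsample(old_path_img) + alpha * new_path_img

old_img = np.random.rand(16, 16, 3)      # output of the previously trained stage
new_img = np.random.rand(32, 32, 3)      # output of the freshly added layer
print(faded_output(old_img, new_img, alpha=0.3).shape)   # (32, 32, 3)
```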
Thus, progressively training GANs results in better quality, stability, and variation in the generated images, and it may eventually lead to true photorealism in the near future. The paper concludes that the method still has certain limitations, including semantic sensibility and understanding dataset-dependent constraints (such as certain objects being straight rather than curved). This leaves a lot to be desired from GANs, and there is also room for improvement in the micro-structure of the images. To have a thorough understanding of this research, read the paper here.

25 Startups using machine learning differently in 2018: From farming to brewing beer to elder care

Fatema Patrawala
29 Dec 2017
14 min read
What really excites me about data science and by extension machine learning is the sheer number of possibilities! You can think of so many applications off the top of your head: robo-advisors, computerized lawyers, digital medicine, even automating VC decisions when they invest in startups. You can even venture into automation of art and music, algorithms writing papers which are indistinguishable from human-written papers. It's like solving a puzzle, but a puzzle that's meaningful and that has real world implications. The things that we can do today weren’t possible 5 years ago, and this is largely thanks to growth in computational power, data availability, and the adoption of the cloud that made accessing these resources economical for everyone, all key enabling factors for the advancement of Machine learning and AI. Having witnessed the growth of data science as discipline, industries like finance, health-care, education, media & entertainment, insurance, retail as well as energy has left no stone unturned to harness this opportunity. Data science has the capability to offer even more; and we will see the wide range of applications in the future in places haven’t even been explored. In the years to come, we will increasingly see data powered/AI enabled products and services take on roles traditionally handled by humans as they required innately human qualities to successfully perform. In this article we have covered some use cases of Data Science being used differently and start-ups who have practically implemented it: The Nurturer: For elder care The world is aging rather rapidly. According to the World Health Organization, nearly two billion people across the world are expected to be over 60 years old by 2050, a figure that’s more than triple what it was in 2000. In order to adapt to their increasingly aging population, many countries have raised the retirement age, reducing pension benefits, and have started spending more on elderly care. Research institutions in countries like Japan, home to a large elderly population, are focusing their R&D efforts on robots that can perform tasks like lifting and moving chronically ill patients, many startups are working on automating hospital logistics and bringing in virtual assistance. They also offer AI-based virtual assistants to serve as middlemen between nurses and patients, reducing the need for frequent in-hospital visits. Dr Ben Maruthappu, a practising doctor, has brought a change to the world of geriatric care with an AI based app Cera. It is an on-demand platform to aid the elderly in need. The Cera app firmly puts itself in the category of Uber & Amazon, whereby it connects elderly people in need of care with a caregiver in a matter of few hours. The team behind this innovation also plans to use AI to track patients’ health conditions and reduce the number of emergency patients admitted in hospitals. A social companion technology - Elliq created by Intuition Robotics helps older adults stay active and engaged with a proactive social robot that overcomes the digital divide. AliveCor, a leading FDA-cleared mobile heart solution helps save lives, money, and has brought modern healthcare alive into the 21st century. The Teacher: Personalized education platform for lifelong learning With children increasingly using smartphones and tablets and coding becoming a part of national curricula around the world, technology has become an integral part of classrooms. 
We have already witnessed the rise and impact of education technology especially through a multitude of adaptive learning platforms that allow learners to strengthen their skills and knowledge - CBTs, LMSes, MOOCs and more. And now virtual reality (VR) and artificial intelligence (AI) are gaining traction to provide us with lifelong learning companion that can accompany and support individuals throughout their studies - in and beyond school . An AI based educational platform learns the amount of potential held by each particular student. Based on this data, tailored guidance is provided to fix mistakes and improvise on the weaker areas. A detailed report can be generated by the teachers to help them customise lesson plans to best suit the needs of the student. Take Gruff Davies’ Kwiziq for example. Gruff with his team leverage AI to provide a personalised learning experience for students based on their individual needs. Students registered on the platform get an advantage of an AI based language coach which asks them to solve various micro quizzes. Quiz solutions provided by students are then turned into detailed “brain maps”.  These brain maps are further used to provide tailored instructions and feedback for improvement. Other startup firms like Blippar specialize in Augmented reality for visual and experiential learning. Unelma Platforms, a software platform development company provides state-of-the-art software for higher-education, healthcare and business markets. The Provider: Farming to be more productive, sustainable and advanced Though farming is considered the backbone of many national economies especially in the developing world, there is often an outdated view of it involving a small, family-owned lands where crops are hand harvested. The reality of modern-day farms have had to overhaul operations to meet demand and remain competitively priced while adapting to the ever-changing ways technology is infiltrating all parts of life. Climate change is a serious environmental threat farmers must deal with every season: Strong storms and severe droughts have made farming even more challenging. Additionally lack of agricultural input, water scarcity, over-chemicalization in fertilizers, water & soil pollution or shortage of storage systems has made survival for farmers all the more difficult.   To overcome these challenges, smart farming techniques are the need of an hour for farmers in order to manage resources and sustain in the market. For instance, in a paper published by arXiv, the team explains how they used a technique known as transfer learning to teach the AI how to recognize crop diseases and pest damage.They utilized TensorFlow, to build and train a neural network of their own, which involved showing the AI 2,756 images of cassava leaves from plants in Tanzania. Their efforts were a success, as the AI was able to correctly identify brown leaf spot disease with 98 percent accuracy. WeFarm, SaaS based agritech firm, headquartered in London, aims to bridge the connectivity gap amongst the farmer community. It allows them to send queries related to farming via text message which is then shared online into several languages. The farmer then receives a crowdsourced response from other farmers around the world. In this way, a particular farmer in Kenya can get a solution from someone sitting in Uganda, without having to leave his farm, spend additional money or without accessing the internet. Benson Hill Bio-systems, by Matthew B. 
Crisp, former President of Agricultural Biotechnology Division, has differentiated itself by bringing the power of Cloud Biology™ to agriculture. It combines cloud computing, big data analytics, and plant biology to inspire innovation in agriculture. At the heart of Benson Hill is CropOS™, a cognitive engine that integrates crop data and analytics with the biological expertise and experience of the Benson Hill scientists. CropOS™ continuously advances and improves with every new dataset, resulting in the strengthening of the system’s predictive power. Firms like Plenty Inc and Bowery Farming Inc are nowhere behind in offering smart farming solutions. Plenty Inc is an agriculture technology company that develops plant sciences for crops to flourish in a pesticide and GMO-free environment. While Bowery Farming uses high-tech approaches such as robotics, LED lighting and data analytics to grow leafy greens indoors. The Saviour: For sustainability and waste management The global energy landscape continues to evolve, sometimes by the nanosecond, sometimes by the day. The sector finds itself pulled to economize and pushed to innovate due to a surge in demand for new power and utilities offerings. Innovations in power-sector technology, such as new storage battery options and smartphone-based thermostat apps, AI enabled sensors etc; are advancing at a pace that has surprised developers and adopters alike. Consumer’s demands for such products have increased. To meet this, industry leaders are integrating those innovations into their operations and infrastructure as rapidly as they can. On the other hand, companies pursuing energy efficiency have two long-standing goals — gaining a competitive advantage and boosting the bottom line — and a relatively new one: environmental sustainability. Realising the importance of such impending situations in the industry, we have startups like SmartTrace offering an innovative cloud-based platform to quickly manage waste at multiple levels. This includes bridging rough data from waste contractors, extrapolating to volume, EWC, finance and Co2 statistics. Data extracted acts as a guide to improve methodology, educate, strengthen oversight and direct improvements to the bottom line, as well as environmental outcomes. One Concern provides damage estimates using artificial intelligence on natural phenomena sciences. Autogrid organizes energy data and employs big data analytics to generate real-time predictions to create actionable data. The Dreamer: For lifestyle and creative product development and design Consumers in our modern world continually make multiple decisions with regard to product choice due to many competing products in the market.Often those choices boil down to whether it provides better value than others either in terms of product quality, price or by aligning with their personal beliefs and values.Lifestyle products and brands operate off ideologies, hoping to attract a relatively high number of people and ultimately becoming a recognized social phenomenon. While ecommerce has leveraged data science to master the price dimension, here are some examples of startups trying to deconstruct the other two dimensions: product development and branding. I wonder if you have ever imagined your beer to be brewed by AI? Well, now you can with IntelligentX. The Intelligent X team claim to have invented the world's first beer brewed by Artificial intelligence. 
They also plan to craft a premium beer using complex machine learning algorithms which can improve itself from the feedback given by its customers. Customers are given to try one of their four bottled conditioned beers, after the trial they are asked by their AI what they think of the beer, via an online feedback messenger. The data then collected is used by an algorithm to brew the next batch. Because their AI is constantly reacting to user feedback, they can brew beer that matches what customers want, more quickly than anyone else can. What this actually means that the company gets more data and customers get a customized fresh beer! In the lifestyle domain, we have Stitch Fix which has brought a personal touch to the online shopping journey. They are no regular other apparel e-commerce company. They have created a perfect formula for blending human expertise with the right amount of Data Science to serve their customers. According to Katrina Lake, Founder, and CEO, "You can look at every product on the planet, but trying to figure out which one is best for you is really the challenge” and that’s where Stitch Fix has come into the picture. The company is disrupting traditional retail by bridging the gap of personalized shopping, that the former could not achieve. To know how StitchFix uses Full Stack Data Science read our detailed article. The Writer: From content creation to curation to promotion In the publishing industry, we have seen a digital revolution coming in too. Echobox are one of the pioneers in building AI for the publishing industry. Antoine Amann, founder of Echobox, wrote in a blog post that they have "developed an AI platform that takes large quantity of variables into account and analyses them in real time to determine optimum post performance". Echobox pride itself to currently work with Facebook and Twitter for optimizing social media content, perform advanced analytics with A/B testing and also curate content for desired CTRs. With global client base like The Le Monde, The Telegraph, The Guardian etc. they have conveniently ripped social media editors. New York-based startup Agolo uses AI to create real-time summaries of information. It initially use to curate Twitter feeds in order to focus on conversations, tweets and hashtags that were most relevant to its user's preferences. Using natural language processing, Agolo scans content, identifies relationships among the information sources, and picks out the most relevant information, all leading to a comprehensive summary of the original piece of information. Other websites like Grammarly, offers AI-powered solutions to help people write, edit and formulate mistake-free content. Textio came up with augmented writing which means every time you wrote something and you would come to know ahead of time exactly who is going to respond. It basically means writing which is supported by outcomes in real time. Automated Insights, Creator of Wordsmith, the natural language generation platform enables you to produce human-sounding narratives from data. The Matchmaker: Connecting people, skills and other entities AI will make networking at B2B events more fun and highly productive for business professionals. Grip, a London based startup, formerly known as Network, rebranded itself in the month of April, 2016. Grip is using AI as a platform to make networking at events more constructive and fruitful. 
It acts as a B2B matchmaking engine that accumulates data from social accounts (LinkedIn, Twitter) and smartly matches the event registration data. Synonymous to Tinder for networking, Grip uses advanced algorithms to recommend the right people and presents them with an easy to use swiping interface feature. It also delivers a detailed report to the event organizer on the success of the event for every user or a social Segment. We are well aware of the data scientist being the sexiest job of the 21st century. JamieAi harnessing this fact connects technical talent with data-oriented jobs organizations of all types and sizes. The start-up firm has combined AI insights and human oversight to reduce hiring costs and eliminate bias.  Also, third party recruitment agencies are removed from the process to boost transparency and efficiency in the path to employment. Another example is Woo.io, a marketplace for matching tech professionals and companies. The Manager: Virtual assistants of a different kind Artificial Intelligence can also predict how much your household appliance will cost on your electricity bill. Verv, a producer of clever home energy assistance provides intelligent information on your household appliances. It helps its customers with a significant reduction on their electricity bills and carbon footprints. The technology uses machine learning algorithms to provide real-time information by learning how much power and money each device is using. Not only this, it can also suggest eco-friendly alternatives, alert homeowners of appliances in use for a longer duration and warn them of any dangerous activity when they aren’t present at home. Other examples include firms like Maana which manages machines and improves operational efficiencies in order to make fast data driven decisions. Gong.io, acts as a sales representative’s assistant to understand sales conversations resulting into actionable insights. ObEN, creates complete virtual identities for consumers and celebrities in the emerging digital world. The Motivator: For personal and business productivity and growth A super cross-functional company Perkbox, came up with an employee engagement platform. Saurav Chopra founder of Perkbox believes teams perform their best when they are happy and engaged! Hence, Perkbox helps companies boost employee motivation and create a more inspirational atmosphere to work. The platform offers gym services, dental discounts and rewards for top achievers in the team to firms in UK. Perkbox offers a wide range of perks, discounts and tools to help organizations retain and motivate their employees. Technologies like AWS and Kubernetes allow to closely knit themselves with their development team. In order to build, scale and support Perkbox application for the growing number of user base. So, these are some use cases where we found startups using data science and machine learning differently. Do you know of others? Please share them in the comments below.

18 striking AI Trends to watch in 2018 - Part 2

Sugandha Lahoti
28 Dec 2017
12 min read
We are back with Part 2 of our analysis of intriguing AI trends in 2018 as promised in our last post.  We covered the first nine trends in part 1 of this two-part prediction series. To refresh your memory, these are the trends we are betting on. Artificial General Intelligence may gain major traction in research. We will turn to AI enabled solution to solve mission-critical problems. Machine Learning adoption in business will see rapid growth. Safety, ethics, and transparency will become an integral part of AI application design conversations. Mainstream adoption of AI on mobile devices Major research on data efficient learning methods AI personal assistants will continue to get smarter Race to conquer the AI optimized hardware market will heat up further We will see closer AI integration into our everyday lives. The cryptocurrency hype will normalize and pave way for AI-powered Blockchain applications. Advancements in AI and Quantum Computing will share a symbiotic relationship Deep learning will continue to play a significant role in AI development progress. AI will be on both sides of the cybersecurity challenge. Augmented reality content will be brought to smartphones. Reinforcement learning will be applied to a large number of real-world situations. Robotics development will be powered by Deep Reinforcement learning and Meta-learning A rise in immersive media experiences enabled by AI. A large number of organizations will use Digital Twin Without further ado, let’s dive straight into why we think these trends are important. 10. Neural AI: Deep learning will continue to play a significant role in AI progress. Talking about AI is incomplete without mentioning Deep learning. 2017 saw a wide variety of deep learning applications emerge in diverse areas from Self-driving cars, to Beating Video Games and Go champions, to Dreaming, to Painting pictures, and making scientific discoveries. The year started with Pytorch posing a real challenge to Tensorflow, especially in research. Tensorflow countered it by releasing dynamic computation graphs in Tensorflow Fold. As deep learning frameworks became more user-friendly and accessible, and the barriers for programmers and researchers to use deep learning lowered, it increased developer acceptance. This trend will continue to grow in 2018. There would also be improvements in designing and tuning deep learning networks and for this, techniques such as automated hyperparameter tuning will be used widely. We will start seeing real-world uses of automated machine learning development popping up. Deep learning algorithms will continue to evolve around unsupervised and generative learning to detect features and structure in data. We will see high-value use cases of neural networks beyond image, audio, or video analysis such as for advanced text classification, musical genre recognition, biomedical image analysis etc. 2017 also saw ONNX standardization of neural network representations as an important and necessary step forward to interoperability. This will pave way for deep learning models to become more transparent i.e., start making it possible to explain their predictions, especially when the outcomes of these models are used to influence or inform human decisions. 2017 saw a large portion of deep learning research dedicated to GANs. In 2018, We should see implementations of some of GANs ideas, in real-world use cases such as in cyber threat detection. 
2018 may also see more deep learning methods gain Bayesian equivalents and probabilistic programming languages to start incorporating deep learning. 11. Autodidact AI: Reinforcement learning will be applied to a large number of real-world situations. Reinforcement learning systems learn by interacting with the environment through observations, actions, and rewards. The historic victory of AlphaGo, this year, was a milestone for reinforcement learning techniques. Although the technique has existed for decades, the idea to combine it with neural networks to solve complex problems (such as the game of Go) made it widely popular. In 2018, we will see reinforcement learning used in real-world situations. We will also see the development of several simulated environments to increase the progress of these algorithms. A notable fact about reinforcement learning algorithms is that they are trained via simulation, which eliminates the need for labeled data entirely. Given such advantages, we can see solutions which combine Reinforcement Learning and agent-based simulation in the coming year. We can expect to see more algorithms and bots enabling edge devices to learn on their own, especially in IoT environments. These bots will push the boundaries between AI techniques such as reinforcement learning, unsupervised learning and auto-generated training to learn on their own. 12. Gray Hat AI: AI will be on both sides of the cybersecurity challenge. 2017 saw some high-profile cases of ransomware attack, the most notable being WannaCry. Cybercrime is projected to cause $6 trillion in damages by 2021. Companies now need to respond better and faster to these security breaches. Since hiring and training and reskilling people is time-consuming and expensive, companies are turning to AI to automate tasks and detect threats. 2017 saw a variety of AI in cyber sec releases. From Watson AI helping companies stay ahead of hackers and cybersecurity attacks, to Darktrace—a company by Cambridge university mathematicians—which uses AI to spot patterns and prevent cyber crimes before they occur. In 2018 we may see AI being used for making better predictions about never seen before threats. We may also hear about AI being used to prevent a complex cybersecurity attack or the use of AI in incident management. On the research side, we can expect announcements related to securing IoT. McAfee has identified five cybersecurity trends for 2018 relating to Adversarial Machine Learning, Ransomware, Serverless Apps, Connected Home Privacy, and Privacy of Child-Generated Content. 13. AI in Robotics: Robotics development will be powered by Deep Reinforcement learning and Meta-learning Deep reinforcement learning was seen in a new light, especially in the field of robotics after Pieter Abbeel’s fantastic Keynote speech at NIPS 2017.  It talked about the implementation of Deep Reinforcement Learning (DRL) in Robotics, what challenges exist and how these challenges can be overcome. DRL has been widely used to play games (Alpha Go and Atari). In 2018, deep reinforcement learning will be used to instill more human-like qualities of discernment and complex decision-making in robots. Meta-learning was another domain which gained widespread attention in 2017. We Started with  model-agnostic meta-learning, which addresses the problem of discovering learning algorithms that generalize well from very few examples. 
Later in the year, more research on meta-learning for few shot learning was published, using deep temporal convolutional networks and, graph neural networks among others. We're also now seeing meta-learn approaches that learn to do active learning, cold-start item recommendation, reinforcement learning, and many more. More research and real-world implementations of these algorithms will happen in 2018. 2018 may also see developments to overcome the Meta-learning challenge of requiring more computing power so that it can be successfully applied to the field of robotics. Apart from these, there would be improvements in significant other challenges such as safe learning, and value alignment for AI in robotics. 14. AI Dapps: Within the developer community, the cryptocurrency hype will normalize and pave way for AI-powered Blockchain applications. Blockchain is expected to be the storehouse for 10% of the world GDP by 2025.  With such a high market growth, Amazon announced the AWS Blockchain Partners Portal to support customers’ integration of blockchain solutions with systems built on AWS. Following Amazon’s announcement, more tech companies are expected to launch such solutions in the coming year.  Blockchain in combination with AI will provide a way for maintaining immutability in a blockchain network creating a secure ecosystem for transactions and data exchange. AI BlockChain is a digital ledger that maximizes security while remaining immutable by employing AI agents that govern the chain. And 2018, will see more such security solutions coming up. A drawback of blockchain is that blockchain mining requires a high amount of energy.  Google’s DeepMind has already proven that AI can help in optimizing energy consumption in data centers. Similar results can be achieved for blockchain as well. For example, Ethereum has come up with proof of stake, a set of algorithms which selects validators based in part on the size of their respective monetary deposits instead of rewarding participants for spending computational resources, thus saving energy. Research is also expected in the area of using AI to reduce the network latency to enable faster transactions. 15. Quantum AI: Convergence of AI in Quantum Computing Quantum computing was called one of the three path-breaking technologies that will shape the world in the coming years by Microsoft CEO, Satya Nadella. 2017 began with Google unveiling a blueprint for quantum supremacy. IBM edged past them by developing a quantum computer capable of handling 50 qubits. Then came, Microsoft with their Quantum Development Kit and a new quantum programming language. The year ended with Rigetti Computing, a startup, announcing a new quantum algorithm for unsupervised machine learning. 2018 is expected to bring in more organizations, new and old, competing to develop a quantum computer with the capacity to handle even more qubits and process data-intensive large-scale algorithms at speeds never imagined before. As more companies successfully build quantum computers, they would also use them for making substantial progress on current efforts in AI development and for finding new areas of scientific discovery. As with Rigetti, new quantum algorithms would be developed to solve complex machine learning problems. We can also see tools, languages, and frameworks such as Microsoft's Q# programming language being developed to facilitate quantum app development. 16. 
AI doppelgangers: A large number of organizations will use Digital Twin Digital twin, as the name suggests, is a virtual replica of a product, process or service. 2017 saw some major work going in the field of Digital twin. The most important being GE, which now has over 551,000 digital twins built on their Predix platform. SAP expanded their popular IoT platform, SAP Leonardo with a new digital twin offering. Gartner has named Digital Twin as one of the top 10 Strategic Technology Trends for 2018. Following this news, we can expect to see more organizations coming up with their own digital twins. First to, monitor and control assets, to reduce asset downtime, lower the maintenance costs and improve efficiency. And later to organize and envision more complex entities, such as cities or even human beings. These Digital twins will be infused with AI capabilities to enable advanced simulation, operation, and analysis over the digital representations of physical objects. 2018 is expected to have digital twins make steady progress and benefit city architects, digital marketers, healthcare professionals and industrial planners. 17. Experiential AI: Rise in immersive media experiences based on Artificial Intelligence. 2017 saw the resurgence of Virtual Reality thanks to advances made in AI. Facebook unveiled a standalone headset, Oculus Go, to go on sale in early 2018. Samsung added a separate controller to its Gear VR, and Google's Daydream steadily improved from the remains of Google Cardboard. 2018 will see virtual reality the way 2017 saw GANs -  becoming an accepted convention with impressive use cases but not fully deployed at a commercial scale. It won’t be limited to just creating untethered virtual reality headgears, but will also combine the power of virtual reality, artificial intelligence, and conversational platforms to build a uniquely-immersive experience. These immersive technologies will come out of conventional applications(read the gaming industry) to be used in real estate industry, travel & hospitality industry, and other segments. Intel is reportedly working on is a VR set dedicated to sports events. It allows a viewer to experience the basketball game from any seats they choose. It uses AI and big data to analyze different games happening at the same time, so they can switch to watch them immediately. Not only that, Television will start becoming a popular source of immersive experiences. The next-gen televisions will be equipped with high definition cameras, as well as AI technology to analyze a viewer's emotions as they watch shows. 18. AR AI: Augmented reality content will be brought to smartphones Augmented Reality first garnered worldwide attention with the release of Pokemon Go. Following which a large number of organizations invested in the development of AR-enabled smartphones in 2017. Most notable was Apple’s ARKit framework, which allowed developers to create augmented reality experiences for iPhone and iPad.  Following which Google launched ARCore, to create augmented reality experiences at Android scale. Then came Snapchat, which released Lens Studio, a tool for creating customizable AR effects. The latest AR innovation came from Facebook, which launched AR Studio in open beta to bring AR into the everyday life of its users through the Facebook camera. For 2018, they are planning to develop 3D digital objects for people to place onto surfaces and interact within their physical space. 
17. Experiential AI: Rise in immersive media experiences based on artificial intelligence.
2017 saw the resurgence of virtual reality, thanks to advances made in AI. Facebook unveiled a standalone headset, Oculus Go, to go on sale in early 2018. Samsung added a separate controller to its Gear VR, and Google's Daydream steadily improved on the remains of Google Cardboard. 2018 will see virtual reality the way 2017 saw GANs: becoming an accepted convention with impressive use cases, but not yet deployed at full commercial scale. It won't be limited to creating untethered virtual reality headgear; it will also combine the power of virtual reality, artificial intelligence, and conversational platforms to build uniquely immersive experiences. These immersive technologies will move beyond their conventional applications (read: the gaming industry) into real estate, travel and hospitality, and other segments. Intel is reportedly working on a VR set dedicated to sports events. It allows viewers to experience a basketball game from any seat they choose, and it uses AI and big data to analyze different games happening at the same time so viewers can switch between them immediately. Not only that, television will start becoming a popular source of immersive experiences. Next-generation televisions will be equipped with high-definition cameras, as well as AI technology to analyze viewers' emotions as they watch shows.

18. AR AI: Augmented reality content will be brought to smartphones.
Augmented reality first garnered worldwide attention with the release of Pokemon Go, after which a large number of organizations invested in the development of AR-enabled smartphones in 2017. Most notable was Apple's ARKit framework, which allowed developers to create augmented reality experiences for iPhone and iPad. Google then launched ARCore to create augmented reality experiences at Android scale. Then came Snapchat, which released Lens Studio, a tool for creating customizable AR effects. The latest AR innovation came from Facebook, which launched AR Studio in open beta to bring AR into the everyday life of its users through the Facebook camera. For 2018, they are planning to develop 3D digital objects for people to place onto surfaces and interact with in their physical space. 2018 will further allow us to get a taste of augmented reality content through beta products set in the context of our everyday lives. A recent report published by Digi-Capital suggests that the mobile AR market will be worth an astonishing $108 billion by 2021. Following this report, more e-commerce websites will engage mobile users with some form of AR content, seeking inspiration from the likes of the IKEA Place AR app. Apart from this, more focus will be on building apps and frameworks that consume less battery and offer better mobile connectivity. With this, we complete our list of 18 AI trends to watch in '18. We would love to know which of our AI-driven predictions surprises you the most and which trends you agree with. Please feel free to leave a comment below with your views. Happy New Year!

18 striking AI Trends to watch in 2018 - Part 1

Sugandha Lahoti
27 Dec 2017
14 min read
Artificial Intelligence is the talk of the town. It has evolved past merely being a buzzword in 2016 to being used in a more practical manner in 2017. As 2018 rolls out, we will gradually notice AI transitioning into a necessity. We have prepared a detailed report on what we can expect from AI in the upcoming year. So sit back, relax, and enjoy the ride through the future. (Don't forget to wear your VR headgear!) Here are 18 things that will happen in 2018 that are either AI-driven or driving AI:

- Artificial General Intelligence may gain major traction in research.
- We will turn to AI-enabled solutions to solve mission-critical problems.
- Machine learning adoption in business will see rapid growth.
- Safety, ethics, and transparency will become an integral part of AI application design conversations.
- Mainstream adoption of AI on mobile devices.
- Major research on data-efficient learning methods.
- AI personal assistants will continue to get smarter.
- The race to conquer the AI-optimized hardware market will heat up further.
- We will see closer AI integration into our everyday lives.
- The cryptocurrency hype will normalize and pave the way for AI-powered blockchain applications.
- Advancements in AI and quantum computing will share a symbiotic relationship.
- Deep learning will continue to play a significant role in AI development progress.
- AI will be on both sides of the cybersecurity challenge.
- Augmented reality content will be brought to smartphones.
- Reinforcement learning will be applied to a large number of real-world situations.
- Robotics development will be powered by deep reinforcement learning and meta-learning.
- Rise in immersive media experiences enabled by AI.
- A large number of organizations will use digital twins.

1. General AI: AGI may start gaining traction in research. AlphaZero is only the beginning.
2017 saw Google's AlphaGo Zero (and later AlphaZero) beat human players at Go, chess, and other games. In addition to this, computers are now able to recognize images, understand speech, drive cars, and diagnose diseases better with time. AGI is an advancement of AI which deals with bringing machine intelligence as close to human intelligence as possible, so that machines can potentially do any intellectual task that a human can. The success of AlphaGo covered one of the crucial aspects of AGI systems: the ability to learn continually, avoiding catastrophic forgetting. However, there is a lot more to achieving human-level general intelligence than the ability to learn continually. For instance, today's AI systems can draw on skills they learned in one game to play another, but they lack the ability to generalize the learned skill. Unlike humans, these systems do not seek solutions from previous experiences. An AI system cannot ponder and reflect on a new task, analyze its capabilities, and work out how best to apply them. In 2018, we expect to see advanced research in the areas of deep reinforcement learning, meta-learning, transfer learning, evolutionary algorithms, and other areas that aid in developing AGI systems. Detailed aspects of these ideas are highlighted in later points. We can indeed say that Artificial General Intelligence is inching closer than ever before, and 2018 is expected to cover major ground in that direction.

2. Enterprise AI: Machine learning adoption in enterprises will see rapid growth.
2017 saw a rise in cloud offerings by major tech players, such as the Amazon Sagemaker, Microsoft Azure Cloud, Google Cloud Platform, allowing business professionals and innovators to transfer labor-intensive research and analysis to the cloud. Cloud is a $130 billion industry as of now, and it is projected to grow.  Statista carried out a survey to present the level of AI adoption among businesses worldwide, as of 2017.  Almost 80% of the participants had incorporated some or other form of AI into their organizations or planned to do so in the future. Source: https://www.statista.com/statistics/747790/worldwide-level-of-ai-adoption-business/ According to a report from Deloitte, medium and large enterprises are set to double their usage of machine learning by the end of 2018. Apart from these, 2018 will see better data visualization techniques, powered by machine learning, which is a critical aspect of every business.  Artificial intelligence is going to automate the cycle of report generation and KPI analysis, and also, bring in deeper analysis of consumer behavior. Also with abundant Big data sources coming into the picture, BI tools powered by AI will emerge, which can harness the raw computing power of voluminous big data for data models to become streamlined and efficient. 3. Transformative AI: We will turn to AI enabled solutions to solve mission-critical problems. 2018 will see the involvement of AI in more and more mission-critical problems that can have world-changing consequences: read enabling genetic engineering, solving the energy crisis, space exploration, slowing climate change, smart cities, reducing starvation through precision farming, elder care etc. Recently NASA revealed the discovery of a new exoplanet, using data crunched from Machine learning and AI. With this recent reveal, more AI techniques would be used for space exploration and to find other exoplanets. We will also see the real-world deployment of AI applications. So it will not be only about academic research, but also about industry readiness. 2018 could very well be the year when AI becomes real for medicine. According to Mark Michalski, executive director, Massachusetts General Hospital and Brigham and Women’s Center for Clinical Data Science, “By the end of next year, a large number of leading healthcare systems are predicted to have adopted some form of AI within their diagnostic groups.”  We would also see the rise of robot assistants, such as virtual nurses, diagnostic apps in smartphones, and real clinical robots that can monitor patients, take care of the elderly, alert doctors, and send notifications in case of emergency. More research will be done on how AI enabled technology can help in difficult to diagnose areas in health care like mental health, the onset of hereditary diseases among others. Facebook's attempt at detection of potential suicidal messages using AI is a sign of things to come in this direction. As we explore AI enabled solutions to solve problems that have a serious impact on individuals and societies at large, considering the ethical and moral implications of such solutions will become central to developing them, let alone hard to ignore. 4. Safe AI: Safety, Ethics, and Transparency in AI applications will become integral to conversations on AI adoption and app design. The rise of machine learning capabilities has also given rise to forms of bias, stereotyping and unfair determination in such systems. 
2017 saw some high profile news stories about gender bias, object recognition datasets like MS COCO, to racial disparities in education AI systems. At NIPS 2017, Kate Crawford talked about bias in machine learning systems which resonated greatly with the community and became pivotal to starting conversations and thinking by other influencers on how to address the problems raised.  DeepMind also launched a new unit, the DeepMind Ethics & Society,  to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI for the benefit of all. Independent bodies like IEEE also pushed for standards in it’s ethically aligned design paper. As news about the bro culture in Silicon Valley and the lack of diversity in the tech sector continued to stay in the news all of 2017, it hit closer home as the year came to an end, when Kristian Lum, Lead Statistician at HRDAG, described her experiences with harassment as a graduate student at prominent stat conferences. This has had a butterfly effect of sorts with many more women coming forward to raise the issue in the ML/AI community. They talked about the formation of a stronger code of conduct by boards of key conferences such as NIPS among others. Eric Horvitz, a Microsoft research director, called Lum’s post a "powerful and important report." Jeff Dean, head of Google’s Brain AI unit applauded Lum for having the courage to speak about this behavior. Other key influencers from the ML and statisticians community also spoke in support of Lum and added their views on how to tackle the problem. While the road to recovery is long and machines with moral intelligence may be decades away, 2018 is expected to start that journey in the right direction by including safety, ethics, and transparency in AI/ML systems. Instead of just thinking about ML contributing to decision making in say hiring or criminal justice, data scientists would begin to think of the potential role of ML in the harmful representation of human identity. These policies will not only be included in the development of larger AI ecosystems but also in national and international debates in politics, businesses, and education. 5. Ubiquitous AI: AI will start redefining life as we know it, and we may not even know it happened. Artificial Intelligence will gradually integrate into our everyday lives. We will see it in our everyday decisions like what kind of food we eat, the entertainment we consume, the clothes we wear, etc.  Artificially intelligent systems will get better at complex tasks that humans still take for granted, like walking around a room and over objects. We’re going to see more and more products that contain some form of AI enter our lives. AI enabled stuff will become more common and available. We will also start seeing it in the background for life-altering decisions we make such as what to learn, where to work, whom to love, who our friends are,  whom should we vote for, where should we invest, and where should we live among other things. 6. Embedded AI: Mobile AI means a radically different way of interacting with the world. There is no denying that AI is the power source behind the next generation of smartphones. A large number of organizations are enabling the use of AI in smartphones, whether in the form of deep learning chips, or inbuilt software with AI capabilities. The mobile AI will be a  combination of on-device AI and cloud AI. 
Intelligent phones will have end-to-end capabilities that support coordinated development of chips, devices, and the cloud. The release of iPhone X’s FaceID—which uses a neural network chip to construct a mathematical model of the user’s face— and self-driving cars are only the beginning. As 2018 rolls out we will see vast applications on smartphones and other mobile devices which will run deep neural networks to enable AI. AI going mobile is not just limited to the embedding of neural chips in smartphones. The next generation of mobile networks 5G will soon greet the world. 2018 is going to be a year of closer collaborations and increasing partnerships between telecom service providers, handset makers, chip markers and AI tech enablers/researchers. The Baidu-Huawei partnership—to build an open AI mobile ecosystem, consisting of devices, technology, internet services, and content—is an example of many steps in this direction. We will also see edge computing rapidly becoming a key part of the Industrial Internet of Things (IIoT) to accelerate digital transformation. In combination with cloud computing, other forms of network architectures such as fog and mist would also gain major traction. All of the above will lead to a large-scale implementation of cognitive IoT, which combines traditional IoT implementations with cognitive computing. It will make sensors capable of diagnosing and adapting to their environment without the need for human intervention. Also bringing in the ability to combine multiple data streams that can identify patterns. This means we will be a lot closer to seeing smart cities in action. 7. Data-sparse AI: Research into data efficient learning methods will intensify 2017 saw highly scalable solutions for problems in object detection and recognition, machine translation, text-to-speech, recommender systems, and information retrieval.  The second conference on Machine Translation happened in September 2017.  The 11th ACM Conference on Recommender Systems in August 2017 witnessed a series of papers presentations, featured keynotes, invited talks, tutorials, and workshops in the field of recommendation system. Google launched the Tacotron 2 for generating human-like speech from text. However, most of these researches and systems attain state-of-the-art performance only when trained with large amounts of data. With GDPR and other data regulatory frameworks coming into play, 2018 is expected to witness machine learning systems which can learn efficiently maintaining performance, but in less time and with less data. A data-efficient learning system allows learning in complex domains without requiring large quantities of data. For this, there would be developments in the field of semi-supervised learning techniques, where we can use generative models to better guide the training of discriminative models. More research would happen in the area of transfer learning (reuse generalize knowledge across domains), active learning, one-shot learning, Bayesian optimization as well as other non-parametric methods.  In addition, researchers and organizations will exploit bootstrapping and data augmentation techniques for efficient reuse of available data. Other key trends propelling data efficient learning research are growing in-device/edge computing, advancements in robotics, AGI research, and energy optimization of data centers, among others. 8. Conversational AI: AI personal assistants will continue to get smarter AI-powered virtual assistants are expected to skyrocket in 2018. 
2017 was filled to the brim with new releases. Amazon brought out the Echo Look and the Echo Show. Google made its personal assistant more personal by allowing up to six accounts to be linked to the Google Assistant built into the Home via the Home app. Bank of America unveiled Erica, its AI-enabled digital assistant. As 2018 rolls out, AI personal assistants will find their way into an increasing number of homes and consumer gadgets. This includes increased availability of AI assistants in our smartphones and smart speakers with built-in support for platforms such as Amazon's Alexa and Google Assistant. With the beginning of the new year, we can see personal assistants integrating into our daily routines. Developers will build voice support into a host of appliances and gadgets by using various voice assistant platforms. More importantly, developers in 2018 will try their hand at conversational technology that includes emotional sensitivity (affective computing) as well as machine translation (the ability to communicate seamlessly between languages). Personal assistants will be able to recognize speech patterns, for instance those indicative of someone wanting help, and AI bots may also be utilized for psychiatric counseling or for providing support to isolated people. And it's all set to begin with the AI Assistant Summit in San Francisco, scheduled for 25-26 January 2018, which will feature talks by the world's leading innovators in AI assistants and artificial intelligence.

9. AI Hardware: Race to conquer the AI-optimized hardware market will heat up further.
Top tech companies (read Google, IBM, Intel, Nvidia) are investing heavily in the development of AI/ML-optimized hardware. Research and Markets has predicted the global AI chip market will grow at a rate of about 54% between 2017 and 2021. 2018 will see further hardware designs intended to greatly accelerate the next generation of applications and run AI computational jobs. As 2018 begins, chip makers will battle it out to determine who creates the hardware that artificial intelligence lives on. Not only that, there will be a rise in the development of new AI products, both hardware and software platforms, that run deep learning programs and algorithms. Chips which move away from the traditional one-size-fits-all approach toward application-based AI hardware will also grow in popularity. 2018 will see hardware which does not only store data but also transforms it into usable information; the trend for AI will head in the direction of task-optimized hardware. 2018 may also see hardware organizations move into software domains and vice versa. Nvidia, most famous for its Volta GPUs, has come up with the NVIDIA DGX-1, an AI supercomputer that ships with an integrated software stack designed to streamline the deep learning workflow. More such transitions are expected at the highly anticipated CES 2018. Phew, that was a lot of writing! But I hope you found it just as interesting to read as I found it to write. However, we are not done yet. And here is part 2 of our 18 AI trends in '18.

10 Machine Learning Tools to watch in 2018

Amey Varangaonkar
26 Dec 2017
7 min read
2017 has been a wonderful year for Machine Learning. Developing smart, intelligent models has now become easier than ever thanks to the extensive research into and development of newer and more efficient tools and frameworks. While the likes of Tensorflow, Keras, PyTorch and some more have ruled the roost in 2017 as the top machine learning and deep learning libraries, 2018 promises to be even more exciting with a strong line-up of open source and enterprise tools ready to take over - or at least compete with - the current lot. In this article, we take a look at 10 such tools and frameworks which are expected to make it big in 2018. Amazon Sagemaker One of the major announcements in the AWS re:Invent 2017 was the general availability of Amazon Sagemaker - a new framework that eases the building and deployment of machine learning models on the cloud. This service will be of great use to developers who don’t have a deep exposure to machine learning, by giving them a variety of pre-built development environments, based on the popular Jupyter notebook format. Data scientists looking to build effective machine learning systems on AWS and to fine-tune their performance without spending a lot of time will also find this service useful. DSSTNE Yet another offering by Amazon, DSSTNE (popularly called as Destiny) is an open source library for developing machine learning models. It’s primary strength lies in the fact that it can be used to train and deploy recommendation models which work with sparse inputs. The models developed using DSSTNE can be trained to use multiple GPUs, are scalable and are optimized for fast performance. Boasting close to 4000 stars on GitHub, this library is yet another tool to look out for in 2018! Azure Machine Learning Workbench Way back in 2014, Microsoft put Machine Learning and AI capabilities on the cloud by releasing Azure Machine Learning. However, this was strictly a cloud-only service. During the Ignite 2017 conference held in September, Microsoft announced the next generation of Machine Learning on Azure - bringing machine learning capabilities to the organizations through their Azure Machine Learning Workbench. Azure ML Workbench is a cross-platform client which can run on both Windows and Apple machines. It is tailor-made for data scientists and machine learning developers who want to perform their data manipulation and wrangling tasks. Built for scalability, users can get intuitive insights from a broad range of data sources and use them for their data modeling tasks. Neon Way back in 2016, Intel announced their intentions to become a major player in the AI market with the $350 million acquisition of Nervana, an AI startup which had been developing both hardware and software for effective machine learning. With Neon, they now have a fast, high-performance deep learning framework designed specifically to run on top of the recently announced Nervana Neural Network Processor. Designed for ease of use and supporting integration with the iPython notebook, Neon supports training of common deep learning models such as CNN, RNN, LSTM and others. The framework is showing signs of continuous improvement and with over 3000 stars on GitHub, Neon looks set to challenge the major league of deep learning libraries in the years to come. Microsoft DMLT One of the major challenges with machine learning for enterprises is the need to scale out the models quickly, without compromising on the performance while minimising significant resource consumption. 
Microsoft’s Distributed Machine Learning framework is designed to do just that. Open sourced by Microsoft so that it can receive a much wider support from the community, DMLT allows machine learning developers and data scientists to take their single-machine algorithms and scale them out to build high performance distributed models. DMLT mostly focuses on distributed machine learning algorithms and allows you to perform tasks such as word embedding, sampling, and gradient boosting with ease. The framework does not have support for training deep learning models yet, however, we can expect this capability to be added to the framework very soon. Google Cloud Machine Learning Engine Considered to be Google’s premium machine learning offering, the Cloud Machine Learning Engine allows you to build machine learning models on all kinds of data with relative ease. Leveraging the popular Tensorflow machine learning framework, this platform can be used to perform predictive analytics at scale. It also lets you fine-tune and optimize the performance of your machine learning models using the popular HyperTune feature. With a serverless architecture supporting automated monitoring, provisioning and scaling, the Machine Learning Engine ensures you only have to worry about the kind of machine learning models you want to train. This feature is especially useful for machine learning developers looking to build large-scale models on the go. Apple Core ML Developed by Apple to help iOS developers build smarter applications, the Core ML framework is what makes Siri smarter. It takes advantage of both CPU and GPU capabilities to allow the developers to build different kinds of machine learning and deep learning models, which can then be integrated seamlessly into the iOS applications. Core ML supports all popularly used machine learning algorithms such as decision trees, Support Vector Machines, linear models and more. Targeting a variety of real-world use-cases such as natural language processing, computer vision and more, Core ML’s capabilities make it possible to analyze data on the Apple devices on the go, without having to import to the models for learning. Apple Turi Create In many cases, the iOS developers want to customize the machine learning models they want to integrate into their apps. For this, Apple has come up with Turi Create. This library allows you to focus on the task at hand rather than deciding which algorithm to use. You can be flexible in terms of the data set, the scale at which the model needs to operate and what platform the models need to be deployed to. Turi Create comes in very handy for building custom models for recommendations, image processing, text classification and many more tasks. All you need is some knowledge of Python to get started! Convnetjs Move over supercomputers and clusters of machines, deep learning is well and truly here - on your web browsers! You can now train your advanced machine learning and deep learning models directly on your browser, without needing a CPU or a GPU, using the popular Javascript-based Convnetjs library. Originally written by Andrej Karpathy, the current director of AI at Tesla, the library has since been open sourced and extended by the contributions of the community. You can easily train deep neural networks and even reinforcement learning models on your browser directly, powered by this very unique and useful library. 
This library is suited for those who do not wish to purchase serious hardware for training computationally-intensive models. With close to 9000 stars on GitHub, Convnetjs has been one of the rising stars in 2017 and is quickly becoming THE go-to library for deep learning. BigML BigML is a popular machine learning company that provides an easy to use platform for developing machine learning models. Using BigML’s REST API, you can seamlessly train your machine learning models on their platform. It allows you to perform different tasks such as anomaly detection, time series forecasting, and build apps that perform real-time predictive analytics. With BigML, you can deploy your models on-premise or on the cloud, giving you the flexibility of selecting the kind of environment you need to run your machine learning models. True to their promise, BigML really do make ‘machine learning beautifully simple for everyone’. So there you have it! With Microsoft, Amazon, and Google all fighting for supremacy in the AI space, 2018 could prove to be a breakthrough year for developments in Artificial Intelligence. Add to this mix the various open source libraries that aim to simplify machine learning for the users, and you get a very interesting list of tools and frameworks to keep a tab on. The exciting thing about all this is - all of them possess the capability to become the next TensorFlow and cause the next AI disruption.  

Hitting the right notes in 2017: AI in a song for Data Scientists

Aarthi Kumaraswamy
26 Dec 2017
3 min read
A lot, I mean lots and lots of great articles have been written already about AI’s epic journey in 2017. They all generally agree that 2017 sets the stage for AI in very real terms.  We saw immense progress in academia, research and industry in terms of an explosion of new ideas (like capsNets), questioning of established ideas (like backprop, AI black boxes), new methods (Alpha Zero’s self-learning), tools (PyTorch, Gluon, AWS SageMaker), and hardware (quantum computers, AI chips). New and existing players gearing up to tap into this phenomena even as they struggled to tap into the limited talent pool at various conferences and other community hangouts. While we have accelerated the pace of testing and deploying some of those ideas in the real world with self-driving cars, in media & entertainment, among others, progress in building a supportive and sustainable ecosystem has been slow. We also saw conversations on AI ethics, transparency, interpretability, fairness, go mainstream alongside broader contexts such as national policies, corporate cultural reformation setting the tone of those conversations. While anxiety over losing jobs to robots keeps reaching new heights proportional to the cryptocurrency hype, we saw humanoids gain citizenship, residency and even talk about contesting in an election! It has been nothing short of the stuff, legendary tales are made of: struggle, confusion, magic, awe, love, fear, disgust, inspiring heroes, powerful villains, misunderstood monsters, inner demons and guardian angels. And stories worth telling must have songs written about them! Here’s our ode to AI Highlights in 2017 while paying homage to an all-time favorite: ‘A few of my favorite things’ from Sound of Music. Next year, our AI friends will probably join us behind the scenes in the making of another homage to the extraordinary advances in data science, machine learning, and AI. [box type="shadow" align="" class="" width=""] Stripes on horses and horsetails on zebras Bright funny faces in bowls full of rameN Brown furry bears rolled into pandAs These are a few of my favorite thinGs   TensorFlow projects and crisp algo models Libratus’ poker faces, AlphaGo Zero’s gaming caboodles Cars that drive and drones that fly with the moon on their wings These are a few of my favorite things   Interpreting AI black boxes, using Python hashes Kaggle frenemies and the ones from ML MOOC classes R white spaces that melt into strings These are a few of my favorite things   When models don’t converge, and networks just forget When I am sad I simply remember my favorite things And then I don’t feel so bad[/box]   PS: We had to leave out many other significant developments in the above cover as we are limited in our creative repertoire. We invite you to join in and help us write an extended version together! The idea is to make learning about data science easy, accessible, fun and memorable!    

How Data Science saved Christmas

Aaron Lazar
22 Dec 2017
9 min read
It’s the middle of December and it’s shivery cold in the North Pole at -20°C. A fat old man sits on a big brown chair, beside the fireplace, stroking his long white beard. His face has a frown on it, quite his unusual self. Mr. Claus quips, “Ruddy mailman should have been here by now! He’s never this late to bring in the li'l ones’ letters.” [caption id="attachment_3284" align="alignleft" width="300"] Nervous Santa Claus on Christmas Eve, he is sitting on the armchair and resting head on his hands[/caption] Santa gets up from his chair, his trouser buttons crying for help, thanks to his massive belly. He waddles over to the window and looks out. He’s sad that he might not be able to get the children their gifts in time, this year. Amidst the snow, he can see a glowing red light. “Oh Rudolph!” he chuckles. All across the living room are pictures of little children beaming with joy, holding their presents in their hands. A small smile starts building and then suddenly, Santa gets a new-found determination to get the presents over to the children, come what may! An idea strikes him as he waddles over to his computer room. Now Mr. Claus may be old on the outside, but on the inside, he’s nowhere close! He recently set up a new rig, all by himself. Six Nvidia GTX Titans, coupled with sixteen gigs of RAM, a 40-inch curved monitor that he uses to keep an eye on who’s being naughty or nice, and a 1000 watt home theater system, with surround sound, heavy on the bass. On the inside, he’s got a whole load of software on the likes of the Python language (not the Garden of Eden variety), OpenCV - his all-seeing eye that’s on the kids and well, Tensorflow et al. Now, you might wonder what an old man is doing with such heavy software and hardware. A few months ago, Santa caught wind that there’s a new and upcoming trend that involves working with tonnes of data, cleaning, processing and making sense of it. The idea of crunching data somehow tickled the old man and since then, the jolly good master tinkerer and his army of merry elves have been experimenting away with data. Santa’s pretty much self-taught at whatever he does, be it driving a sleigh or learning something new. A couple of interesting books he picked up from Packt were, Python Data Science Essentials - Second Edition, Hands-On Data Science and Python Machine Learning, and Python Machine Learning - Second Edition. After spending some time on the internet, he put together a list of things he needed to set up his rig and got them from Amazon. [caption id="attachment_3281" align="alignright" width="300"] Santa Claus is using a laptop on the top of a house[/caption] He quickly boots up the computer and starts up Tensorflow. He needs to come up with a list of probable things that each child would have wanted for Christmas this year. Now, there are over 2 billion children in the world and finding each one’s wish is going to be more than a task! But nothing is too difficult for Santa! He gets to work, his big head buried in his keyboard, his long locks falling over his shoulder. 
So, this was his plan: Considering that the kids might have shared their secret wish with someone, Santa plans to tackle the problem from different angles, to reach a higher probability of getting the right gifts: He plans to gather email and Social Media data from all the kids’ computers - all from the past month It’s a good thing kids have started owning phones at such an early age now - he plans to analyze all incoming and outgoing phone calls that have happened over the course of the past month He taps into every country's local police department’s records to stream all security footage all over the world [caption id="attachment_3288" align="alignleft" width="300"] A young boy wearing a red Christmas hat and red sweater is writing a letter to Santa Claus. The child is sitting at a wooden table in front of a Christmas tree.[/caption] If you’ve reached till here, you’re probably wondering whether this article is about Mr.Claus or Mr.Bond. Yes, the equipment and strategy would have fit an MI6 or a CIA agent’s role. You never know, Santa might just be a retired agent. Do they ever retire? Hmm! Anyway, it takes a while before he can get all the data he needs. He trusts Spark to sort this data in order, which is stored in a massive data center in his basement (he’s a bit cautious after all the news about data breaches). And he’s off to work! He sifts through the emails and messages, snorting from time to time at some of the hilarious ones. Tensorflow rips through the data, picking out keywords for Santa. It takes him a few hours to get done with the emails and social media data alone! By the time he has a list, it’s evening and time for supper. Santa calls it a day and prepares to continue the next day. The next day, Santa gets up early and boots up his equipment as he brushes and flosses. He plonks himself in the huge swivel chair in front of the monitor, munching on freshly baked gingerbread. He starts tapping into all the phone company databases across the world, fetching all the data into his data center. Now, Santa can’t afford to spend the whole time analyzing voices himself, so he lets Tensorflow analyze voices and segregate the keywords it picks up from the voice signals. Every kid’s name to a possible gift. Now there were a lot of unmentionable things that got linked to several kids names. Santa almost fell off his chair when he saw the list. “These kids grow up way too fast, these days!” It’s almost 7 PM in the evening when Santa realizes that there’s way too much data to process in a day. A few days later, Santa returns to his tech abode, to check up on the progress of the call data processing. There’s a huge list waiting in front of him. He thinks to himself, “This will need a lot of cleaning up!” He shakes his head thinking, I should have started with this! He now has to munge through that camera footage! Santa had never worked on so much data before so he started to get a bit worried that he might be unable to analyze it in time. He started pacing around the room trying to think up a workaround. Time was flying by and he still did not know how to speed up the video analyses. Just when he’s about to give up, the door opens and Beatrice walks in. Santa almost trips as he runs to hug his wife! Beatrice is startled for a bit but then breaks into a smile. “What is it dear? Did you miss me so much?” Santa replies, “You can’t imagine how much! I’ve been doing everything on my own and I really need your help!” Beatrice smiles and says, “Well, what are we waiting for? 
Let’s get down to it!” Santa explains the problem to Beatrice in detail and tells her how far he’s reached in the analysis. Beatrice thinks for a bit and asks Santa, “Did you try using Keras on top of TensorFlow?” Santa, blank for a minute, nods his head. Beatrice continues, “Well from my experience, Keras gives TensorFlow a boost of about 10%, which should help quicken the analysis. Santa just looks like he’s made the best decision marrying Beatrice and hugs her again! “Bea, you’re a genius!” he cries out. “Yeah, and don’t forget to use Matplotlib!” she yells back as Santa hurries back to his abode. He’s off to work again, this time saddling up Keras to work on top of TensorFlow. Hundreds and thousands of terabytes of video data flowing into the machines. He channels the output through OpenCV and ties it with TensorFlow to add a hint of Deep Learning. He quickly types out some Python scripts to integrate both the tools to create the optimal outcome. And then the wait begins. Santa keeps looking at his watch every half hour, hoping that the processing happens fast. The hardware has begun heating up quite a bit and he quickly races over to bring a cooler that’s across the room. While he waits for the videos to finish up, he starts working on sifting out the data from the text and audio. He remembers what Beatrice said and uses Matplotlib to visualize it. Soon he has a beautiful map of the world with all the children’s names and their possible gifts beside. Three days later, the video processing gets done Keras truly worked wonders for TensorFlow! Santa now has another set of data to help him narrow down the gift list. A few hours later he’s got his whole list visualized on Matplotlib. [caption id="attachment_3289" align="alignleft" width="300"] Santa Claus riding on sleigh with gift box against snow falling on fir tree forest[/caption] There’s one last thing left to do! He suits up in red and races out the door to Rudolph and the other reindeer, unties them from the fence and leads them over to the sleigh. Once they’re fastened, he loads up an empty bag onto the sleigh and it magically gets filled up. He quickly checks it to see if all is well and they’re off! It’s Christmas morning and all the kids are racing out of bed to rip their presents open! There are smiles all around and everyone’s got a gift, just as the saying goes! Even the ones who’ve been naughty have gotten gifts. Back in the North Pole, the old man is back in his abode, relaxing in an easy chair with his legs up on the table. The screen in front of him runs real-time video feed of kids all over the world opening up their presents. A big smile on his face, Santa turns to look out the window at the glowing red light amongst the snow, he takes a swig of brandy from a hip flask. Thanks to Data Science, this Christmas is the merriest yet!

NIPS 2017 Special: Decoding the Human Brain for Artificial Intelligence to make smarter decisions

Amarabha Banerjee
18 Dec 2017
6 min read
Yael Niv is an Associate Professor of Psychology at the Princeton Neuroscience Institute since 2007. Her preferred areas of research include human and animal reinforcement learning and decision making. At her Niv lab, she studies day-to-day processes that animals and humans use to learn by trial and error, without explicit instructions given. In order to predict future events and to act upon the current environment so as to maximize reward and minimize the damage. Our article aims to deliver key points from Yael Niv’s keynote presentation at NIPS 2017. She talks about the ability of Artificial Intelligence systems to perform simple human-like tasks effectively using State representations in the human brain. The talk also deconstructs the complex human decision-making process. Further, we explore how a human brain breaks down complex procedures into simple states and how these states determine our decision-making capabilities.This, in turn, gives valuable insights into the design and architecture of smart AI systems with decision-making capabilities. Staying Simple is Complex What do you think happens when a human being crosses a road, especially when it’s a busy street and you constantly need to keep an eye on multiple checkpoints in order to be safe and sound? The answer is quite ironical. The human brain breaks down the complex process into multiple simple blocks. The blocks can be termed as states - and these states then determine decisions such as when to cross the road or at what speed to cross the road. In other words, the states can be anything - from determining the incoming traffic density to maintaining the calculation of your walking speed. These states help the brain to ignore other spurious or latent tasks in order to complete the priority task at hand. Hence, the computational power of the brain is optimized. The human brain possesses the capability to focus on the most important task at hand and then breaks it down into multiple simple tasks. The process of making smarter AI systems with complex decision-making capabilities can take inspiration from this process. The Practical Human Experiment To observe how the human brain behaves when urged to draw complex decisions, a few experiments were performed. The primary objective of these experiments was to verify the hypothesis that the decision making information in the human brain is stored in a part of the frontal brain called as Orbitofrontal cortex. The two experiments performed are described in brief below: Experiment 1 The participants were given sets of circles at random and they were asked to guess the number of circles in the cluster within 2 minutes. After they guessed the first time, the experimenter disclosed the correct number of circles. Then the subjects were further given a cluster of circles in two different colors (red and yellow) to repeat the guessing activity for each cluster. However, the experimenter never disclosed the fact that they will be given different colored clusters next. Observation: The most important observation derived from the experiment was that after the subject knew the correct count, their guesses revolved around that number irrespective of whether that count mattered for the next set of circle clusters given. That is, the count had actually changed for the two color specimens given to them. 
The important factor here is that the participants were not told that color would be a parameter to determine the number of circles in each set and still it played a huge part in guessing the number of circles in each set. This way it acted as a latent factor, which was present in the subconscious of the participants and was not a direct parameter. And, this being a latent factor was not in the list of parameters which played an important in determining the number of circles. But still, it played an important part in changing the overall count which was significantly higher for the red color than for the yellow color cluster. Hence, the experiment proved the hypothesis that latent factors are an integral part of intelligent decision-making capabilities in human beings. Experiment 2 The second experiment was performed to ascertain the hypothesis that the Orbitofrontal cortex contains all the data to help the human brain make complex decisions. For this, human brains were monitored using MRI to track the brain activity during the decision making process. In this experiment, the subjects were given a straight line and a dot. They were then asked to predict the next line from the dot - both in terms of line direction and its length. After completing this process for a given number of times, the participants were asked to remember the length and direction of the first line. There was a minor change among the sets of lines and dots. One group had a gradual change in line length and direction and another group had a drastic change in the middle. Observation: The results showed that the group with a gradual change of line length and direction were more helpful in preserving the first data and the one with drastic change was less accurate. The MRI reports showed signs that the classification information was primarily stored in the Orbitofrontal cortex. Hence it is considered as one of the most important parts of the human decision-making process. Shallow Learning with Deep Representations The decision-making capabilities and the effect of latent factors involved in it form the basis of dormant memory in humans. An experiment on rats was performed to explain this phenomenon. In the experiment, 4 rats were given electric shock accompanied by a particular type of sound for a day or two. On the third day, they reacted to the sound even without being given electric shocks. Ivan Pavlov has coined this term as Classical Conditioning theory wherein a relatively permanent change in behavior can be seen as a result of experience or continuous practice. Such instances of conditioning can be deeply damaging, for example in case of PTSD (Post Traumatic Stress Disorder) patients and other trauma victims. In order to understand the process of State representations being stored in memory, the reversal mechanism, i.e how to reverse the process also needs to be understood. For that, three techniques were tested on these rats: The rats were not given any shock but were subjected to the sound The rats were given shocks accompanied by sound at regular intervals and sounds without shock The shocks were slowly reduced in numbers but the sound continued The best results in reversing the memory were observed in case of the third technique, which is known as gradual extinction. In this way, a simple reinforcement learning mechanism is shown to be very effective because it helps in creating simple states which are manageable efficiently and trainable easily. 
Along with this, if we could extract information from brain imaging data derived from the Orbitofrontal cortex, these simple representational states can shed a lot of light into making complex computational processes simpler and enable us to make smarter AI systems for a better future.
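The trial-and-error learning mechanism discussed above can be made concrete with the classic Rescorla-Wagner (temporal-difference style) update rule, in which an association strength is nudged toward the observed outcome on every trial. The sketch below is our own simplified illustration, not code from the Niv lab: the learning rate and trial counts are arbitrary, and it deliberately ignores the latent-state effects that make gradual extinction more effective than abrupt extinction in practice. It simply shows acquisition of the sound-shock association followed by its decay once the shock is removed.

```python
def update(v, outcome, alpha=0.3):
    """Rescorla-Wagner / TD-style update: move the prediction toward the outcome."""
    return v + alpha * (outcome - v)

v = 0.0  # association strength between the sound and the shock

# Acquisition: the sound is paired with a shock (outcome = 1) for 10 trials
for _ in range(10):
    v = update(v, outcome=1.0)
print(f"after acquisition: {v:.2f}")   # close to 1.0 -> strong conditioned response

# Extinction: the sound is presented without the shock (outcome = 0)
for _ in range(10):
    v = update(v, outcome=0.0)
print(f"after extinction:  {v:.2f}")   # decays back toward 0.0
```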

NIPS 2017 Special: A deep dive into Deep Bayesian and Bayesian Deep Learning with Yee Whye Teh

Savia Lobo
15 Dec 2017
8 min read
Yee Whye Teh is a professor at the department of Statistics of the University of Oxford and also a research scientist at DeepMind. He works on statistical machine learning, focussing on Bayesian nonparametrics, probabilistic learning, and deep learning. The motive of this article aims to bring our readers to Yee’s keynote speech at the NIPS 2017. Yee’s keynote ponders deeply on the interface between two perspectives on machine learning: Bayesian learning and Deep learning by exploring questions like: How can probabilistic thinking help us understand deep learning methods or lead us to interesting new methods? Conversely, how can deep learning technologies help us develop advanced probabilistic methods? For a more comprehensive and in-depth understanding of this novel approach, be sure to watch the complete keynote address by Yee Whye Teh on  NIPS facebook page. All images in this article come from Yee’s presentation slides and do not belong to us. The history of machine learning has shown a growth in both model complexity and in model flexibility. The theory led models have started to lose their shine. This is because machine learning is at the forefront of a revolution that could be called as data led models or the data revolution. As opposed to theory led models, data-led models try not to impose too many assumptions on the processes that have to be modeled and are rather superflexible non-parametric models that can capture the complexities but they require large amount of data to operate.   On the model flexibility side, we have various approaches that have been explored over the years. We have kernel methods, Gaussian processes, Bayesian nonparametrics and now we have deep learning as well. The community has also developed evermore complex frameworks both graphical and programmatic to compose large complex models from simpler building blocks. In the 90’s we had graphical models, later we had probabilistic programming systems, followed by deep learning systems like TensorFlow, Theano, and Torch. A recent addition is probabilistic Torch, which brings together ideas from both the probabilistic Bayesian learning and deep learning. On one hand we have Bayesian learning, which deals with learning as inference in some probabilistic models. On the other hand we have deep learning models, which view learning as optimization functions parametrized by neural networks. In recent years there has been an explosion of exciting research at this interface of these two popular approaches resulting in increasingly complex and exciting models. What is Bayesian theory of learning Bayesian learning describes an ideal learner as one who interacts with the world in order to know its state, which is given by θ. He/she makes some observations about the world by deducing a model in Bayesian context. This model is a joint distribution of both the unknown state of the world θ and the observation about the world x. The model consists of prior distribution and marginal distribution, combining which gives a reverse conditional distribution also known as posterior, which describes the totality of the agent's knowledge about the world after he/she sees x. This posterior can also be used for predicting future observations and act accordingly. Issues associated with Bayesian learning Rigidity Learning can be wrong if model is wrong Not all prior knowledge can be encoded as joint distribution Simple analytic forms are limiting for conditional distributions 2. 
Scalability: Intractable to compute this posterior and approximations have to be made, which then introduces trade offs between efficiency and accuracy. As a result, it is often assumed that Bayesian techniques are not scalable. To address these issues, the speaker highlights some of his recent projects which showcase scenarios where deep learning ideas are applied to Bayesian models (Deep Bayesian learning) or in the reverse applying Bayesian ideas to Neural Networks ( i.e. Bayesian Deep learning) Deep Bayesian learning: Deep learning assists Bayesian learning Deep learning can improve Bayesian learning in the following ways: Improve the modeling flexibility by using neural networks in the construction of Bayesian models Improve the inference and scalability of these methods by parameterizing the posterior way of using neural networks Empathizing inference over multiple runs These can be seen in the following projects showcased by Yee: Concrete VAEs(Variational Autoencoders) FIVO: Filtered Variational Objectives Concrete VAEs What are VAEs? All the qualities mentioned above, i.e. improving modeling flexibility, improving inference and scalability, and empathizing inference over multiple runs by using neural networks can be seen in a class of deep generative models known as VAE (Variational Autoencoders). Fig: Variational Autoencoders VAEs include latent variables that describe the contents of a scene i.e objects, pose. The relationship between these latent variables and the pixels have to be highly complex and nonlinear. So, in short, VAEs are used to parameterize generative and variable posterior distribution that allows for greater scope flexible modeling. The key that makes VAEs work is the reparameterization trick Fig: Adding reparameterization to VAEs The reparameterization trick is crucial to the continuous latent variables in the VAEs. But many models naturally include discrete latent variables. Yee suggests application of the reparameterization on the discrete latent variables as a work around. This brings us to the concept of Concrete VAEs.. CONtinuous relaxation of disCRETE distributions.Also, the density can be further calculated: This concrete distribution is the reparameterization trick for discrete variables which helps in calculating the KL divergence that is needed for variational inference. FIVO: Filtered Variational Objectives FIVO extends VAEs towards models for sequential and time series data. It is built upon another extension of VAEs known as Importance Weighted Autoencoder, a generative model with a similar as that of the VAE, but which uses a strictly tighter log-likelihood lower bound. Variational lower bound: Rederivation from importance sampling: Better to use multiple samples: Using Importance Weighted Autoencoders we can use multiple sampling, with which we can get a tighter lower bound and optimizing this lower bound should lead to better learning. Let’s have a look at the FIVO objectives: We can use any unbiased estimator p(X) of marginal probabilityTightness of bound related to variance of estimatorFor sequential models, we can use particle filters which produce unbiased estimator of marginal probability. They can also have much lower variance than importance samplers. Bayesian Deep learning: Bayesian approach for deep learning gives us counterintuitive and surprising ways to make deep learning scalable. In order to explore the potential of Bayesian learning with deep neural networks, Yee introduced a project named, The posterior server. 
The Posterior server The posterior server is a distributed server for deep learning. It makes use of the Bayesian approach in order to make neural networks highly scalable. This project focuses on Distributed learning, where both the data and the computations can be spread across the network. The figure above shows that there are a bunch of workers and each communicates with the parameter server, which effectively maintains the authoritative copy of the parameters of the network. At each iteration, each worker obtains the latest copy of the parameter from the server, computes the gradient update based on its data and sends it back to the server which then updates it to the authoritative copy. So, communications on the network tend to be slower than the computations that can be done on the network. Hence, one might consider multiple gradient steps on each iteration before it sends the accumulated update back to the parameter server. The problem is that the parameter and the worker quickly get out of sync with the authoritative copy on the parameter server. As a result, this leads to stale updates which allow noise into the system and we often need frequent synchronizations across the network for the algorithm to learn in a stable fashion. The main idea here in Bayesian context is that we don't just want a single parameter, we want a whole distribution over them. This will then relax the need for frequent synchronizations across the network and hopefully lead to algorithms that are robust to last frequent communication. Each worker is simply going to construct its own tractable approximation to his own likelihood function and send this information to the posterior server which then combines these approximations together to form the full posterior or an approximation of it. Further, the approximations that are constructed would be based on the statistics of some sampling algorithms that happens locally on that worker. The actual algorithm includes a combination of the variational algorithms, Stochastic Gradient EP and the Markov chain Monte Carlo on the workers themselves. So the variational part in the algorithm handles the communication part in the network whereas the MCMC part handles the sampling part that is posterior to construct the statistics that the variational part needs. For scalability, a stochastic gradient Langevin algorithm which is a simple generalization of the SGT, which includes additional injected noise, to sample from posterior noise. To experiment with this server, it was trained densely connected neural networks with 500 reLU units on MNIST dataset. You can have a detailed understanding of these examples in the keynote video. This interface between Bayesian learning and deep learning is a very exciting frontier. Researchers have brought management of uncertainties within deep learning. Also, flexibility and scalability in Bayesian modeling. Yee concludes with two questions for the audience to think about. Does being Bayesian in the space of functions makes more sense than being Bayesian in the sense of parameters? How to deal with uncertainties under model misspecification?    
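To ground the reparameterization ideas discussed above, here is a small NumPy sketch written by us as an illustration rather than taken from the talk. The first function is the standard Gaussian reparameterization used in VAEs (z = mu + sigma * eps); the second draws a sample from a Concrete (Gumbel-Softmax) relaxation of a categorical distribution, the trick behind Concrete VAEs for discrete latent variables. The temperature, dimensions, and probabilities are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize_gaussian(mu, log_var):
    """z = mu + sigma * eps with eps ~ N(0, 1), so gradients can flow through mu and sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def sample_concrete(logits, temperature=0.5):
    """Differentiable relaxation of a categorical sample (Concrete / Gumbel-Softmax)."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + gumbel) / temperature
    return np.exp(y - y.max()) / np.exp(y - y.max()).sum()     # softmax of perturbed logits

# Continuous latent: a 4-dimensional Gaussian code, as in a standard VAE
z = reparameterize_gaussian(mu=np.zeros(4), log_var=np.zeros(4))

# Discrete latent: a relaxed, nearly one-hot sample over 3 categories
probs = sample_concrete(logits=np.log(np.array([0.7, 0.2, 0.1])))

print(np.round(z, 2), np.round(probs, 2))
```

Lowering the temperature makes the Concrete sample look more like a hard one-hot draw, while keeping it differentiable so that the variational objective can still be optimized with gradient descent.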

NIPS 2017 Special: How machine learning for genomics is bridging the gap between research and clinical trial success by Brendan Frey

Sugandha Lahoti
14 Dec 2017
10 min read
Brendan Frey is the founder and CEO of Deep Genomics and a professor of engineering and medicine at the University of Toronto. His major work focuses on using machine learning to model genome biology and understand genetic disorders. This article attempts to bring our readers to Brendan's keynote speech at NIPS 2017. It highlights how the human genome can be reprogrammed using machine learning and gives a glimpse into some of the significant work going on in this field. After reading this article, head over to the NIPS Facebook page for the complete keynote. All images in this article come from Brendan's presentation slides and do not belong to us.

In their lifetime, 65% of people are at risk of acquiring a disease with a genetic basis, and an estimated 8 million births per year have a serious genetic defect. According to the US healthcare system, the average lifetime cost of such a birth is $5M per child. These are just statistics; if we add the emotional component, they paint an alarming picture of the state of the healthcare industry today. According to a recent study, investing in pharma is no longer as lucrative as it was in the 90s. Funding for this sector is dwindling, which acts as a barrier to drug discovery, trials, and deployment. All of this, in turn, adds to the rising cost of healthcare. Better to stuff your money in a mattress than put it in a pharmaceutical company!

Genomics as a field is rich in data. Experts in genomics strive to determine complete DNA sequences and perform genetic mapping to help understand a disease. However, the main problem confronting genome biology and genomics is the inability to decipher information from the human genome, i.e. how to convert the genome into actionable information.

What genes are made of and why sequencing matters

Essentially, each gene consists of a promoter region, which activates the gene. Following the promoter region, there are alternating exons and introns. Introns are almost 10,000 nucleotides long; exons are relatively short, around 100 nucleotides long. In software terms, you can think of exons as print statements: exons are the part that ends up in proteins, while introns get cut out. However, introns contain crucial control logic: there are words embedded in introns that tell the cells how to cut and paste the exons together and make the gene product.

A DNA sequence is transcribed into RNA, and the RNA is then processed in various ways and translated into proteins. The real picture is much more complicated: proteins go back and interact with the DNA, proteins interact with RNA, and RNA interacts with proteins. All these entities are interrelated, and these interrelationships make biology too complex for a researcher, or even a group of researchers, to fully understand and make sense of the data. Another way to look at this is that in recent years our ability to measure biology (fitbits, genomes, and other measurements) and our ability to alter biology (DNA editing) have far surpassed our ability to understand biology. In short, in this field we have become very good at collecting data but not as good at interpreting it.

Machine Learning brought to genomes

Deep Genomics is a genetic medicine company that uses an AI-driven platform to support geneticists, molecular biologists, and chemists in the development of genetic therapies.
In 2010, Frey's group used machine learning to understand how the words embedded in introns control splicing, the "print statements" that put exons into proteins. They also used machine learning to reverse engineer and infer those code words from datasets.

Another research project looked at protein-DNA binding data. There are datasets which let you measure interactions between proteins and DNA and understand how they work. In this research, the team took a dataset from Ray et al. 2013, consisting of 240,000 designed sequences, and evaluated which proteins each sequence likes to stick to, generating a big data matrix of proteins and designed sequences. The machine learning task was to learn to take a sequence and predict whether a given protein will bind to that sequence.

How was this done? They took batches of designed sequences and fed them into a convolutional neural network. The CNN swept across the sequences to generate an intermediate representation, which was then fed through further convolutional, pooling, and fully connected layers to produce the output. The output was compared to the measurements (the data matrix of proteins and designed sequences described earlier) and backpropagation was used to update the parameters. One of the challenges was figuring out the right metric: they compared the measured binding affinity (how strongly the protein sticks to the sequence) to the output of the neural network and determined the right cost function for producing a neural network that is useful in practice.

Use case

One of the use cases of this neural network is to identify pathological mutations and fix them. The illustration above shows a sequence from the cholesterol gene. The researchers artificially, in silico, looked at every possible mutation in the promoter: for each nucleotide, say one with the value A, they switched it to G, C, and T, and for each of those possibilities they ran the entire promoter through the neural network and looked at its output. The neural network then predicted which mutations would disrupt protein binding. The heights of the letters show the measured binding affinity, i.e. the output of the neural network, and the white boxes display how much each mutation changed the output: pink or bright red for a positive mutation, blue for a negative mutation, and white for no change. This map was then compared with known results to check accuracy, and also to make predictions never seen before in a clinical trial. As shown in the image, the blues, which are the potential or known harmful mutations, have correctly fallen in the white spaces, but there are some previously unknown mutations as well. Machine learning output such as this can help researchers narrow their focus when studying new diseases, and also help in diagnosing and treating existing ones (a small sketch of this scanning procedure follows below).
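The in silico saturation mutagenesis procedure described above is easy to sketch in code. The following is an illustrative Python sketch, not the team's actual pipeline: the scoring function stands in for the trained CNN, and the sequence is a toy example.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """One-hot encode a DNA string into a (len(seq), 4) matrix."""
    m = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        m[i, BASES.index(base)] = 1.0
    return m

def saturation_mutagenesis(seq, score_fn):
    """Score every single-nucleotide substitution relative to the original sequence.

    score_fn: any model mapping a one-hot sequence to a binding score
              (here a stand-in for the trained CNN described above).
    Returns a dict {(position, new_base): score_change}.
    """
    baseline = score_fn(one_hot(seq))
    effects = {}
    for pos, original_base in enumerate(seq):
        for new_base in BASES:
            if new_base == original_base:
                continue
            mutated = seq[:pos] + new_base + seq[pos + 1:]
            effects[(pos, new_base)] = score_fn(one_hot(mutated)) - baseline
    return effects

# Toy usage with a dummy scoring function in place of the CNN
dummy_score = lambda x: float(x[:, 2].sum())   # pretends G-content is the binding score
print(saturation_mutagenesis("ACGTGC", dummy_score))
```

Large positive or negative score changes flag the positions a researcher might want to inspect first, which is the spirit of the heat map described above.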
Another group of researchers used a neural network to figure out the 3D structure, or chromatin interaction structure, of DNA. The data was in matrix form and showed how strongly two parts of the DNA are likely to interact. The researchers trained a multilayer convolutional network that takes as input the raw DNA sequence along with a signal called chromatin accessibility (which indicates how accessible the DNA is). The output of the system predicted the probability of contact, which is crucial for gene expression.

Deep Genomics: Using AI to build a new universe of digital medicines

The founding belief at Deep Genomics is that the future of medicine will rely on artificial intelligence, because biology is too complex for humans to understand. The goal of Deep Genomics is to build an AI platform for detecting and treating genetic disease. Its work spans three areas:

Genome tools: Genome processing tools help in the identification of mutations, e.g. DeepVariant. The tool used at Deep Genomics, called genomic kit, is 20 to 800 times faster than other existing tools.
Disease mechanism prediction: Figuring out whether a mutation drives a pathological disease mechanism or is benign, for example one that simply changes hair color.
Therapeutic development: Helping patients by providing them with better medicines.

These are the basics of any drug development procedure. We start with patient genetic data and clinical mutations. Then we find the disease mechanism and figure out the mechanism of action (the steps to remediate the problem); note that the disease mechanism and the mechanism of action of a potential drug may not be the inverse of one another. The next step is to design a drug. With digital medicines, if we know the mechanism of action we are trying to achieve, and we have ML systems like the ones described earlier, we can simulate the effects of modifying DNA or RNA and thus design, in silico, the compound we want to test. Next comes experimental work in the wet lab, to see whether the compound actually alters things in the way the ML systems predicted. Then come toxicity and off-target effects: evaluating whether the compound will change some other part of the genome or have unintended consequences. Next come clinical trials, where one of the biggest problems facing pharmaceutical companies is patient stratification. Finally come the marketing and distribution of the drug, which are highly costly: marketing strategies to convince people to buy the drug, insurance companies to pay for it, and legal teams to deal with litigation.

Here's how long it took Ionis and Biogen to develop Spinraza, a drug for treating Spinal Muscular Atrophy (SMA). It is the most effective drug for SMA and has already saved hundreds of lives. However, it costs $750,000 per child per year. Why does it cost so much? If we look at the timeline of Spinraza's development, the initial period of research and testing was very long. The goal of Deep Genomics is to use ML to cut the research period of drugs such as Spinraza from 8 years down to a couple of years. They also aim to use AI to accelerate clinical trials, toxicity studies, and other aspects of drug development; the whole idea is to reduce the time needed to develop a drug. Deep Genomics uses AI to automate and accelerate each of these steps and make them fast and accurate. Apart from AI, they also test compounds on human cells in their wet lab to see if they work, and they use a cloud laboratory: a Python script specifying the experimental protocol is uploaded, and robots then conduct the experiments. Such labs rapidly scale up the ability to run experiments, test compounds, and solve other problems.

Earning the trust of stakeholders

One of the major issues ML systems face in the genomics industry is earning the trust of stakeholders.
These stakeholders include patients, the physicians treating them, the insurance companies paying for treatment, various technology providers, and hospitals. Machine learning practitioners are also often criticized for producing black boxes that are not open to interpretation. The way to gain this trust is to figure out exactly what these stakeholders need. For this, machine learning systems need to explain the intermediate steps of a prediction. For instance, instead of directly recommending a double mastectomy, the system should say: you have a mutation, the mutation is going to cause splicing to go wrong, leading to a malfunctioning protein, which is likely to lead to breast cancer, and the likelihood is x%.

The road ahead

Researchers at Deep Genomics are currently working primarily on Project Saturn. The idea is to use a machine learning system to scan a vast space of 69 billion molecules, all in silico, and identify about a thousand active compounds. Active compounds allow us to manipulate cell biology; think of them as 1,000 control switches that can be turned and twisted to adjust what is going on inside a cell, a toolkit for therapeutic development. They plan to have 3 compounds in clinical trials within the next 3 years.

NIPS 2017 Special: 6 Key Challenges in Deep Learning for Robotics by Pieter Abbeel

Aaron Lazar
13 Dec 2017
10 min read
Pieter Abbeel is a professor at UC Berkeley and a former Research Scientist at OpenAI. His current research focuses on robotics and machine learning, with a particular emphasis on deep reinforcement learning, deep imitation learning, deep unsupervised learning, meta-learning, learning-to-learn, and AI safety. This article attempts to bring our readers to Pieter's fantastic keynote speech at NIPS 2017. It talks about the implementation of deep reinforcement learning in robotics, the challenges that exist, and how these challenges can be overcome. Once you've been through this article, we're certain you'll be extremely interested in watching the entire video on the NIPS Facebook page. All images in this article come from his presentation slides and do not belong to us.

Robotics and ML have been growing by leaps and bounds, with several companies investing huge amounts to tie the two technologies together in the best way possible. However, there are still several aspects of AI robotics that are not thoroughly solved. Here are a few of them:

Maximize Signal Extracted from Real World Experience
Faster/Data Efficient Reinforcement Learning
Long Horizon Reasoning
Taskability (Imitation Learning)
Lifelong Learning (Continuous Adaptation)
Leverage Simulation

Maximize signal extracted from real world experience

We need more real world data, so we need to extract as much signal from it as possible. The diagram below shows the different layers of machine learning that engineers work with. Some engineers look at the entire cake and train the agent to learn both from the reward and from auxiliary signals, because using only reinforcement learning does not give you a lot of signal. Is there, then, a way of defining a reward signal in RL that ties more reinforcement learning into the system?

There is: Hindsight Experience Replay. The idea is to get a reward signal from any experience by assuming the goal equals whatever happened, not just from successes as in usual RL. For this, we assume that whatever the agent ends up doing is a success. We use Q-learning, and instead of a standard Q-function we condition on multiple goals, even though they were not really the goal when the agent was acting. A replay buffer collects experience, Q-learning is applied, and a hindsight replay is performed to infuse a new reward for everything the agent has done (a minimal sketch of this relabelling step follows below). For various robotic tasks like pushing, sliding, and pick-and-place, this does very well.
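The following is a minimal Python sketch of that hindsight relabelling step. It is illustrative rather than the original implementation: the transition format, the 'final' goal-substitution strategy, and the sparse reward function are assumptions.

```python
import numpy as np

def her_relabel(episode, reward_fn, strategy="final"):
    """Relabel an episode's transitions with achieved goals (hindsight).

    episode: list of dicts with keys 'obs', 'action', 'achieved_goal',
             'desired_goal', 'next_obs'.
    reward_fn: recomputes the reward of an achieved goal against a substituted goal.
    strategy: 'final' substitutes the goal actually reached at the end of the episode.
    Returns the original transitions plus the relabelled copies.
    """
    final_goal = episode[-1]["achieved_goal"]
    relabelled = []
    for t in episode:
        new_goal = final_goal if strategy == "final" else t["achieved_goal"]
        relabelled.append({
            **t,
            "desired_goal": new_goal,
            # a success by construction for the last transition under 'final'
            "reward": reward_fn(t["achieved_goal"], new_goal),
        })
    return list(episode) + relabelled

# A sparse goal-reaching reward: 0 when within tolerance of the goal, else -1
sparse_reward = lambda achieved, goal: (
    0.0 if np.linalg.norm(np.asarray(achieved) - np.asarray(goal)) < 0.05 else -1.0
)
```

The relabelled transitions then go back into the replay buffer, so the Q-learning update always sees some successful experience, even when the original goal was never reached.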
Faster Reinforcement Learning

When we talk about faster RL, we are talking about much more data efficient RL. Here is a diagram that demonstrates standard RL: an agent makes a robot perform an action in a particular environment or situation in order to achieve a reward, and the goal is to maximize that reward. Unlike supervised learning, there is no supervision telling the agent whether the actions it took were right or wrong. That brings a few additional challenges in RL:

Credit assignment: a major problem, and where the signal in RL comes from
Stability: because of the feedback loop, the system could destabilize and destroy itself
Exploration: doing things you have never done before, when the only way to learn is based on what you have done before

Despite this, there have been great improvements in reinforcement learning in the past few years, enabling AI systems to play games like Go and Dota; it has also been used by NASA in building robots for planetary exploration.

But the question remains: how good is this learning? In the game of Pong, a human takes roughly 2 hours to learn what a Deep Q-Network (DQN) learns in 40 hours. A more careful study reveals that after 15 minutes, humans tend to outperform a DDQN that has been trained for 115 hours. This is a tremendous gap in learning efficiency. So how do we overcome the challenge?

Several fully general algorithms are available, such as Trust Region Policy Optimization (TRPO), DQN, Asynchronous Advantage Actor-Critic (A3C), and Rainbow, meaning they can be applied to any kind of environment. However, only a very small subset of environments is actually encountered in the real world. Can we develop fast RL algorithms that take advantage of this? An RL algorithm can be reused to train various policies: it is developed to train a policy that adapts to a particular environment A, and can then be applied to environment B, and so on. Humans develop the RL algorithm and then rely on it to train the policy, yet none of these algorithms are as good as human learners. Do we have an alternative? Indeed, yes: why not let the system learn not just the policy but the algorithm as well, or in other words, the entire agent?

Enter Meta-Reinforcement Learning

In meta-RL, the learning algorithm itself is being learnt. You can relate this to meta-programming, where one program is trained to write another. This process helps a system learn about the world so it can pick up a new situation more quickly. How does it work? The system is faced with many environments, learns the algorithm, and then outputs a faster RL agent; when faced with a new environment, that agent adapts to it quickly.

To evaluate actual performance, consider the multi-armed bandit problem. Here is the setting: each bandit has its own distribution over payouts, and in each episode you can choose one bandit. A good RL agent should explore a sufficient number of bandits and exploit the best ones; in other words, we need an algorithm that pulls arms with a high probability of payoff rather than a low one. Several asymptotically optimal algorithms, such as the Gittins index, UCB1, and Thompson sampling, already exist for this problem. Here is a comparison of some of them with the meta-RL algorithm. The result is quite impressive: the meta-RL algorithm is competitive with the Gittins index. In a task where the goal is to run in a target direction while attaining maximum speed, the agent, when dropped into a new environment, is able to master the task almost instantly.

However, meta-learning succeeds only about two-thirds of the time. It fails the rest of the time for two main reasons:

Overfitting: the system tends to overfit to the current situation rather than fitting situations generically
Underfitting: when you do not get enough signal to obtain any rewards

The solution is to put a different structure underneath the system. Instead of using an RNN, we can use a WaveNet-like architecture or the Simple Neural Attentive Meta-Learner (SNAIL). SNAIL performs a bit better than RL2 on the same bandits problem.
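As a reference point for the bandit setting above, here is a small Python sketch of Thompson sampling, one of the classical baselines mentioned. Bernoulli payouts and the particular arm probabilities are assumptions made purely for illustration.

```python
import numpy as np

def thompson_sampling(true_probs, n_rounds, rng=None):
    """Thompson sampling for a Bernoulli multi-armed bandit.

    true_probs: the (hidden) payout probability of each arm.
    Keeps a Beta posterior per arm and pulls the arm whose sampled mean is highest.
    """
    rng = rng or np.random.default_rng(0)
    n_arms = len(true_probs)
    successes = np.ones(n_arms)   # Beta(1, 1) uniform prior
    failures = np.ones(n_arms)
    total_reward = 0.0
    for _ in range(n_rounds):
        sampled_means = rng.beta(successes, failures)   # one posterior sample per arm
        arm = int(np.argmax(sampled_means))             # explore/exploit via sampling
        reward = float(rng.random() < true_probs[arm])  # pull the arm
        successes[arm] += reward
        failures[arm] += 1.0 - reward
        total_reward += reward
    return total_reward

print(thompson_sampling([0.2, 0.5, 0.7], n_rounds=1000))
```

A meta-RL agent trained on many bandit tasks has to discover this kind of explore/exploit behaviour on its own, which is why matching such hand-designed baselines is notable.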
Longer Horizon Reasoning

We need to learn to reason over longer horizons than canonical algorithms do, and for this we need hierarchy. For example, suppose a robot has to perform 10 tasks in a day; at the task level, that is 10 time steps per day. Each of these 10 tasks has subtasks under it, which might bring the total to 1,000 time steps. To perform these tasks, the robot needs footstep planning, which amounts to around 100,000 time steps, and footsteps in turn require commands to be sent to motors, which brings it to around 100,000,000 time steps. This is a very long horizon. We can formulate this as a meta-learning problem: the agent has to solve a distribution of related long-horizon tasks, with the goal of learning new tasks in the distribution quickly. If that is our objective, hierarchy falls out naturally.

Taskability (Imitation Learning)

There are several things we want from robots. We need to be able to tell them what to do, and we can do this by giving them examples. This is called imitation learning, and it has been applied successfully to a variety of use cases. The idea is to collect many demonstrations, train something from those demonstrations, and then deploy the learned policy. The problem is that every time there is a new task, you start from scratch. The solution is to learn from experience across several demonstrations, as humans do: instead of running the agent through several demos of the same task, it is trained on one demonstration and then shown a frame of a second demo, from which it must predict the outcome. This is known as one-shot imitation learning, a supervised learning setup in which many demonstrations are used to train a system that can then handle a new environment it is put into.

Lifelong Learning (Continuous Adaptation)

What we usually do in ML can be divided into two broad steps: run machine learning, then deploy it. This is the canonical way: all the learning happens ahead of time, before deployment. However, in real world cases, what you learn from past data might not work in the future. There is a need to keep learning during deployment, which is the spirit of lifelong learning. This brings us to continuous adaptation: can we train an agent to be good in non-stationary environments? We need to check, at meta-training time, whether the agent is able to adapt to a new or changing task. We can try changing the dynamics, since it is hard to do ML training in the real world. We can also use competitive environments, where your agent shares the environment with other agents that are trying to beat it; the only way to succeed is to continuously adapt more quickly than the others.

Leverage Simulation

Simulation is very helpful and not that expensive: it is fast, scalable, and makes labeling easier. The challenge is how to get useful things out of the simulator. One approach is to build realistic simulators, which is quite expensive. Another is to use a close-enough simulator together with a small amount of real world data, through domain confusion or domain adaptation, which has been quite successful. A further approach is domain randomization, which is also working well in the real world: if the model sees enough simulated variations, the real world may appear to be just the next simulator. This has worked for training a quadcopter to avoid collisions using only simulator data. Moreover, whether pre-trained from ImageNet or trained purely in simulation, the performance was similar after around 8,000 examples.
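Domain randomization is straightforward to sketch. Below is a small, purely illustrative Python example of sampling randomized simulator configurations; the parameter names and ranges are assumptions and would map onto whatever a real simulator actually exposes (masses, friction, lighting, textures, and so on).

```python
import random

def sample_randomized_sim_config(rng):
    """Sample one randomized simulator configuration for domain randomization.

    Every training episode gets a fresh configuration so the policy never
    overfits to a single simulated world.
    """
    return {
        "object_mass_kg": rng.uniform(0.1, 2.0),
        "friction_coefficient": rng.uniform(0.3, 1.2),
        "light_intensity": rng.uniform(0.2, 1.5),
        "camera_jitter_deg": rng.uniform(-5.0, 5.0),
        "texture_id": rng.randrange(1000),
    }

# Train across many randomized worlds so the real world looks like just another sample
for seed in range(5):
    print(sample_randomized_sim_config(random.Random(seed)))
```

The design choice is deliberate breadth over realism: rather than engineering one high-fidelity simulator, the policy is forced to be robust to a wide spread of cheap variations.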
To conclude, the beauty of meta-learning is that it enables the discovery of algorithms that are data-driven, as opposed to those created from pure human ingenuity. This requires more compute power, but several companies like Nvidia and Intel are working hard to overcome this challenge, which will surely power meta-learning to great heights in robotics. While we figure out the technical challenges of incorporating AI in robotics mentioned above, some other significant challenges we must focus on in parallel include safe learning and value alignment, among others.

20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017

Aarthi Kumaraswamy
08 Dec 2017
9 min read
Kate Crawford is a Principal Researcher at Microsoft Research and a Distinguished Research Professor at New York University. She has spent the last decade studying the social implications of data systems, machine learning, and artificial intelligence. Her recent publications address data bias and fairness, and the social impacts of artificial intelligence, among others. This article attempts to bring our readers to Kate's brilliant keynote speech at NIPS 2017. It talks about different forms of bias in machine learning systems and ways to tackle such problems. By the end of this article, we are sure you will want to listen to her complete talk on the NIPS Facebook page. All images in this article come from Kate's presentation slides and do not belong to us.

The rise of machine learning is every bit as far reaching as the rise of computing itself. A vast new ecosystem of techniques and infrastructure is emerging in the field of machine learning, and we are just beginning to learn its full capabilities. But alongside the exciting things people can do, some really concerning problems are arising. Forms of bias, stereotyping, and unfair determination are being found in machine vision systems, object recognition models, and in natural language processing and word embeddings. High profile news stories about bias have been on the rise, from women being less likely to be shown high paying jobs, to gender bias in object recognition datasets like MS COCO, to racial disparities in educational AI systems.

20 lessons on bias in machine learning systems

Interest in the study of bias in ML systems has grown exponentially in just the last 3 years, more than doubling in the last year alone.
We are speaking different languages when we talk about bias: it means different things to different people and groups, e.g. in law, in machine learning, in geometry. Read more on this in the 'What is bias?' section below.
In the simplest terms, for the purpose of understanding fairness in machine learning systems, we can consider bias as a skew that produces a type of harm.
Bias in MLaaS is harder to identify and correct, as we do not build these systems from scratch and are not always privy to how they work under the hood.
Data is not neutral, and data cannot always be neutralized.
There is no silver bullet for solving bias in ML and AI systems.
There are two main kinds of harms caused by bias: harms of allocation and harms of representation. The former takes an economically oriented view, while the latter is more cultural.
Allocative harm is when a system allocates or withholds an opportunity or resource from certain groups. To know more, jump to the 'Harms of allocation' section.
When systems reinforce the subordination of certain groups along lines of identity like race, class, and gender, they cause representational harm. This is elaborated further in the 'Harms of representation' section.
Harm can further be classified into five types: stereotyping, recognition, denigration, under-representation, and ex-nomination.
There are many technical approaches to dealing with bias in a training dataset, such as scrubbing to neutral and demographic sampling, among others, but they all still suffer from bias. For example, who decides what is 'neutral'?
When we consider bias purely as a technical problem, which is hard enough, we are already missing part of the picture.
Bias in systems is commonly caused by bias in training data.
We can only gather data about the world we have, which has a long history of discrimination, so the default tendency of these systems is to reflect our darkest biases.
Structural bias is a social issue first and a technical issue second. If we are unable to consider both and see it as inherently socio-technical, these problems of bias are going to continue to plague the ML field.
Instead of just thinking about ML contributing to decision making in, say, hiring or criminal justice, we also need to think about the role of ML in the harmful representation of human identity.
While technical responses to bias are very important, and we need more of them, they won't get us all the way to addressing representational harms to group identity.
Representational harms often exceed the scope of individual technical interventions.
Developing theoretical fixes from the tech world for allocational harms is necessary but not sufficient.
The ability to move outside our disciplinary boundaries is paramount to cracking the problem of bias in ML systems.
Every design decision has consequences and powerful social implications.
Datasets reflect not only the culture but also the hierarchy of the world in which they were made, and our current datasets stand on the shoulders of older datasets, building on earlier corpora.
Classifications can be sticky, and sometimes they stick around longer than we intend them to, even when they are harmful.
ML can easily be deployed in contentious forms of categorization that could have serious repercussions, e.g. a supposedly "free-of-bias" criminality detector that has physiognomy at the heart of how it predicts the likelihood of a person being a criminal based on their appearance.

What is bias?

14th century: an oblique or diagonal line
16th century: undue prejudice
20th century: systematic differences between the sample and a population
In ML: underfitting (low variance and high bias) vs overfitting (high variance and low bias)
In law: judgments based on preconceived notions or prejudices, as opposed to the impartial evaluation of facts. Impartiality underpins jury selection, due process, limitations placed on judges, etc.

Bias is hard to fix with model validation techniques alone, so you can have an unbiased system in the ML sense that still produces a biased result in the legal sense. Here, bias is a skew that produces a type of harm.

Where does bias come from?

Commonly from training data, which can be incomplete, biased, or otherwise skewed. It can draw from non-representative samples that are wholly defined before use. Sometimes the bias is not obvious because the data was constructed in a non-transparent way. Beyond human labeling, there are other ways human biases and cultural assumptions can creep in, ending up in the exclusion or overrepresentation of subpopulations. A case in point: stop-and-frisk program data used as training data by an ML system; this dataset was biased due to systemic racial discrimination in policing.

Harms of allocation

The majority of the literature understands bias as harms of allocation. Allocative harm is when a system allocates or withholds an opportunity or resource from certain groups. It is a primarily economically oriented view: for example, who gets a mortgage or a loan. Allocation is immediate; it is a time-bound moment of decision making, and it is readily quantifiable. In other words, it raises questions of fairness and justice in discrete and specific transactions.

Harms of representation

It gets trickier when it comes to systems that represent society but don't allocate resources.
These are representational harms: when systems reinforce the subordination of certain groups along lines of identity like race, class, and gender. It is a long-term process that affects attitudes and beliefs, it is harder to formalize and track, and it is a diffuse depiction of humans and society. It is at the root of all the other forms of harm.

5 types of representational harms

Source: Kate Crawford's NIPS 2017 Keynote presentation: Trouble with Bias

Stereotyping: A 2016 paper on word embeddings looked at gender-stereotypical associations and the distances between gender pronouns and occupations. Google Translate swaps the genders of pronouns even when translating from a gender-neutral language like Turkish.

Recognition: When a group is erased or made invisible by a system. In a narrow sense it is purely a technical problem, i.e. does a system recognize a face inside an image or video? In the broader sense it is about respect, dignity, and personhood: a failure to recognize someone's humanity, and whether the system works for you at all. Examples include a system that could not process darker skin tones, Nikon's camera software mischaracterizing Asian faces as blinking, and HP's algorithms having difficulty recognizing anyone with a darker shade of pale.

Denigration: When people are given culturally offensive or inappropriate labels, e.g. the autosuggestions that appeared when people typed 'jews should'.

Under-representation: An image search for 'CEOs' yielded only one woman CEO, at the bottom of the page; the majority were white men.

Ex-nomination: When a dominant category is treated as the unmarked default and so never named.

Technical responses to the problem of bias include improving accuracy, blacklists, scrubbing to neutral, demographic or equal representation, and awareness.

Politics of classification

Where did identity categories come from? What if bias is a deeper and more consistent issue with classification?

Source: Kate Crawford's NIPS 2017 Keynote presentation: Trouble with Bias

The fact that bias issues keep creeping into our systems and manifesting in new ways suggests that classification is not simply a technical issue but a social issue as well, one that has real consequences for the people being classified. There are two themes: classification is always a product of its time, and we are currently in the biggest experiment in classification in human history. For example, the Labeled Faces in the Wild dataset is 77.5% male and 83.5% white; an ML system trained on this dataset will work best for that group.

What can we do to tackle these problems?

Start working on fairness forensics: test our systems, e.g. build pre-release trials to see how a system works across different populations, and track the life cycle of a training dataset to know who built it and what its demographic skews might be.
Start taking interdisciplinarity seriously: work with people who are not in our field but have deep expertise in other areas, e.g. the FATE (Fairness, Accountability, Transparency, Ethics) group at Microsoft Research, and build spaces for collaboration like the AI Now Institute.
Think harder about the ethics of classification.

The ultimate question for fairness in machine learning is this: who is going to benefit from the system we are building, and who might be harmed?
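As a concrete illustration of the word-embedding association measurement mentioned under Stereotyping above, here is a small Python sketch. It is not from the talk or from the 2016 paper; the toy vectors and the he-she direction are illustrative stand-ins for real pretrained embeddings.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_association(embeddings, occupations):
    """Project occupation vectors onto a he-minus-she gender direction.

    embeddings: dict mapping words to vectors (in practice loaded from
    pretrained word2vec/GloVe files). A positive score means the occupation
    sits closer to 'he', a negative score closer to 'she'.
    """
    direction = embeddings["he"] - embeddings["she"]
    return {w: cosine(embeddings[w], direction) for w in occupations}

# Toy 3-d vectors purely for illustration
toy = {
    "he": np.array([1.0, 0.1, 0.0]),
    "she": np.array([-1.0, 0.1, 0.0]),
    "programmer": np.array([0.7, 0.5, 0.2]),
    "homemaker": np.array([-0.6, 0.4, 0.3]),
}
print(gender_association(toy, ["programmer", "homemaker"]))
```

Measurements like this make stereotypical associations visible, but, as the talk argues, quantifying the skew is only the first step and not a complete remedy.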

3 great ways to leverage Structures for Machine Learning problems by Lise Getoor at NIPS 2017

Sugandha Lahoti
08 Dec 2017
11 min read
Lise Getoor is a professor in the Computer Science Department at the University of California, Santa Cruz. She has a PhD in Computer Science from Stanford University and has spent a lot of time studying machine learning, reasoning under uncertainty, databases, data science for social good, and artificial intelligence. This article attempts to bring our readers to Lise's keynote speech at NIPS 2017. It highlights how structure can be unreasonably effective and the ways to leverage structure in machine learning problems. After reading this article, head over to the NIPS Facebook page for the complete keynote. All images in this article come from Lise's presentation slides and do not belong to us.

Our ability to collect, manipulate, analyze, and act on vast amounts of data is having a profound impact on all aspects of society. Much of this data is heterogeneous in nature and interlinked in a myriad of complex ways. This data is multimodal (it has different kinds of entities), multi-relational (it has different links between things), and spatio-temporal (it involves space and time parameters). This keynote explores how we can exploit the structure that is in the input as well as the output of machine learning algorithms. A large number of structured problems exist in NLP and computer vision, computational biology, computational social science, knowledge graph extraction, and so on. According to Dan Roth, all interesting decisions are structured, i.e. there are dependencies between the predictions.

Most ML algorithms take this nicely structured data, flatten it, and put it in matrix form, which is convenient for our algorithms. However, there are several issues with this. The most fundamental issue with the matrix form is that it makes incorrect independence assumptions. Further, on the output side, we are unable to reason collectively about the predictions we make for different entries in the matrix. Therefore we need ways to talk declaratively about how to transform structure into features. This talk provides patterns, tools, and templates for dealing with structure in both inputs and outputs.

Lise covered three topics for solving structured problems: patterns, tools, and templates. Patterns are used for simple structured problems. Tools help in getting patterns to work and in creating tractable structured problems. Templates build on patterns and tools to solve bigger computational problems.

1. Patterns

Patterns are used for naively simple structured problems, but encoding them repeatedly can increase performance by 5 or 10%. We use logical rules to capture structure: they give an easy way of talking about entities and the links between entities, and they also tend to be interpretable. There are three basic patterns for structured prediction problems: collective classification, link prediction, and entity resolution.

Collective classification

Collective classification is used for inferring the labels of nodes in a graph. The pattern for expressing this in logical rules is:

local-predictor(x, l) → label(x, l)
label(x, l) & link(x, y) → label(y, l)

It is called collective classification because the thing to predict, i.e. the label, occurs on both sides of the second rule.
Let us consider a toy problem: we have to predict the unknown labels (marked in grey), i.e. which political party each unknown person will vote for. We apply logical rules to the problem.

Local rules:

"If X donates to party P, X votes for P"
"If X tweets party P slogans, X votes for P"

Relational rules:

"If X is linked to Y, and X votes for P, Y votes for P"
Votes(X, P) & Friends(X, Y) → Votes(Y, P)
Votes(X, P) & Spouse(X, Y) → Votes(Y, P)

The example above shows local and relational rules applied to a collective classification problem. Adding a collective classifier like this to other problems yields significant improvements.

Link prediction

Link prediction is used for predicting links or edges in a graph. The pattern for expressing this in logical rules is:

link(x, y) & similar(y, z) → link(x, z)

For example, consider a basic recommendation system. We apply the link prediction pattern to express likes and similarities, so that inferring one link gives us information about another link. The rules express:

"If user U likes item1, and item2 is similar to item1, user U likes item2"
Likes(U, I1) & SimilarItem(I1, I2) → Likes(U, I2)
"If user1 likes item I, and user2 is similar to user1, user2 likes item I"
Likes(U1, I) & SimilarUser(U1, U2) → Likes(U2, I)

Entity resolution

Entity resolution is used for determining which nodes refer to the same underlying entity. Here we use local rules about how similar things are, for instance how similar their names or links are:

similar-name(x, y) → same(x, y)
similar-links(x, y) → same(x, y)

There are two collective rules. One is based on transitivity:

same(x, y) & same(y, z) → same(x, z)

The other is based on matching, i.e. dependence on both sides of the rule:

same(x, y) & !same(y, z) → !same(x, z)

Logical rules as described above, though quite helpful, have certain disadvantages: they are intractable, they can't handle inconsistencies, and they can't represent degrees of similarity.

2. Tools

Tools help in making structured problems tractable and in getting patterns to work. These tools come from the statistical relational learning community, and Lise adds another language to this mix: PSL (Probabilistic Soft Logic), a declarative probabilistic logic programming language for expressing collective inference problems. To know more: psl.linqs.org

Predicate = relationship or property
Ground atom = (continuous) random variable
Weighted rules = capture dependency or constraint
PSL program = rules + input DB

PSL makes reasoning scalable by mapping logical inference to convex optimization. The language takes logical rules, assigns weights to them, and then uses them to define a distribution over the unknown variables. One of the striking features is that the random variables take continuous values. The work on PSL turns the disadvantages of logical rules into advantages: the resulting problems are tractable, can handle inconsistencies, and can represent degrees of similarity.
The key idea is to convert the clauses to concave functions; to be tractable, we relax the problem to a concave maximization. PSL has semantics from three different worlds: randomized algorithms from the computer science community, probabilistic graphical models from the machine learning community, and soft logic from the AI community.

Randomized algorithms

In this setting, we have a set of weighted logical rules in clausal form with nonnegative weights. Weighted MAX SAT is the classical problem of finding the assignment to the random variables that maximizes the total weight of the satisfied rules. However, this problem is NP-hard. To overcome this, the randomized algorithms community converts the combinatorial optimization into a continuous optimization by introducing random variables that denote rounding probabilities.

Probabilistic graphical models

Graphical models represent the problem as a factor graph, where we have random variables and rules that are essentially the potential functions. This problem is also NP-hard, so we use a variational inference approximation: we introduce marginal distributions (μ) for the variables and can express a solution if we can find a set of globally consistent assignments for these marginal distributions. The problem is that, although we can express this as a linear program, there is an exponential number of constraints. We use techniques from the graphical models community, particularly local consistency relaxation (LCR), to convert this to a simpler problem. The idea is to relax the search over consistent marginals to a simpler set by introducing local pseudo-marginals over joint potential states. Using the KKT conditions we can optimize out the θ to derive a simplified projected LCR over μ. This approach shows a 16% improvement over canonical dual decomposition (MPLP).

Soft logic

In the soft logic view of the convex optimization, the random variables denote degrees of truth or similarity, and we are essentially trying to minimize the amount of dissatisfaction in the rules. Hence, with three different interpretations, i.e. randomized algorithms, graphical models, and soft logic, we arrive at the same convex optimization. PSL essentially takes a PSL program and some input data and defines a convex optimization problem.

PSL is open source; the code, data, and tutorials are available online at psl.linqs.org. MAP inference in PSL translates into a convex optimization problem, inference is further enhanced with state-of-the-art optimization and distributed graph processing paradigms, and there are learning methods for rule weights and latent variables. Using PSL gives fast as well as accurate results in comparison with other approaches.
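To make the soft-logic relaxation concrete, here is a minimal Python sketch (not PSL's actual API) of how a weighted rule becomes a hinge-loss term under the Łukasiewicz relaxation; the voting rule, the weight, and the soft truth values below are illustrative.

```python
def distance_to_satisfaction(body_values, head_value):
    """Lukasiewicz distance to satisfaction of a rule body_1 & ... & body_k -> head.

    All truth values live in [0, 1]; a result of 0 means the rule is fully satisfied.
    """
    return max(0.0, sum(body_values) - (len(body_values) - 1) - head_value)

def weighted_objective(rules):
    """Sum of weighted squared hinge losses, the kind of convex objective PSL minimizes.

    rules: iterable of (weight, body_values, head_value) triples.
    """
    return sum(w * distance_to_satisfaction(body, head) ** 2 for w, body, head in rules)

# Toy grounding of "Votes(X,P) & Friends(X,Y) -> Votes(Y,P)" with soft truth values
votes_x, friends_xy, votes_y = 0.9, 0.8, 0.3
print(distance_to_satisfaction([votes_x, friends_xy], votes_y))   # 0.4: the rule is partly violated
print(weighted_objective([(2.0, [votes_x, friends_xy], votes_y)]))
```

Because each term is a (squared) hinge of a linear expression in the continuous truth values, summing them over all ground rules yields a convex objective, which is what makes MAP inference tractable.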
3. Templates

Templates build on patterns and tools to solve problems in bigger areas such as computational social science, knowledge discovery, and responsible data science and machine learning.

Computational social science

To explore this area, we apply a PSL model to debate stance classification. Consider an online debate where the topic is climate change. We can use information in the text to figure out whether the people participating in the debate are pro or anti the topic. We can also use information about the dialogue in the discourse, and we can build this into a PSL model based on the collective classification pattern we saw earlier in the post. We get a significant rise in accuracy by using a PSL program. Here are the results.

Knowledge discovery

Using structure and patterns in knowledge discovery really pays off. Although we have information extractors that can pull facts about entities and relationships from the web and other sources, they are usually noisy, so it is difficult to reason about them collectively and figure out which facts we actually want to add to our knowledge base. We can add structure to knowledge graph construction by:

Performing collective classification, link prediction, and entity resolution
Enforcing ontological constraints
Integrating knowledge source confidences
Using PSL to make it scalable

Here is the PSL program for knowledge graph identification. It was evaluated on three real-world knowledge graphs: NELL, MusicBrainz, and Freebase. As shown in the image above, both statistical features and semantic constraints help, but combining them always wins.

Responsible machine learning

Understanding structure can be key to mitigating negative effects and leads to responsible machine learning. The perils of ignoring structure include overlooking privacy: many approaches consider only an individual's attribute data, and some don't take into account what can be inferred from relational context. Another area is fairness. The structure here is often outside the data; it can be in the organizational or socio-economic structure. To enable fairness, we need to implement impartial decision making without bias, and we need to take structural patterns into account. Algorithmic discrimination is another area that can make use of structure: the fundamental structural pattern here is a feedback loop, and having a way of encoding this feedback loop is important for eliminating algorithmic discrimination.

Conclusion

In this article, we saw ways of exploiting structure that remain tractable, along with some tools and templates for doing so. The keynote also highlighted opportunities for machine learning methods that can mix:

Structured and unstructured approaches
Probabilistic and logical inference
Data-driven and knowledge-driven modeling

AI and machine learning developers need to build on the approaches described above to discover and exploit new structure and to create compelling commercial, scientific, and societal applications.