
Tech Guides - Artificial Intelligence

170 Articles

Decoding the Chief Robotics Officer (CRO) Role

Aaron Lazar
20 Dec 2017
7 min read
The world is moving swiftly towards an automated culture, which means more and more machines will enter the fray. There have been umpteen debates on whether this is a good or a bad thing - talk of how fortune tellers might be overtaken by Artificial Intelligence, and so on. Judging from the progress mankind has made by benefiting from these machines, we can safely say, for now, that it has only been a boon. With this explosion of moving, thinking metal, there is a strong need for governance at the top level. It now looks like we need to shift a bit to make more room at the C-level table, because the Master of the Machines is arriving. "Master of the Machines" does sound cool, although not many companies would appreciate the "professionalism" of the term. The point is, the rise of a brand new C-level role, the Chief Robotics Officer, seems to be just on the horizon. As we did in the Chief Data Officer article, we are going to tell you more about this role and its accompanying responsibilities, and by the end you will be able to judge whether your organisation needs a CRO.

Facts and Figures

As far as I can remember, one of the first Chief Robotics Officers (CRO) was John Connor (#challengeme). Jokes apart, the role was introduced at the Chief Robotics Officer (CRO) Summit after being discussed widely in 2015. You have probably heard of this role by another name - Chief Autonomy Officer. Gartner predicts that 10% of large enterprises in supply-chain industries will have created a CRO position by 2020. Cisco states that as many as 60% of industries like logistics, healthcare, manufacturing, and energy will have a CRO by 2025. The next generation of AI and robots will affect the workforce, business models, operations, and competitive position of leading organisations. It is therefore not surprising that the Boston Consulting Group projects that the market for robots will reach $67 billion by 2025.

Why all the fuss?

It is quite evident that robots and smart machines will soon take over or redefine the way a lot of jobs are currently performed by humans. This means that robots will be working alongside humans, and as such there is a need to develop the principles, processes, and disciplines that govern or manage this collaboration. According to Myria Research, "The CROs (and their teams) will be at the forefront of technology, to translate technology fluency into clear business advantages, and to maintain Robotics and Intelligent Operational Systems (RIOS) capabilities that are intimately linked to customer-facing activities, and ultimately, to company performance". With companies like Amazon, Adidas, Crowne Plaza Hotels, and Walmart already investing millions in robots and research to move in the direction of automation, there is clearly a need for a CRO.

What might the Chief Robotics Officer's responsibilities be?

If you search for job listings for the role, you probably won't find any, because the role is still in the making and there are no properly defined responsibilities. Still, if we were to think about what the CRO's responsibilities might be, here is what we could expect:

Piece the puzzle together: CROs will be responsible for bringing business functions like Engineering, HR, and IT together, implementing and maintaining automation technologies within the technical, social, and financial contexts of a company.

Manage the robotics life cycle: The CRO would be responsible for defining and managing different aspects of the robotics life cycle. They will need to identify ways and means to improve how robots function and boost productivity.

Code of conduct: CROs will need to design and develop the principles, processes, and disciplines that will manage robots and smart machines, enabling them to collaborate seamlessly with a human workforce.

Define integration: CROs will define robotic environments and integration touch points with other business functions such as supply chain, manufacturing, and agriculture.

Brand guardians: With the addition of non-humans to the workforce, CROs will be responsible for brand health and for any violations caused by their robots.

Define management techniques: CROs will bridge the gap between machines and humans and will develop techniques that humans can use to manage robotic workers.

On a broader level, these responsibilities look quite similar to those of a Chief Information Officer, a Chief Digital Information Officer, or even a Director of IT.

Key CRO Skills

With robots in place, you might think people-management skills would be needed less - or not. You might assume that a CRO needs only technical skills because of the nature of the job, but they will still have to interact with humans and manage their collaboration with the machines. This brings in the challenge of managing change. Not everyone is comfortable working with machines, and a certain amount of understanding and skill will need to be developed. With brand management and other strategic goals involved, the CRO must stay on their toes, moulding the technological side of the business to achieve short- and long-term goals. IT managers, those in charge of automation, and directors skilled in robotics will be interested in scaling up to the position. On another note, more than 35% of robotics jobs might be vacant by 2020, owing to the rapid growth of the field.

Futuristic Challenges

Some of the major challenges we expect to see involve managing change and an environment where humans and bots work together. The European Union has been considering treating robots as "electronic persons" with rights in the near future, which will lead to complications about who is right and who is wrong. Moreover, there are plans for rewarding and penalising bots based on their performance. How do you penalise a bot? Maybe penalising would come in the form of not charging the bot for a few days, or formatting its memory if it has been naughty. Rewards could come in the form of a software update, or debugging it more frequently. These probably sound silly at the moment, but you never know what the future might have in store.

The Million $ Question: Do we need a CRO?

So far, no company has publicly announced hiring a CRO, although many manufacturing companies already have senior roles related to robotics, such as Vice President of Advanced Automation and Robotics, or Vice President of Advanced Engineering. However, these roles are purely technical, not strategic. It is clear that someone is needed at the high table calling the shots and setting strategy for a collaborative future - a world where robots, machines, and humans work in harmony. Remy Glaisner of Myria Research predicts that CROs will occupy a strategic role on a par with CIOs within the next five to eight years. CIOs might even be replaced by CROs in the long run. You never know - in the future the CRO might work with a bot themselves, the bot helping to take decisions at an organisational and strategic level. The sky's the limit!

In the end, small, medium, or even large-sized businesses that are already planning to hire a CRO to drive automation are on the right track. A careful evaluation of the benefits of having one in your organisation to lead your strategy will help you decide whether or not to take the CRO path. With automation bound to grow in importance in the coming years, strategic representation for people with skills in the field looks inevitable.


Behind the scenes: Deep learning evolution and core concepts

Shoaib Dabir
19 Dec 2017
6 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book by Kuntal Ganguly titled Learning Generative Adversarial Networks. The book will help you build and analyze various deep learning models and apply them to real-world problems.[/box] This article will take you through the history of Deep learning and how it has grown over time. It will walk you through some of the core concepts of Deep Learning like sigmoid activation, rectified linear unit(ReLU), etc. Evolution of deep learning A lot of the important work on neural networks happened in the 80's and in the 90's, but back then computers were slow and datasets very tiny. The research didn't really find many applications in the real world. As a result, in the first decade of the 21st century, neural networks have completely disappeared from the world of machine learning. It's only in the last few years, first seeing speech recognition around 2009, and then in computer vision around 2012, that neural networks made a big comeback with (LeNet, AlexNet). What changed? Lots of data (big data) and cheap, fast GPU's. Today, neural networks are everywhere. So, if you're doing anything with data, analytics, or prediction, deep learning is definitely something that you want to get familiar with. See the following figure: Deep learning is an exciting branch of machine learning that uses data, lots of data, to teach computers how to do things only humans were capable of before, such as recognizing what's in an image, what people are saying when they are talking on their phone, translating a document into another language, helping robots explore the world and interact with it. Deep learning has emerged as a central tool to solve perception problems and it's the state of the art with computer vision and speech recognition. Today many companies have made deep learning a central part of their machine learning toolkit—Facebook, Baidu, Amazon, Microsoft, and Google are all using deep learning in their products because deep learning shines wherever there is lots of data and complex problems to solve. Deep learning is the name we often use for "deep neural networks" composed of several layers. Each layer is made of nodes. The computation happens in the node, where it combines input data with a set of parameters or weights, that either amplify or dampen that input. These input-weight products are then summed and the sum is passed through activation function, to determine what extent the value should progress through the network to affect the final prediction, such as an act of classification. A layer consists of row of nodes that that turn on or off as the input is fed through the network based. The input of the first layer becomes the input of the second layer and so on. Here's a diagram of what neural network might look like: Let's get familiarize with some deep neural network concepts and terminology. Sigmoid activation Sigmoid activation function used in neural network has an output boundary of (0, 1), and α is the offset parameter to set the value at which the sigmoid evaluates to 0. Sigmoid function often works fine for gradient descent as long as input data x is kept within a limit. For large values of x, y is constant. Hence, the derivatives dy/dx (the gradient) equates to 0, which is often termed as the vanishing gradient problem. This is a problem because when the gradient is 0, multiplying it with the loss (actual value - predicted value) also gives us 0 and ultimately networks stops learning. 
Rectified Linear Unit (ReLU)

A neural network can be built by combining linear classifiers with non-linear functions. The Rectified Linear Unit (ReLU) has become very popular in the last few years. It computes the function f(x) = max(0, x); in other words, the activation is simply thresholded at zero. Unfortunately, ReLU units can be fragile during training and can die: a large update could shift the weights in such a way that the neuron never activates on any datapoint again, so the gradient flowing through the unit will be zero forever from that point on. To overcome this problem, a leaky ReLU has a small negative slope (of 0.01, or so) instead of zero when x < 0:

f(x) = 1(x < 0)(αx) + 1(x >= 0)(x), where α is a small constant.

Exponential Linear Unit (ELU)

The mean of ReLU activations is not zero, which sometimes makes learning difficult for the network. The Exponential Linear Unit (ELU) is similar to the ReLU activation function when the input x is positive, but for negative values it is bounded by the fixed value -α (-1 when α = 1; the hyperparameter α controls the value to which an ELU saturates for negative inputs). This behaviour helps push the mean activation of neurons closer to zero, which helps the network learn representations that are more robust to noise.

Stochastic Gradient Descent (SGD)

Scaling batch gradient descent is cumbersome because it has to process the entire dataset on every step, which costs a lot when the dataset is big. As a rule of thumb, if computing your loss takes n floating-point operations, computing its gradient takes about three times that. In practice we want to be able to train on lots of data, because on real problems we always gain more the more data we use. And because gradient descent is iterative, we have to do this for many steps: to update the parameters in a single step, it has to go through all the data samples, and then this pass over the data is repeated tens or hundreds of times. Instead of computing the loss over all data samples for every step, we can compute the average loss for a very small random fraction of the training data - think between 1 and 1000 training samples each time. This technique is called Stochastic Gradient Descent (SGD) and is at the core of deep learning, because SGD scales well with both data and model size. SGD has a reputation for being black magic, as it has lots of hyperparameters to play with and tune - initialization parameters, learning rate, decay, momentum - and you have to get them right.

Deep learning has emerged over time from the evolution of neural networks within machine learning. It is an intriguing segment of machine learning that uses huge amounts of data to teach computers how to do things that only humans were once capable of. This article highlighted some of the key players who adopted the technology at a very early stage - Facebook, Baidu, Amazon, Microsoft, and Google - and walked through some of the core concepts on which deep learning is built. If deep learning has got you hooked, wait till you learn what GANs are from the book Learning Generative Adversarial Networks.
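Here is a minimal NumPy sketch (not from the book) of the activation functions above, followed by a mini-batch SGD loop on a toy least-squares problem, where each step uses only a small random fraction of the data:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x < 0, alpha * x, x)

def elu(x, alpha=1.0):
    return np.where(x < 0, alpha * (np.exp(x) - 1.0), x)   # saturates at -alpha for very negative x

# Mini-batch SGD on a toy regression problem: w is updated from the average
# gradient over a small random batch, never from the full dataset at once.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + 0.1 * rng.normal(size=10_000)

w, lr, batch_size = np.zeros(5), 0.05, 128
for step in range(500):
    idx = rng.integers(0, len(X), size=batch_size)        # pick a random mini-batch
    Xb, yb = X[idx], y[idx]
    grad = 2.0 * Xb.T @ (Xb @ w - yb) / batch_size        # gradient of the batch MSE
    w -= lr * grad
print("estimated weights:", np.round(w, 2))
```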


6 ways you can give an Artificially Intelligent Makeover to Web Development

Sugandha Lahoti
19 Dec 2017
8 min read
The web is an ever-changing world! Users are always seeking personalized content and richer experiences - websites which can do whatever users want, however they want, and exactly as they want. In other words, end users expect smarter applications with self-learning capabilities and hyper-customized user experiences. This poses a major challenge for developers: how do they design websites that deliver new and personalized content every time? Traditional approaches to web development are not the answer. They can, in fact, be problematic. Why, you ask? Here are some clues. Building basic layouts and designing the website alone takes time - forget customizing for dynamic content. Web app testing is a tedious, time-intensive process prone to errors. Even mundane web development decisions depend on the developer, slowing down the time taken to go live.

Automating the web development process, starting with the more structured, repetitive, and clearly defined tasks, can help developers pay less attention to cumbersome details and focus on the more value-adding aspects of development, such as formulating design strategy and planning the ultimate user experience. Artificial Intelligence can help not just with this kind of intelligent automation, but with a lot more - from assisting with design conceptualization and website implementation to web analytics. This human-machine collaboration has the potential to transform the web as we know it.

How AI can improve web development

Through the lens of a typical software development lifecycle, let's look at some ways in which AI is transforming web development.

1. Automating complex requirement gathering and analysis

Using an AI-powered chatbot or voice assistant, for instance, one can automate the process of collecting client requirements and end-user stories without human intervention. Such a system can also prepare a detailed description of the gathered data and use data extraction tools to generate insights that then drive the web design and development strategy. This is possible through a carefully constructed system that employs NLP, ML, computer vision, and image recognition algorithms and tools, among others. Kore.ai is one such platform, which empowers decision makers with the insights they need to drive business outcomes through data-driven analytics.

2. Simplifying web design with AI virtual assistants

Designing basic layouts and templates for web pages is a tedious job for all developers. AI tools such as virtual assistants, like The Grid's Molly, can help here by simplifying the design process. By prompting questions to the user (in this case the web owner or even the developer) and extracting content from their answers, AI assistants can create personalized content with the exact combination of branding, layout, design, and content required by that user. Take Adobe Sensei, for instance: it can automatically analyze inputs and recommend design elements to the user. These range from automating basic photo-editing tasks, such as cropping using image recognition techniques, to creating elements in images which did not exist before by studying the neighbouring pixels. Developers now need only focus on training a machine to think and perform like a designer.

3. Redefining web programming with self-learning algorithms

AI can also facilitate programming. AI programs can perform basic tasks like updating and adding records to a database, predict which bits of code are most likely to be used to solve a problem, and then use those predictions to prompt developers to adopt a particular solution. An interesting example is Pix2Code, which aims at automating front-end development. In fact, AI algorithms can also be used to create self-modifying code right from scratch - think of it as a fully functional piece of code written without human involvement. Developers can therefore build smarter apps and bots using AI tech at much faster rates than before. They would, however, need to train these machines and feed them good datasets to start with. The smarter the design and the more comprehensive the training, the better the results these systems produce. This is where the developers' skills make a crucial difference.

4. Putting testing and quality assurance on auto-pilot

AI algorithms can help an application test itself, with little or no human input. They can predict the key parameters of software testing processes based on historical data. They can also detect failure patterns and amplify failure prediction at a much higher efficiency than traditional QA approaches. Bug identification and remediation will therefore no longer be a long and slow process. As we speak, Microsoft is readying the release of an AI bug finder for developers, Microsoft Security Risk Detection. In this new AI-powered QA environment, developers can discover more effective ways of testing, identify outliers faster, and work on effective code coverage techniques, all with little or no testing experience. In simple terms, developers can just focus on perfecting the build while the AI handles complex test cases and the resulting bugs automatically.

5. Harnessing web analytics for SEO with AI

SEO strategies rely heavily on number crunching. Many web analytics tools are good; however, their potential is currently limited by the processing capabilities of the humans who interpret that data for their websites. With AI-backed data mining and analytics, one can maximize the usefulness of a website's user-generated data and metadata. Predictive engines built using AI technologies can generate insights that point developers to irregularities in their website architecture or highlight content that is ill-fitted from an SEO point of view. Using such insights, AI can suggest better ways to design websites and develop web content that connects with the target audience. Market Brew is one such artificially intelligent SEO platform, which uses AI to help developers react to and plan the content for their websites in the way search engines might perceive it.

6. Providing a superior end-user experience with chatbots

AI-powered chatbots can take customer support and interaction to the next level. A simple rule-based chatbot responds only to specific preset commands. An AI-infused chatbot, on the other hand, can simulate an actual conversation by learning something new from every conversation and tailoring its answers and actions accordingly. Such chatbots can automate routine tasks and provide relevant information and services. Imagine the possibilities here: these chatbots can enhance visitor engagement by responding to queries and to comments on blog posts, and by providing real-time assistance and personalization. eBay's AI-powered ShopBot is one such chatbot, built on Facebook Messenger, which can help consumers narrow down the best deals from eBay's entire listings and answer customer-centric queries. The sketch below contrasts the two chatbot styles.
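Here is a minimal sketch (hypothetical, and in no way eBay's ShopBot) contrasting a rule-based bot, which only matches preset commands, with a tiny ML intent classifier that can generalize to phrasings it has never seen verbatim; the intents, utterances, and replies are all made up for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Rule-based: only exact preset commands get a useful answer.
RULES = {"track order": "Your order is on its way.",
         "return item": "Here is the returns form."}

def rule_based_reply(message):
    return RULES.get(message.lower().strip(), "Sorry, I didn't understand that.")

# ML-based: a tiny labelled dataset of (utterance, intent) pairs; a real bot needs far more.
train_x = ["where is my package", "track my order please", "status of my delivery",
           "i want to send this back", "how do i return an item", "refund this purchase"]
train_y = ["track_order"] * 3 + ["return_item"] * 3

intent_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_clf.fit(train_x, train_y)

msg = "can you tell me when my parcel arrives"
print(rule_based_reply(msg))            # no exact rule match -> generic fallback reply
print(intent_clf.predict([msg])[0])     # word overlap with training utterances lets the
                                        # classifier map the unseen phrasing to an intent
```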
Skill up, developers!

With the rise of AI, it is clear that developers will play a different role from the traditional one of programmer-cum-designer. Developers will need to adapt their technical skill sets to rise above and complement the web development work that AI is capable of taking over. For example, they will now need to focus more on training AI algorithms to learn web and mobile usage patterns for better recommendations. A data-driven approach is now required, with more focus on curating the data, taking the software through the process of learning by itself, and writing scripts to interact with the software. To do this, web developers need to get up to speed with the basics of machine learning, natural language processing, and deep learning, and apply the related tools and techniques to their web development workflow.

The Future Perfect

Artificial Intelligence has found its way into everything imaginable. Within web development, this translates to automated web design, intelligent application development, highly proficient recommendation engines, and much more. Today, the use of AI in web development is nice-to-have ammunition in a web developer's toolkit. Soon, AI will make itself indispensable to the web development ecosystem, ushering in the intelligent web. Developers will be a hybrid of designers, programmers, and ML engineers who have a good grasp of user experience, are comfortable thinking in abstractions and algorithms, and are equally well versed in translating them into AI-assisted, elegant code. The next milestone for AI in web development is building self-improving apps which can think beyond the confines of human thought. Such apps would have the ability to perceive connections between data points that have not been previously considered, or are out of the reach of human intelligence. The ultimate goal of such machines, on the web and otherwise, would be to gain clairvoyance on aspects humans are missing or are oblivious to. Here's hoping that when such revolutionary AI hits the market, it impacts society for the greater good.


3 Natural Language Processing Models every ML Engineer should know

Amey Varangaonkar
18 Dec 2017
5 min read
[box type="note" align="" class="" width=""]This interesting excerpt is taken from the book Mastering Text Mining with R, co-authored by Ashish Kumar and Avinash Paul. The book gives an advanced view of different natural language processing techniques and their implementation in R. [/box] Our article given below aims to introduce to the concept of language models and their relevance to natural language processing. In terms of natural language processing, language models generate output strings that help to assess the likelihood of a bunch of strings to be a sentence in a specific language. If we discard the sequence of words in all sentences of a text corpus and basically treat it like a bag of words, then the efficiency of different language models can be estimated by how accurately a model restored the order of strings in sentences. Which sentence is more likely: I am learning text mining or I text mining learning am? Which word is more likely to follow “I am…”? Basically, a language model assigns the probability of a sentence being in a correct order. The probability is assigned over the sequence of terms by using conditional probability. Let us define a simple language modeling problem. Assume a bag of words contains words W1, W2,………………….,Wn. A language model can be defined to compute any of the following: Estimate the probability of a sentence S1: P (S1) = P (W1, W2, W3, W4, W5) Estimate the probability of the next word in a sentence or set of strings: P (W3|W2, W1) How to compute the probability? We will use chain law, by decomposing the sentence probability as a product of smaller string probabilities: P(W1W2W3W4) = P(W1)P(W2|W1)P(W3|W1W2)P(W4|W1W2W3) N-gram models N-grams are important for a wide range of applications. They can be used to build simple language models. Let's consider a text T with W tokens. Let SW be a sliding window. If the sliding window consists of one cell (wi wi wi wi) then the collection of strings is called a unigram; if the sliding window consists of two cells, the output Is (w1 , w2)(w3 , w4)(w5 w5 , w5)(w1 , w2)(w3 , w4)(w5 , w5), this is called a bigram .Using conditional probability, we can define the probability of a word having seen the previous word; this is known as bigram probability. So the conditional probability of an element, given the previous element (wi -1), is: Extending the sliding window, we can generalize that n-gram probability as the conditional probability of an element given previous n-1 element: The most common bigrams in any corpus are not very interesting, such as on the, can be, in it, it is. In order to get more meaningful bigrams, we can run the corpus through a part-of-speech (POS) tagger. This would filter the bigrams to more content-related pairs such as infrastructure development, agricultural subsidies, banking rates; this can be one way of filtering less meaningful bigrams. A better way to approach this problem is to take into account collocations; a collocation is the string created when two or more words co-occur in a language more frequently. One way to do this over a corpus is pointwise mutual information (PMI). The concept behind PMI is for two words, A and B, we would like to know how much one word tells us about the other. For example, given an occurrence of A, a, and an occurrence of B, b, how much does their joint probability differ from the expected value of assuming that they are independent. 
This can be expressed as follows: Unigram model: Punigram(W1W2W3W4) = P(W1)P(W2)P(W3)P(W4) Bigram model: Pbu(W1W2W3W4) = P(W1)P(W2|W1)P(W3|W2)P(W4|W3) P(w1w2…wn ) = P(wi | w1w2…wi"1) Applying the chain rule on n contexts can be difficult to estimate; Markov assumption is applied to handle such situations. Markov Assumption If predicting that a current string is independent of some word string in the past, we can drop that string to simplify the probability. Say the history consists of three words, Wi, Wi-1, Wi-2, instead of estimating the probability P(Wi+1) using P(Wi,i- 1,i-2) , we can directly apply P(Wi+1 | Wi, Wi-1). Hidden Markov Models Markov chains are used to study systems that are subject to random influences. Markov chains model systems that move from one state to another in steps governed by probabilities. The same set of outcomes in a sequence of trials is called states. Knowing the probabilities of states is called state distribution. The state distribution in which the system starts is the initial state distribution. The probability of going from one state to another is called transition probability. A Markov chain consists of a collection of states along with transition probabilities. The study of Markov chains is useful to understand the long-term behavior of a system. Each arc associates to certain probability value and all arcs coming out of each node must have exhibit a probability distribution. In simple terms, there is a probability associated to every transition in states: Hidden Markov models are non-deterministic Markov chains. They are an extension of Markov models in which output symbol is not the same as state. Language models are widely used in machine translation, spelling correction, speech recognition, text summarization, questionnaires, and many more real-world use-cases. If you would like to learn more about how to implement the above language models. check out our book Mastering Text Mining with R. This book will help you leverage the language models using popular packages in R for effective text mining.  
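Since the book's implementations are in R, here is only a minimal Python sketch (not from the book) of the bigram model described above: it estimates P(wi | wi-1) from counts and scores the two example sentences with the chain rule.

```python
from collections import Counter

corpus = "i am learning text mining . i am learning r . text mining is fun .".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(w_prev, w):
    # P(w | w_prev) = count(w_prev, w) / count(w_prev)
    return bigrams[(w_prev, w)] / unigrams[w_prev]

def sentence_prob(sentence):
    words = sentence.split()
    p = unigrams[words[0]] / sum(unigrams.values())   # P(w1)
    for w_prev, w in zip(words, words[1:]):
        p *= bigram_prob(w_prev, w)                   # P(wi | wi-1)
    return p

print(sentence_prob("i am learning text mining"))   # plausible order -> higher probability
print(sentence_prob("i text mining learning am"))   # scrambled order -> lower (zero on this toy corpus)
```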


What can UX designers teach Machine Learning Engineers? To start with: Model Interpretability

Sugandha Lahoti
18 Dec 2017
7 min read
Machine Learning is driving many major innovations happening around the world. But while complex algorithms drive some of the most exciting inventions, it is important to remember that these algorithms are always designed. This is why incorporating UX into machine learning engineering could offer a way to build even better machine learning systems that put users first.

Why we need UX design in machine learning

Machine learning systems can be complex. They require pre-trained data and depend on a variety of variables to allow the algorithm to make 'decisions'. This means transparency can be difficult - and when things go wrong, it is not easy to fix them. Consider the ways that machine learning systems can be biased against certain people: that comes down to problems in the training set and, subsequently, how the algorithm is learning. If machine learning engineers took a more user-centric approach to building machine learning systems - borrowing some core principles from UX design - they could begin to solve these problems and minimize the risk of algorithmic bias. After all, every machine learning model has an end user. Whether it is for recommending products, financial forecasting, or driving a car, the model is always serving a purpose for someone.

How UX designers can support machine learning engineers

By and large, machine learning today is mostly focused on the availability of data and on improving model performance by increasing learning capability; in this, a user-centric approach may be compromised. A tight interplay between UX design practices and machine learning is therefore essential to make ML discernible to all and to achieve model interpretability. UX designers can contribute to a series of tasks that improve algorithmic clarity. Most designers create a wireframe, a rough guide for the layout of a website or an app. The principles behind wireframing can be useful for machine learning engineers as they prototype their algorithms: it provides a space to consider what is important from a user perspective. User testing is also useful in the context of machine learning. Just as UX designers perform user testing for applications, going through a similar process for machine learning systems makes sense. This is most visible in the way companies test driverless cars, but anywhere machine learning systems require human interaction should go through some period of user testing.

A UX design approach can also help in building ML algorithms for different contexts and different audiences. Take the case of an emergency room in a hospital. The data required for building a decision support system for emergency patient cases is often quite sparse. Machine learning can help by mining relevant datasets and dividing them into subgroups of patients, while UX design can shape how that part of the decision support system is presented. UX professionals bring human-centered design to ML components, which means they also consider the user's perspective while integrating those components. Machine learning models generally tend to take control entirely away from the user - in a driverless vehicle, for instance, the car determines the route, speed, and other decisions. Designers include user controls so that people do not lose their voice in the automated system. Machine learning developers may also unintentionally introduce implicit biases into their systems, which can have serious or negative side effects. A recent example of this was Microsoft's Tay, a Twitter bot that started tweeting racist comments after spending just a few hours on Twitter. UX designers plan for these biases at the project level as well as at a larger level, advocating for a broad range of voices. They also keep an eye on the social impact of ML systems by keeping a check on the input (as was the case with Microsoft's Tay), to ensure that uncontrolled input does not lead to unintended output.

What are the benefits of bringing UX design into machine learning?

All machine learning systems and practitioners can benefit from incorporating UX design practice as a standard. Some benefits of this collaboration are:

Results generated from UX-enabled ML algorithms are transparent and easy to understand.
It helps end users understand how the product works and visualize the results better.
A better understanding of algorithm results builds the user's trust in the system - important when the consequences of incorrect results are detrimental to the user.
It helps data scientists better analyse the results of an algorithm and subsequently make better predictions.
It aids understanding of the different stages of model building: from design, to development, to final deployment.

UX designers focus on building transparent ML systems by defining the problem through a storyboard rather than through the constraints placed by data and other aspects. They become aware of and catch biases, helping ensure an unbiased machine learning system. All of this ultimately results in better product development and an improved user experience.

How do companies leverage UX design with ML?

Top companies are looking at combining the benefits of UX design with machine learning to build systems which balance the back-end work (performance and usability) with the front end (user-friendly outputs). Take Facebook, for example. Their News Feed ranking algorithm, an amalgamation of ML and UX design, works towards two goals. The first is showing the right content at the right time, which involves machine learning capabilities. The other is enhancing user interaction by displaying posts more prominently so as to create more customer engagement and increase dwell time. Google's UX community has combined UX design with machine learning in an initiative known as human-centered machine learning (HCML). In this project, UX designers work in sync with ML developers to help them create machine learning products catering to human understanding; ML developers are in turn taught how to integrate UX into ML algorithms for a better user experience. Airbnb created an algorithm to dynamically set prices for their customers' units. However, on interacting with their customers, they found that users were hesitant to give full control to the system. Hence the UX design team altered the design to add minimum and maximum allowed rents, and created a setting that let customers set the general frequency of rentals - approaching the machine learning project with user experience in mind. Salesforce has a Lightning Design System, which includes a centralized design systems team of researchers, accessibility specialists, lead product designers, prototypers, and UX engineers; they work towards documenting visual systems and abstracting design patterns to assist ML developers. Netflix has also plunged into this venture by offering their customers personalized recommendations as well as personalized visuals: the artwork representing their titles is adjusted to capture the attention of a particular user. This acts as a gateway into that title and gives users a visual cue as to why a TV show or movie is right for them, helping Netflix achieve both user engagement and user retention.

The road ahead

In the future, we will see most organizations having a blend of UX designers and data scientists in their teams to create user-friendly products. UX designers will work closely with developers to find unique ways of incorporating design ethics and abilities into machine learning findings and predictions. This will lead to new and better job opportunities for both designers and developers, with further expansion of their skill sets. In fact, it could give rise to a hybrid discipline, where algorithmic implementations are consolidated with design to make ML frameworks simpler for clients.


Handpicked for your Weekend Reading - 15th Dec, 2017

Aarthi Kumaraswamy
16 Dec 2017
2 min read
As you gear up for the holiday season and the year-end celebrations, make a resolution to spend a fraction of your weekends in self-reflection and in honing your skills for the coming year. Here is the best of the DataHub for your reading this weekend. Watch out for our year-end special edition in the last week of 2017!

NIPS Special Coverage
A deep dive into Deep Bayesian and Bayesian Deep Learning with Yee Whye Teh
How machine learning for genomics is bridging the gap between research and clinical trial success by Brendan Frey
6 Key Challenges in Deep Learning for Robotics by Pieter Abbeel
For the complete coverage, visit here.

Experts in Focus
Ganapati Hegde and Kaushik Solanki, Qlik experts from Predoole Analytics, on how Qlik Sense is driving self-service Business Intelligence

3 things you should know that happened this week
Generative Adversarial Networks: Google open sources TensorFlow-GAN (TFGAN)
"The future is quantum" - Are you excited to write your first quantum computing code using Microsoft's Q#?
"The Blockchain to Fix All Blockchains" - Overledger, the meta blockchain, will connect all existing blockchains

Try learning/exploring these tutorials this weekend
Implementing a simple Generative Adversarial Network (GAN)
How Google's MapReduce works and why it matters for Big Data projects
How to write effective Stored Procedures in PostgreSQL
How to build a cold-start friendly content-based recommender using Apache Spark SQL

Do you agree with these insights/opinions?
Deep Learning is all set to revolutionize the music industry
5 reasons to learn Generative Adversarial Networks (GANs) in 2018
CapsNet: Are Capsule networks the antidote for CNNs kryptonite?
How AI is transforming the Manufacturing Industry

How AI is transforming the Manufacturing Industry

Kunal Parikh
15 Dec 2017
7 min read
After more than five decades of incremental innovation, the manufacturing sector is finally ready to pivot, thanks to Industry 4.0. Self-aware technology plays a key role in ushering in Industry 4.0, and AI in the manufacturing sector aims to achieve just that, i.e., the creation of systems that can perceive their environment and take action accordingly. One of the prominent minds in AI, Andrew Ng, believes factories to be AI's next frontier. Andrew is on a mission to AI-ify manufacturing with his new start-up, Landing.AI. For this initiative he has partnered with Foxconn, the world's largest contract manufacturer and maker of Apple's iPhones. Together they aim to develop a wide range of AI transformation programs, from the introduction of new technologies and operational processes to automated quality control and much more. In this AI-powered industrial revolution, machines are becoming smarter and interconnected. Manufacturers are using the embedded intelligence of machines to collect and analyze data and generate meaningful insights, which are then used to run equipment efficiently and optimize the workflows of operations and supply chains, among other things. AI is thus leaving an indelible impact across the manufacturing cycle. Further, a new wave of automation is transforming the role of the human workforce, with AI-driven robots powering production 24 hours a day. This is helping industrial environments gear up for the shift towards the smart factory. Below are some ways AI is revolutionizing manufacturing.

Predictive analytics for increased production output

Smart manufacturing systems leverage the power of predictive analysis and machine learning algorithms to enhance production capacity. Predictive analytics derives its power from data collected by the devices or sensors embedded in a manufacturer's industrial equipment. These sensors become part of the Internet of Things (IoT), which collects and shares data with data scientists in the cloud. This setup is helping manufacturing industries move from a repair-and-replace to a predict-and-fix maintenance model, by enabling these businesses to retrieve the right information at the right time to make the right decisions. For instance, in a pump manufacturing company, data scientists could collect, store, and analyze sensor data on machine attributes like heat, vibration, and noise. This data can be stored in the cloud, allowing an array of analyses to be performed, from understanding machine performance to predicting and monitoring disruption in processes and equipment remotely (a small illustrative sketch is given at the end of this article). Further, syncing production schedules with parts availability can ensure enhanced production output.

Enhancing product and service quality with machine and deep learning algorithms

Manufacturers can deploy supervised and unsupervised ML, DL, and reinforcement learning algorithms to monitor quality issues in the manufacturing process. For instance, researchers at Lappeenranta University of Technology in Finland have developed an innovative welding system for high-strength steel. They used unsupervised learning to allow the system to mimic a human's ability to self-explore and self-correct. This welding system detects imperfections and self-corrects during the welding process, using a new kind of sensor system controlled by a neural network program; it also calculates other faults that may arise during the entire process. Visual inspection technology in an industrial environment identifies both functional and cosmetic defects. IBM has developed a new offering for manufacturing clients to automate visual quality inspections. Rooted in deep learning, a centralized 'learning service' collects images of all products, normal and abnormal, and then builds analytical models to recognize and classify different characteristics of machine parts and components as OK or NG: characteristics that meet quality specifications are considered OK, while those that don't are classified as NG.

Predictive maintenance for enhanced MRO (Maintenance, Repair and Overhaul) performance

Manufacturing industries strive for excellence throughout the production process. To ensure this, machinery embedded with sensors generates real-time performance and workload data, which helps in diagnosing faults and in predicting the need for equipment maintenance. A machine may otherwise break down for lack of maintenance in the long run, incurring losses for the business. With predictive maintenance, businesses can be better equipped to handle equipment malfunction by identifying significant causal factors like weather and temperature. Targeted predictive maintenance generates critical information such as which machine parts will need replacing and when. This helps in reducing equipment downtime, lowering maintenance costs, and pre-emptively addressing aging equipment.

Reinforcement Learning for managing warehouses

Large warehouses face challenges in streamlining space, managing inventories, and reducing transit time. Manufacturing industries are employing reinforcement learning (RL) for efficient warehouse management. An RL approach uses trial-and-error iterations within an environment to achieve a particular goal. Imagine what a breeze warehousing could be, and the associated cost savings, if robots could pick the right products from various lots and move them to the right destinations with great precision. Reinforcement-learning-based algorithms can improve the efficiency of such intelligent warehouses with multi-robot systems by addressing task scheduling and path planning issues. Fanuc, a Tokyo-based company, employs robots with reinforcement learning ability to perform such tasks with great agility and precision.

AI in supply chain management

AI is helping manufacturers gain an in-depth understanding of the complex variables at play in the supply chain and predict future scenarios. To enable seamless insight generation, businesses are opting for more flexible and efficient cyber-physical systems. These intelligent systems are self-configuring and self-optimizing structures that can predict problems and minimize losses, helping businesses innovate rapidly by reducing time to market, foreseeing uncertainties, and dealing with them promptly. Siemens, for example, is creating a self-organizing factory that aims to automate the entire supply chain by generating work orders from demand and order information.

Implications of AI for Industry 4.0

Industry 4.0 is the new way of manufacturing using automation, devices connected on the IoT, cloud, and cognitive computing. It propagates the concept of the "smart factory", in which cyber-physical systems observe the physical processes of the factory and make discrete decisions accordingly. As AI finds its application in Industry 4.0, computers will merge with robotics to automate and maximize the efficiency of industrial processes. Powered by machine learning algorithms, computer systems could control robots with minimal human intervention. For instance, in a manufacturing setup, AI can work alongside systems like SCADA to control industrial processes efficiently. These systems can monitor, collect, and process real-time data by directly interacting with devices such as sensors, pumps, and motors through human-machine interface (HMI) software. Such machine-to-machine communication systems give new direction to the potential of human-machine collaboration, changing the way we see workforce management. Industry 4.0 will favor those who can build software, hardware, and firmware, those who can adapt and maintain new equipment, and those who can design automation and robotics. Within Industry 4.0, augmented reality and virtual reality are other cutting-edge, production-ready technologies making the idea of a smart factory a reality. The recent relaunch of Google Glass, designed especially for the factory floor, is worth a mention here. The Wi-Fi-enabled glasses allow factory workers, mechanics, and other technicians to view instructional videos, manuals, training videos, and more, all in their line of sight. This helps maintain higher standards of work while ensuring safety and agility.

In Conclusion

Manufacturing industries are gearing up to harness AI along with IoT and AR/VR to create an agile manufacturing environment and make smarter, real-time decisions. AI is helping realize the full potential of the Industrial Internet of Things (IIoT) by applying machine learning, deep learning, and other evolutionary algorithms to sensor data. Human-machine collaboration is transforming the scene at fulfillment centers, creating a win-win situation for both humans and robots: robots with motion sensors move across fields of QR codes with precision and agility without running into each other, creating a fascinating view. Imagine a real-life JARVIS from the movie Iron Man managing entire supply chains or factory spaces. The day is not far away when we will see a JARVIS-like advanced virtual assistant that uses sensors to collect real-time data, AI to process it, blockchain to transmit the information securely, and AR to interact with us visually. It could take care of system and mechanical failures remotely while taking control of the factory for efficient energy management. Manufacturers could go save the world or unveil new products, Iron Man style!
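As promised above, here is a minimal sketch of the predict-and-fix idea (hypothetical, synthetic sensor data, not any vendor's system): unusual heat, vibration, and noise readings are flagged with an Isolation Forest so maintenance can be scheduled before a breakdown.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated healthy-machine readings: columns are heat (°C), vibration (g), noise (dB).
normal = rng.normal(loc=[60.0, 0.2, 70.0], scale=[3.0, 0.05, 4.0], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

latest = np.array([[61.0, 0.21, 72.0],    # typical reading
                   [85.0, 0.90, 95.0]])   # overheating with heavy vibration
print(model.predict(latest))              # 1 = looks normal, -1 = flag for maintenance
```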


Why training a Generative Adversarial Network (GAN) Model is no piece of cake

Shoaib Dabir
14 Dec 2017
5 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book by Kuntal Ganguly titled Learning Generative Adversarial Networks. The book gives a complete coverage of Generative adversarial networks. [/box] The article highlights some of the common challenges that a developer might face while using GAN models. Common challenges faced while working with GAN models Training a GAN is basically about two networks, generator G(z) and discriminator D(z) trying to race against each other and trying to reach an optimum, more specifically a nash equilibrium. The definition of nash equilibrium as per Wikipedia: (in economics and game theory) a stable state of a system involving the interaction of different participants, in which no participant can gain by a unilateral change of strategy if the strategies of the others remain unchanged. 1. Setting up failure and bad initialization If you think about it, this is exactly what a GAN is trying to do; the generator and discriminator reach a state where they cannot improve further given the other is kept unchanged. Now the setup of gradient descent is to take a step in a direction that reduces the loss measure defined on the problem—but we are by no means enforcing the networks to reach Nash equilibrium in GAN, which have non-convex objective with continuous high dimensional parameters. The networks try to take successive steps to minimize a non-convex objective and end up in an oscillating process rather than decreasing the underlying true objective. In most cases, when your discriminator attains a loss very close to zero, then right away you can figure out something is wrong with your model. But the biggest pain-point is figuring out what is wrong. Another practical thing done during the training of GAN is to purposefully make one of the networks stall or learn slower, so that the other network can catch up. And in most scenarios, it's the generator that lags behind so we usually let the discriminator wait. This might be fine to some extent, but remember that for the generator to get better, it requires a good discriminator and vice versa. Ideally the system would want both the networks to learn at a rate where both get better over time. The ideal minimum loss for the discriminator is close to 0.5— this is where the generated images are indistinguishable from the real images from the perspective of the discriminator. 2. Mode collapse One of the main failure modes with training a generative adversarial network is called mode collapse or sometimes the helvetica scenario. The basic idea is that the generator can accidentally start to produce several copies of exactly the same image, so the reason is related to the game theory setup we can think of the way that we train generative adversarial networks as first maximizing with respect to the discriminator and then minimizing with respect to the generator. If we fully maximize with respect to the discriminator before we start to minimize with respect to the generator everything works out just fine. But if we go the other way around and we minimize with respect to the generator and then maximize with respect to the discriminator, everything will actually break and the reason is that if we hold the discriminator constant it will describe a single region in space as being the point that is most likely to be real rather than fake and then the generator will choose to map all noise input values to that same most likely to be real point. 3. 
Problem with counting GANs can sometimes be far-sighted and fail to differentiate the number of particular objects that should occur at a location. As we can see, it gives more numbers of eyes in the head than originally present: 4. Problems with perspective GANs sometime are not capable of differentiating between front and back view and hence fail to adapt well with 3D objects while generating 2D representation from it as follows: 5. Problems with global structures GANs do not understand a holistic structure similar to problems with perspective. For example, in the bottom left image, it generates an image of a quadruple cow, that is, a cow standing on its hind legs and simultaneously on all four legs. That is definitely unrealistic and not possible in real life! It is very important when its comes to train GAN models towards the execution and there would be some common challenges that can come ahead. The major challenge that arises is the failure of the setup and also the one that is mainly faced in training GAN model is mode collapse or sometimes the helvetica scenario. It highlights some of the common problems like with counting, perspective or be global structure. The above listings are some of the major issues faced while training a GAN model. To read more on solutions with real world examples, you will need to check out this book Learning Generative Adversarial Networks.  
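As referenced above, here is a minimal PyTorch sketch (not from the book) of the alternating training loop on a toy 1-D dataset. It prints the two signals the text suggests watching: the discriminator loss collapsing towards zero, and the discriminator's average output on generated samples, which hovers near 0.5 when fakes become hard to distinguish from real data.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                        # generator G(z)
D = nn.Sequential(nn.Linear(1, 16), nn.LeakyReLU(0.2), nn.Linear(16, 1), nn.Sigmoid())  # discriminator D(x)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_label, fake_label = torch.ones(64, 1), torch.zeros(64, 1)
for step in range(2001):
    real = torch.randn(64, 1) * 0.5 + 3.0      # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # 1) Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), real_label) + bce(D(fake.detach()), fake_label)
    d_loss.backward()
    opt_d.step()

    # 2) Generator step: try to make the discriminator label fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), real_label)
    g_loss.backward()
    opt_g.step()

    if step % 500 == 0:
        with torch.no_grad():
            d_on_fake = D(G(torch.randn(256, 8))).mean().item()
        # d_loss near 0 usually means D has overpowered G; mean D(fake) near 0.5
        # suggests generated samples have become hard to tell apart from real ones.
        print(f"step {step}: d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}  mean D(fake)={d_on_fake:.2f}")
```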


Glancing at the Fintech growth story - Powered by ML, AI & APIs

Kartikey Pandey
14 Dec 2017
4 min read
When MyBucks, a Luxembourg based Fintech firm, started scaling up their business in other countries. They faced a daunting challenge of reducing the timeline for processing credit requests from over a week’s time to just under few minutes. Any financial institution dealing with lending could very well relate to the nature of challenges associated with giving credit - checking credit history, tracking past fraudulent activities, and so on. This automatically makes the lending process tedious and time consuming. To add to this, MyBucks also aimed to make their entire lending process extremely simple and attractive to customers. MyBucks’ promise to its customers: No more visiting branches and seeking approvals. Simply login from your mobile phone and apply for a loan - we will handle the rest in a matter of minutes. Machine Learning has triggered a whole new segment in the Fintech industry- Automated Lending Platforms. MyBucks is one such player. Some other players in this field are OnDeck, Kabbage, and Lend up. What might appear transformational with Machine Learning in MyBucks’ case is just one of the many examples of how Machine Learning is empowering a large number of finance based companies to deliver disruptive products and services. So what makes Machine Learning so attractive to Fintech and how has Machine Learning fuel this entire industry’s phenomenal growth? Read on. Quicker and efficient credit approvals Long before Machine Learning was established in large industries unlike today, it was quite commonly used to solve fraud detection problems. This primarily involved building a self-learning model that used a training dataset to begin with and further expanding its learning based on incoming data. This way the system could distinguish a fraudulent activity from a non-fraudulent one. Modern day Machine Learning systems are no different. They use the very same predictive models that rely on segmentation algorithms and methods. Fintech companies are investing in big data analytics and machine learning algorithms to make credit approvals quicker and efficient. These systems are designed in such a way that they pull data from several sources online, develop a good understanding of transactional behaviours, purchasing patterns, and social media behavior and accordingly decide creditworthiness. Robust fraud prevention and error detection methods Machine Learning is empowering banking institutions and finance service providers to embrace artificial intelligence and combat what they fear the most-- fraudulent activities. Faster and accurate processing of transactions has always been the fundamental requirement in the finance industry. An increasing number of startups are now developing Machine Learning and Artificial Intelligence systems to combat the challenges around fraudulent transactions or even instances of incorrectly reported transactions. Billguard is one such company that uses big data analytics and makes sense of millions of consumers who report billing complaints. The AI system then builds its intelligence by using this crowd-sourced data and reports incorrect charges back to consumers thereby helping get their money back. Reinventing banking solutions with the powerful combination of APIs and Machine Learning Innovation is key to survival in the finance industry. The 2017 PwC global fintech report suggests that the incumbent finance players are worried about the advances in the Fintech industry that poses direct competition to banks. 
Robust fraud prevention and error detection methods

Machine Learning is empowering banking institutions and financial service providers to embrace artificial intelligence and combat what they fear the most: fraudulent activities. Fast and accurate processing of transactions has always been a fundamental requirement in the finance industry. An increasing number of startups are now developing Machine Learning and Artificial Intelligence systems to tackle fraudulent, or even incorrectly reported, transactions. BillGuard is one such company: it applies big data analytics to billing complaints reported by millions of consumers, builds its intelligence from this crowd-sourced data, and reports incorrect charges back to consumers, thereby helping them get their money back.

Reinventing banking solutions with the powerful combination of APIs and Machine Learning

Innovation is key to survival in the finance industry. The 2017 PwC global Fintech report suggests that incumbent finance players are worried about advances in the Fintech industry that pose direct competition to banks. But the way ahead for banks definitely goes through Fintech, which is evolving every day. In addition to Machine Learning, the API is the other strong pillar driving innovation in Fintech. Developments in Machine Learning and AI are reinventing the traditional lending industry, and APIs are acting as the bridge between classic banking problems and future possibilities. Established banks are now taking the API (Application Programming Interface) route to tie up with innovative Fintech players in their endeavour to deliver modern solutions to customers. Fintech players, in turn, are able to reap the benefits of working with the old guard - the banks - in a world where APIs have suddenly become the new common language. So what is this new equation all about? API solutions are helping bridge the gap between the old and the new by enabling collaboration in newer ways to solve traditional banking problems. This impact can be seen far and wide, and Fintech as an industry isn't limited to lending tech and everyday banking alone. Several verticals within the industry now feel the impact of Machine Learning: payments, wealth management, capital markets, insurance, blockchain, and even chatbots for customer service, to name a few. So where do you think this partnership is headed? Please leave your comments below and let us know.

CapsNet: Are Capsule networks the antidote for CNNs kryptonite?

Savia Lobo
13 Dec 2017
5 min read
Convolutional Neural Networks (CNNs) are a family of neural networks that have proved their worth in areas such as image recognition and classification. They are among the most popular neural network models and are present in nearly all image recognition tasks that provide state-of-the-art results. However, CNNs have drawbacks, which are discussed later in this article. In order to address the issue with CNNs, Geoffrey Hinton, popularly known as the Godfather of Deep Learning, recently published a research paper along with two other researchers, Sara Sabour and Nicholas Frosst. In this paper, they introduced CapsNet, or Capsule Network - a neural network based on a multi-layer capsule system. Let's explore the issue with CNNs and how CapsNet improves on them.

What is the issue with CNNs?

Convolutional Neural Networks are known to handle image classification tasks seamlessly. They are experts at learning at a granular level: the lower layers detect the edges and shape of an object, and the higher layers detect the image as a whole. However, CNNs perform poorly when an image has a slightly different orientation (a rotation or a tilt), because they compare every image with the ones learned during training. For instance, when detecting a face, a CNN checks for facial features such as a nose, two eyes, a mouth, and eyebrows, irrespective of their placement. This means a CNN may identify an incorrect face in cases where the placement of an eye and the nose is not as conventionally expected, for example in a profile view. In other words, the orientation of, and the spatial relationships between, the objects within an image are not considered by a CNN.

To make CNNs understand orientation and spatial relationships, they were trained profusely with images taken from all possible angles. Unfortunately, this resulted in an excessive amount of time required to train the model, and the performance of the CNNs did not improve much. Pooling was also introduced at each layer within the CNN model for two reasons: first, to reduce the time invested in training, and second, to bring positional invariance to CNNs. The result was false positives: the network detected an object within an image without checking its orientation and incorrectly declared it a correct match. Positional invariance thus made CNNs susceptible to minute changes in viewpoint. Instead of invariance, what CNNs require is equivariance - a property that lets the network adapt to a change in rotation or proportion within an image. This equivariance is now possible via the Capsule Network.

The Solution: Capsule Network

CapsNet, or Capsule Network, is an encapsulation of nested neural network layers. A traditional neural network contains multiple layers, whereas a capsule network contains multiple layers within a single capsule. CNNs go deeper in terms of height, whereas a capsule network deepens in terms of nesting, or internal structure. Such a model is highly robust to geometric distortions and transformations, which are a result of non-ideal camera angles, and is therefore able to handle orientations, rotations, and so on exceptionally well.

CapsNet architecture: see the original paper, https://arxiv.org/pdf/1710.09829.pdf

Key Features

Layer-based Squashing

In a typical Convolutional Neural Network, a squashing (non-linear activation) function is applied at each layer of the model. A squashing function compresses its input towards one end of a small interval, introducing the non-linearity that enables the network to be effective. In a Capsule Network, by contrast, the squashing function is applied to the vector output of each capsule: instead of applying the non-linearity to each neuron, it is applied to a group of neurons, i.e. the capsule. The squashing function proposed by Hinton in the research paper (https://arxiv.org/pdf/1710.09829.pdf) squashes the output vector towards zero if it is a short vector, and limits its length to just below 1 if the vector is long, while preserving its direction.
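As a concrete illustration, here is a minimal NumPy sketch of that squashing non-linearity, v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||), applied to the last axis of an array of capsule vectors; the small epsilon is added only for numerical stability and is not part of the paper's formula.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash capsule vectors: short vectors shrink towards zero,
    long vectors approach (but never exceed) unit length."""
    squared_norm = np.sum(np.square(s), axis=axis, keepdims=True)
    norm = np.sqrt(squared_norm + eps)           # avoid division by zero
    scale = squared_norm / (1.0 + squared_norm)  # ||s||^2 / (1 + ||s||^2)
    return scale * s / norm                      # keep direction, limit length

# Example: a batch of 2 capsules, each an 8-dimensional vector.
capsules = np.random.randn(2, 8)
print(np.linalg.norm(squash(capsules), axis=-1))  # lengths lie strictly between 0 and 1
```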
Dynamic Routing

The dynamic routing algorithm in CapsNet replaces the scalar-output feature detectors of a CNN with vector-output capsules, and the max pooling of CNNs, which led to positional invariance, is replaced with 'routing by agreement'. The algorithm ensures that when data is forward-propagated, it goes to the most relevant capsule in the layer above. Although dynamic routing adds extra computational cost to the capsule network, it has proved advantageous by making the network more scalable and adaptable.

Training the Capsule Network

The capsule network is trained on MNIST, a dataset of more than 60,000 handwritten digit images that is commonly used to test machine learning algorithms. The capsule model is trained for 50 epochs with a batch size of 128, where each epoch is a complete run through the training dataset. A TensorFlow implementation of CapsNet based on Hinton's research paper is available in a GitHub repository, and CapsNet can likewise be implemented using other deep learning frameworks such as Keras, PyTorch, and MXNet.

CapsNet is a recent breakthrough in the field of deep learning that promises to benefit organizations with accurate image recognition tasks. Implementations of CapsNet are slowly catching up and are expected to reach parity with CNNs. That said, capsule networks have so far been trained on a very simple dataset (MNIST) and still need to prove themselves on various other datasets. As time advances and CapsNet is trained in different domains, it will be exciting to see how it shapes up as a faster and more efficient training technique for deep learning models.

5 reasons to learn Generative Adversarial Networks (GANs) in 2018

Savia Lobo
12 Dec 2017
5 min read
Generative Adversarial Networks (GANs) are a prominent branch of Machine Learning research today. Deep neural networks require a lot of data to train on and perform poorly when the data provided is not sufficient. GANs can overcome this problem by generating new, realistic data, without resorting to tricks like data augmentation. As the application of GANs in the Machine Learning industry is still at an infancy level, it is considered a highly desirable niche skill. Added hands-on experience raises the bar in the job market: it can fetch you higher pay than your colleagues and can be the feature that sets your resume apart.

Source: Gartner's Hype Cycle 2017

GANs, along with CNNs and RNNs, are part of the deep neural network experience in demand in the industry. Here are five reasons why you should learn GANs today and how Kuntal Ganguly's book, Learning Generative Adversarial Networks, helps you do just that. Kuntal is a big data analytics engineer at Amazon Web Services. He has around 7 years of experience building large-scale, data-driven systems using big data frameworks and machine learning, and has designed, developed, and deployed several large-scale distributed applications. Kuntal is a seasoned author with books spanning the data science spectrum, from machine learning and deep learning to Generative Adversarial Networks. The book shows how to implement GANs in your machine learning models in a quick and easy format, with plenty of real-world examples and hands-on tutorials.

1. Unsupervised Learning now a cakewalk with GANs

A major challenge of unsupervised learning is the massive amount of unlabelled data one needs to work through as part of data preparation. In traditional neural networks, this labeling of data is both costly and time-consuming. A creative aspect of deep learning is now possible using Generative Adversarial Networks: the neural networks are capable of generating realistic images from real-world datasets (such as MNIST and CIFAR). GANs provide an easy way to train deep learning algorithms, slashing the amount of data required to train the neural network models with no labeling of data required. The book uses a semi-supervised approach to solve the problem of unsupervised learning for classifying images, and this could easily be leveraged in a developer's own problem domain.
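For readers who want a feel for the mechanics before diving into the book, here is a minimal, illustrative TensorFlow/Keras sketch of the adversarial training loop on 28x28 grayscale images; the tiny dense generator and discriminator are stand-ins, not the architectures used in the book.

```python
import tensorflow as tf

latent_dim = 64  # size of the random noise vector fed to the generator

# Toy generator: noise -> fake 28x28 image; toy discriminator: image -> real/fake logit.
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(latent_dim,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(28 * 28, activation="tanh"),
    tf.keras.layers.Reshape((28, 28, 1)),
])
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1),
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal([tf.shape(real_images)[0], latent_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # Discriminator learns to label real as 1 and fake as 0;
        # the generator learns to make the discriminator call fakes real.
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return g_loss, d_loss

# One step on a dummy batch, standing in for a batch of real training images.
print(train_step(tf.random.normal([8, 28, 28, 1])))
```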
2. GANs help you change a horse into a zebra using image style transfer

https://www.youtube.com/watch?v=9reHvktowLY

Turning an apple into an orange is magic, and GANs can do this magic without casting a spell. In image-to-image style transfer, the styling of one image is applied to another. GANs can perform image-to-image translations across various domains (such as changing an apple to an orange or a horse to a zebra) using Cycle-Consistent Generative Adversarial Networks (CycleGANs). Detailed examples of how to turn an image of an apple into an orange using TensorFlow, and how to turn an image of a horse into a zebra using a GAN model, are given in the book.

3. GANs take your text as input and output an image

Generative Adversarial Networks can also be utilized for text-to-image synthesis, for example generating a photo-realistic image based on a caption. To do this, a dataset of images with their associated captions is given as training data. The dataset is first encoded using a hybrid neural network called a character-level convolutional recurrent neural network, which creates a joint representation of both image and text in a multimodal space for both the generator and the discriminator. In the book, Kuntal showcases the technique of stacking multiple generative networks to generate realistic images from textual information using StackGANs. Further, the book goes on to explain the coupling of two generative networks to automatically discover relationships among various domains (such as the relationship between shoes and handbags, or actors and actresses) using DiscoGANs.

4. GANs + Transfer Learning = No more model generation from scratch

Source: Learning Generative Adversarial Networks

Data is the basis for training any Machine Learning model, and its scarcity can lead to a poorly trained model with a high chance of failure. Some real-life scenarios may not have sufficient data, hardware, or resources to train bigger networks to the desired accuracy. So, is training from scratch a must? A well-known deep learning technique that adapts an existing trained model for a similar task is Transfer Learning. The book showcases transfer learning with hands-on examples, and further combines transfer learning with GANs to generate high-resolution realistic images from facial datasets. You will also understand how to create artistic hallucinations on images beyond GANs.

5. GANs help you take Machine Learning models to production

Most machine learning tutorials, video courses, and books explain the training and evaluation of models. But how do we take a trained model to production, put it to use, and make it available to customers? In the book, the author develops a facial correction system using the LFW dataset to automatically correct corrupted images with a trained GAN model. The book also covers several techniques for deploying machine learning or deep learning models in production, both in data centers and on clouds, with microservice-based containerized environments. You will also learn how to run deep models in a serverless environment and with managed cloud services.

This article just scratches the surface of what is possible with GANs and why learning them will change how you think about deep neural networks. To know more, grab your copy of Kuntal Ganguly's book on GANs: Learning Generative Adversarial Networks.

Deep Learning is all set to revolutionize the music industry

Sugandha Lahoti
11 Dec 2017
6 min read
Isn't it spooky how Facebook can identify the faces of your friends before you manually tag them? Have you been startled when Cortana, Siri, or Google Assistant instantly recognizes and acts on your voice as you speak to these virtual assistants? Deep learning is the driving force behind these uncanny yet innovative applications, and the music industry is all set to dive into deep learning next. Neural networks not only ease the production and generation of songs, but also assist in music recommendation, transcription, and classification. Here are some ways that deep learning will elevate music and the listening experience itself.

Generating melodies with neural nets

At the most basic level, a deep learning algorithm follows three simple steps for music generation. First, the neural net is trained on a sample dataset of labelled songs, where the labelling is based on the emotions you want your song to convey (happy, sad, funny, and so on). For training, the program converts the speech in the dataset into text and creates a vector for each word; the training data can also be in MIDI format, which is a standard protocol for encoding musical notes. After training, the program is fed a set of emotions as input. It identifies the associated input vectors and compares them to the training vectors, and the output is a melody or chords that represent the desired emotions.

Long short-term memory (LSTM) architectures are also used for music generation. They take structured input of a music notation, encode it as vectors, and feed it into an LSTM at each timestep; the LSTM then predicts the encoding of the next timestep (see the sketch below). Fully connected convolutional layers are utilized to increase the music quality and to represent rich features in the frequency domain. Magenta, the popular art and music project of Google, has launched Performance RNN, an LSTM-based recurrent neural network designed to produce multiple sounds with expressive timing and dynamics. In other words, Performance RNN determines which notes to play, when to play them, and how hard to strike each note. IBM's Watson Beat uses a neural network to produce complete tracks by understanding music theory, structure, and emotional intent. According to Richard Daskas, a music composer working on the Watson Beat project, "Watson only needs about 20 seconds of musical inspiration to create a song."
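Here is a minimal, hypothetical Keras sketch of that next-step prediction idea: an LSTM reads a window of one-hot encoded notes and outputs a distribution over the next note. The vocabulary size, window length, and the random dummy data are placeholders for real note sequences extracted from MIDI files.

```python
import numpy as np
import tensorflow as tf

n_notes = 88   # size of the note vocabulary (e.g. piano keys) - placeholder
seq_len = 32   # how many previous timesteps the model sees - placeholder

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len, n_notes)),
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dense(n_notes, activation="softmax"),  # distribution over the next note
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Dummy data standing in for one-hot encoded note sequences from MIDI files.
X = np.random.randint(0, 2, size=(1000, seq_len, n_notes)).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, n_notes, size=1000), n_notes)
model.fit(X, y, epochs=1, batch_size=64)

# Generation: predict the most likely next note for a seed sequence;
# repeating this step and appending the result yields a melody.
seed = X[:1]
next_note = np.argmax(model.predict(seed), axis=-1)
print(next_note)
```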
Transcribing music with deep learning

Deep learning methods can also be used to arrange a piece of music for a different instrument. LSTM networks are a popular choice for music transcription and modelling: they are trained on a large dataset of pre-labelled music transcriptions (expressed in ABC notation), which are then used to generate new transcriptions. Transformed audio data can also be used to predict the group of notes currently being played, by treating transcription as an image classification problem. For this, an image of the audio called a spectrogram is used. A spectrogram displays how the spectrum, or frequency content, changes over time, and is created with a Short-Time Fourier Transform (STFT) or a constant-Q transform. The spectrogram is then fed to a Convolutional Neural Network (CNN). The CNN estimates the current notes from the audio data and determines which specific notes are present by analysing 88 output nodes, one for each piano key. This network is generally trained using a large number of examples from MIDI files spanning several different genres of music. Magenta has developed the NSynth dataset, a high-quality multi-note dataset for music transcription; it is inspired by image recognition datasets and contains a huge collection of annotated musical notes.

Make better music recommendations

Neural nets are also used to make intelligent music recommendations, a step ahead of traditional collaborative filtering. Using neural networks, a system can analyse the songs saved by users and then use those songs to make new recommendations. Neural nets can also analyse songs based on musical qualities such as pitch, chord progression, and bass; using the similarities between songs sharing the same traits, they can detect and predict new songs, thus providing recommendations based on similar lyrical and musical styles. Convolutional neural networks (CNNs) are used here as well: a time-frequency representation of the audio signal is fed into the network as input, and three-second audio clips are randomly chosen from the audio samples to train it. The CNN then predicts latent factors from the audio by averaging the predictions over consecutive clips, while the feature extraction and pooling layers permit operation on several timescales. Spotify has worked on a music recommendation system with a CNN; trained on short clips of songs, it can create playlists based on audio content alone.

Classifying music according to genre

Classifying music by genre is another achievement of neural nets, and at the heart of this application lies the LSTM network. At the first stage, convolutional layers are used for feature extraction from the spectrograms of the audio file. The resulting sequence of features is given as input to an LSTM layer, which evaluates dependencies of the song across both short time periods and the long-term structure. After the LSTM, the input is fed into a fully connected, time-distributed layer, which essentially gives us a sequence of vectors. These vectors are then used to output the network's evaluation of the genre of the song at that particular point in time. Deepsound uses the GTZAN dataset and an LSTM network to create a model for music genre recognition; comparing the mean output distribution with the correct genre, the model achieves almost 67% accuracy. For musical pattern extraction, an MFCC feature dataset is used for audio analysis: the audio signal is extracted in MFCC format, the input song is converted into an MFCC map, and this map is then split up to be fed as input to the CNN. Supervised learning is used to automatically obtain musical pattern extractors, given that song labels are provided; the extractors so acquired are used to recover high-order pattern-related features. After classification, the results are combined and undergo a voting process to produce the song-level label. Scientists from Queen Mary University of London trained a neural net on over 6,000 ballad, hip-hop, and dance songs to develop a network that achieves almost 75% accuracy in song classification. A minimal sketch of such a genre classifier is shown below.
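Here is a minimal, illustrative Keras sketch of the convolution-plus-LSTM genre pipeline described above; the spectrogram dimensions and the number of genres are arbitrary placeholders, and this is not Deepsound's or Queen Mary's actual model.

```python
import tensorflow as tf

n_frames, n_mels, n_genres = 256, 128, 3  # spectrogram size and genre count - placeholders

# Convolutional feature extraction over the spectrogram, followed by an LSTM
# over the time axis and a softmax over genres.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_frames, n_mels, 1)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    # Collapse the frequency axis so each timestep becomes one feature vector.
    tf.keras.layers.Reshape((n_frames // 4, (n_mels // 4) * 64)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(n_genres, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```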
The road ahead

Neural networks have advanced the state of music to a whole new level, where one no longer requires physical instruments or vocals to compose music. The future will see more complex models and data representations that understand the underlying melodic structure, helping models create compelling artistic content on their own. The combination of music and technology will also foster a collaborative community of artists, coders, and deep learning researchers, leading to a tech-driven, yet artistic, future.

Handpicked for your weekend Reading – 8th Dec 2017

Aarthi Kumaraswamy
09 Dec 2017
2 min read
While you were away attending NIPS 2017 this week, a lot has been happening around you in the data science and machine learning space. No worries! Here is a brief roundup of the best of what we published on the DataHub this week for your weekend reading.

If you would like to share your insights and takeaways from NIPS with our readers on the DataHub, write to us at contributors@packtpub.com.

- NIPS 2017 Highlights - Part 1
- 3 great ways to leverage Structures for Machine Learning problems by Lise Getoor at NIPS 2017
- Top Research papers showcased at NIPS 2017 – Part 2
- Top Research papers showcased at NIPS 2017 – Part 1
Watch out for more in this area in the coming weeks.

Expert in Focus
- Kate Crawford, Principal Researcher at Microsoft Research and a Distinguished Research Professor at New York University, on 20 lessons on bias in machine learning systems, Keynote at NIPS 2017

3 Things that happened this week in Data Science News
- PyTorch 0.3.0 releases, ending stochastic functions
- DeepVariant: Using Artificial Intelligence in Human Genome Sequencing
- Amazon unveils Sagemaker: An end-to-end machine learning service
For a more comprehensive roundup of top news stories this week, check out our weekly news roundup post.

Get hands-on with these Tutorials
- Understanding Streaming Applications in Spark SQL
- Implementing Linear Regression Analysis with R
- What are Slowly Changing Dimensions (SCD) and why you need them in your Data Warehouse?

Do you agree with these Insights & Opinions?
- 4 popular algorithms for Distance-based outlier detection
- Stitch Fix: Full Stack Data Science and other winning strategies
- Admiring the many faces of Facial Recognition with Deep Learning
- One Shot Learning: Solution to your low data problem

Top 6 Java Machine Learning/Deep Learning frameworks you can’t miss

Kartikey Pandey
08 Dec 2017
4 min read
The data science tech market is buzzing with new and interesting Machine Learning libraries and tools almost every day. In such a rapidly growing market, it becomes difficult to choose the right tool or set of tools. More importantly, Artificial Intelligence and Deep Learning projects require a different approach than traditional programming, which makes it tricky to zero in on one library or framework. The choice of a framework is largely based on the type of problem one is expecting to solve, but there are other considerations too. Speed is one factor that will almost always play an important role in decision making. Other considerations could be how open-ended the framework is, its architecture, its functions, complexity of use, support for algorithms, and so on. Here we present six Java libraries you shouldn't miss for your next Deep Learning and Artificial Intelligence project, whether you are a Java loyalist or simply a web developer who wants to enter the world of deep learning.

DeepLearning4j (DL4J)

One of the first commercial-grade, and most popular, deep learning frameworks developed in Java. It also supports other JVM languages (Java, Clojure, Scala). What's interesting about DL4J is that it comes with built-in GPU support for the training process. It also supports Hadoop YARN for distributed application management. It is popular for solving problems related to image recognition, fraud detection, and NLP.

MALLET

MALLET (Machine Learning for Language Toolkit) is an open source Java machine learning toolkit. It supports NLP, clustering, modelling, and classification. The most important capability of MALLET is its support for a wide variety of algorithms such as Naive Bayes and Decision Trees. Another useful feature is its topic modelling toolkit; topic models are useful when analyzing large collections of unlabelled text.

Massive Online Analysis (MOA)

MOA is an open source data streaming and mining framework for real-time analytics. It has a strong and growing community, is similar and related to Weka, and has the ability to deal with massive data streams.

Encog

This framework supports a wide array of algorithms and neural networks, such as artificial neural networks, Bayesian networks, and genetic programming.

Neuroph

Neuroph, as the name suggests, offers great simplicity when working on neural networks. The main USP of Neuroph is its incredibly useful GUI (Graphical User Interface) tool that helps in creating and training neural networks. Neuroph is a good choice of framework when you have a quick project on hand and you don't want to spend hours learning the theory; it helps you get up and running quickly when putting neural networks to work for your project.

Java Machine Learning Library

The Java Machine Learning Library offers a great set of reference implementations of algorithms that you can't miss for your next machine learning project. Some of the key highlights are support vector machines and clustering algorithms.

These are a few key frameworks and tools you might want to consider for your next research work. The Java ML ecosystem is vast, with many more tools and libraries, and we have just touched the tip of that iceberg in this article. One particular tool that deserves an honourable mention is the Environment for Developing KDD-Applications Supported by Index-Structure (ELKI). It is designed particularly with researchers and research students in mind.
The main focus of ELKI is its broad coverage of data mining algorithms, which makes it a natural fit for research work. What is really important while choosing any of the above, or tools outside this list, is a good understanding of the requirements and the problems you intend to solve. To reiterate, some of the key considerations to bear in mind before zeroing in on a tool are support for algorithms, implementation of neural networks, dataset size (small, medium, large), and speed.

Admiring the many faces of Facial Recognition with Deep Learning

Sugandha Lahoti
07 Dec 2017
7 min read
Facial recognition technology is not new. In fact, it has been around for more than a decade. However, with the recent rise of artificial intelligence and deep learning, facial technology has achieved new heights. In addition to facial detection, modern facial recognition technology also recognizes faces with high accuracy and in unfavorable conditions. It can also recognize expressions and analyze faces to generate insights about an individual. Deep learning has enabled a power-packed face recognition system, all geared up to achieve widespread adoption.

How has deep learning modernised facial recognition?

Traditional facial recognition algorithms would recognize images and people using distinct facial features (placement of the eyes, eye color, nose shape, and so on). However, they failed to identify faces correctly under different lighting or with slight changes in appearance (beard growth, aging, or a change in pose). For facial recognition techniques that cope with a dynamic, ever-changing face, deep learning is proving to be a game changer. Deep neural nets go beyond manual feature extraction: these AI-based neural networks rely on image pixels to analyze the features of a particular face, so they scan faces irrespective of lighting, ageing, pose, or emotion. Deep learning algorithms remember each time they recognize or fail to recognize a face, avoiding repeat mistakes and getting better with each attempt. Deep learning algorithms can also be helpful in converting 2D images to 3D.

Facial recognition in practice: Facial Recognition Technology in Multimedia

Deep learning enabled facial recognition technologies can be used to track audience reactions and measure different levels of emotion. Essentially, they can predict how a member of the audience will react to the rest of a film, and can also help determine what percentage of users will be interested in a particular movie genre. For example, Microsoft's Azure Emotion, an emotion API, detects emotions by analysing facial expressions in image or video content over time. Caltech and Disney have collaborated to develop a neural network which can track facial expressions: their deep learning based Factorised Variational Autoencoders (FVAEs) analyze the facial expressions of audience members for about 10 minutes and then predict how they will react to the rest of the film. These techniques help in estimating whether viewers are giving the expected reactions in the right places; for example, a viewer is not expected to yawn at a comical scene. With this, Disney can also predict the earning potential of a particular movie and generate insights that may help producers create compelling movie trailers to maximize footfall. Smart TVs are also equipped with sophisticated cameras and deep learning algorithms for facial recognition. The British Broadcasting Corporation uses facial recognition technology built by CrowdEmotion: by tracking the faces of almost 4,500 audience members watching show trailers, they gauge exact customer emotions about a particular programme, which in turn helps them generate insights to showcase successful commercials.

Biometrics in Smartphones

A large number of smartphones nowadays are equipped with biometric capabilities. Facial recognition in smartphones is not only used as a means of unlocking and authorizing, but also for making secure transactions and payments. There has been a rise in chips with built-in deep learning ability, and these chips are embedded into smartphones. With a neural net embedded inside the device, crucial face biometric data never leaves the device or gets sent to the cloud, which in turn improves privacy and reduces latency. Some real-world examples include Intel's Nervana Neural Network Processor, Google's TPU, Microsoft's FPGA, and Nvidia's Tesla V100. Deep learning models embedded in a smartphone can construct a mathematical model of the face, which is then stored in a database; using this mathematical face model, smartphones can easily recognize users even as their face ages or when it is obstructed by wearable accessories. Apple has recently launched the iPhone X facial recognition system termed FaceID. It maps thousands of points on a user's face using a projector and an infrared camera (which can operate under varied lighting conditions). This map is then passed to a bionic chip embedded in the smartphone; the chip has a neural network which constructs a mathematical model of the user's face, used for biometric face verification and recognition. Windows Hello is another facial recognition technology, used to unlock Windows smart devices equipped with infrared cameras. Qualcomm, a mobile technology organization, is working on a new depth-perception technology that will include an image signal processor and high-resolution 3D depth-sensing cameras for facial recognition.
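Under the hood, verification of this kind typically reduces to comparing face embeddings. Here is a minimal, hypothetical sketch of that final comparison step, assuming some upstream model has already mapped each face image to a fixed-length embedding vector; the 128-dimensional vectors and the 0.6 threshold are arbitrary placeholders.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_embedding, enrolled_embedding, threshold=0.6):
    """Accept the user if the new face embedding is close enough
    to the embedding stored at enrolment time."""
    return cosine_similarity(probe_embedding, enrolled_embedding) >= threshold

# Example with stand-in 128-dimensional embeddings.
enrolled = np.random.rand(128)
probe = enrolled + 0.05 * np.random.rand(128)  # slightly changed face (lighting, pose)
print(verify(probe, enrolled))                 # True: same person despite small changes
```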
Face recognition for travel

Facial recognition technologies can smooth the departure process for customers by eliminating the need for a boarding pass: travellers are scanned by cameras installed at various checkpoints, so they don't have to produce a boarding pass at every step. Emirates is collaborating with Dubai Customs, Police, and Airports to use a facial recognition solution integrated with the UAE Wallet app. The project, known as the Together Initiative, allows travellers to register and store their biometric facial data at several kiosks placed in the check-in area. This facility helps passengers avoid presenting their physical documents at every touchpoint. Face recognition can also be used to detect illegal immigration: the technology compares photos of passengers taken immediately before boarding with the photos provided in their visa applications. Biometric Exit, an initiative of the US government, uses facial recognition to identify individuals leaving the country. Facial recognition technology can also be used at train stations to reduce the waiting time for buying a train ticket or passing through other security barriers. Bristol Robotics Laboratory has developed software which uses infrared cameras to identify passengers as they walk onto the train platform, so they do not need to carry tickets.

Retail and shopping

In retail, smart facial recognition technologies can speed up checkout by keeping track of each customer as they shop across a store. This technology can also use machine learning and analytics to find trends in a shopper's purchasing behaviour over time and devise personalized recommendations. Facial video analytics and deep learning algorithms can also identify loyal and VIP shoppers in a moving crowd, giving them a privileged VIP experience.
This gives them more reasons to come back and make repeat purchases. Facial biometrics can also accumulate rich statistics about the demographics (age, gender, shopping history) of an individual, and analysing these statistics generates insights that help organizations develop their products and marketing strategies. FindFace is one such platform that uses sophisticated deep learning technologies to generate meaningful data about the shopper. Its facial recognition system can verify faces with almost 99% accuracy and can route shopper data to a salesperson for personalized assistance. Facial recognition technology can also be used to make secure payment transactions simply by analysing a person's face: Alibaba has set up a Smile to Pay face recognition system in KFC outlets that allows customers to make secure payments by merely scanning their face.

Facial recognition has emerged as a hot topic of interest and is poised to grow. On the flip side, organizations deploying such technology should adopt privacy policies as a standard measure. Data collected by facial recognition software could be misused for targeting customers with ads or for other illegal purposes. Organizations should implement a methodical and systematic approach to using facial recognition for the benefit of their customers. This will not only help businesses generate a new source of revenue, but will also usher in a new era of judicious automation.