
Tech Guides

Why learn machine learning as a non-techie?

Natasha Mathur
11 Sep 2018
9 min read
"...what we want is a machine that can learn from experience..." ~ Alan Turing, 1947

Thanks to artificial intelligence, Turing's vision is coming true. Machines are learning from others' experience (using training datasets) and from their own as well. Machines can now play chess, Go, and other games; they can help predict cancer, manage your day, summarize today's news for you, edit your essays, identify your face, and even mimic dance moves and facial expressions.

Come to think of it, every job role and career demands that you learn from experience, improve over time, and explore new ways to do things. Machines are very effective at the first two, but humans still have an edge when it comes to innovative thinking. Imagine what you could achieve if you put together your mind with that of an efficient learning algorithm!

You might think that artificial intelligence and machine learning form a dense and impenetrable field limited to research labs and textbooks. Does that mean only software engineers and researchers can dream of making it into this fascinating field? Not quite. We'll unpick machine learning in the following sections and present our case for why it makes sense for everyone to understand this field better. Machine learning is, potentially, a first-class ticket to an exciting career, whether you are starting off fresh from college or are considering a career switch.

Beyond the artificial intelligence and machine learning hype

Artificial intelligence is simply an area of computing that solves complex real-world problems. Yes, research still happens in universities, and yes, data scientists are still exploring the limits of artificial intelligence in forward-thinking businesses, but it's much more than that. AI is so pervasive - and mysterious - that its applications hide in plain sight. Look around you carefully. From Netflix recommending personalized content to its 130 million viewers, to YouTube's video search and automatic captions, to Amazon's shopping recommendations, to Instagram hashtags, Snapchat filters, spam filters on your Gmail, and virtual assistants like Siri on our smartphones, artificial intelligence and machine learning techniques are in action everywhere. This means that, as a user, you are already impacted by algorithms every day. The question, then, is whether you will be the person whose career is limited by algorithms or the one whose career is propelled by them.

Why get into artificial intelligence development as a non-programmer?

Artificial intelligence offers a potent blend of interesting work, high salaries, and some really great opportunities. A non-programming background does not have to deter your growth in the AI field. In fact, it can give you an edge over traditional software developers and data scientists in terms of domain awareness: a better understanding of what the system should do, what it should look for, and how it should make users feel. Below are some reasons why you should make the jump into AI.

Machine learning can help you be better at your current job

How, you may ask? Take a news reporter or editor's job, for example. They must possess a blend of research and analysis capabilities, a creative set of skills, and the speed to come up with timely, quality articles on topics of interest to their readers. A data journalist, or a writer with machine learning experience, could quickly find great topics to write on with the help of machine learning based web-scraping apps.
They could also let the data lead them to unique stories that are emerging before traditional news reporters find their way to them. They could get a quick summary of multiple perspectives on a given topic using custom-built news feed algorithms. They could then find further research resources by tweaking their search parameters, even adding quality filters on top to only allow high-quality citations. This kind of writer has cut down on the time they spend finding and understanding topics - which means more time to actually write compelling pieces and to connect with real sources for further insight. Algorithms can also find and correct language issues in writing now, which means editors can spend more time improving content quality from a scope perspective. You can quickly start to see how artificial intelligence can complement the work you do and help you grow in your career. Yes, all this sounds lovely in theory, but is it really happening in practice?

There are others like you who are successfully exploring machine learning

Don't believe me? Mason Fish, a software engineer at Docker, Inc., was earlier a musician. He did his bachelor's and master's at two different music conservatories. After graduating, he worked for five years as a professional musician. But today he helps build and maintain services for Docker, a tool used by software engineers all over the world! This is just one case of a non-programmer diving into the computer science world. When musicians can learn to code and get core developer jobs in cutting-edge tech companies, it is not far-fetched to say they can also learn to build machine learning models. Below are some examples of non-programmers of varied experience levels who are exploring the machine learning world.

Per Harald Borgen, an economics graduate, was able to boost sales at his workplace, Xeneta, using machine learning algorithms, an accomplishment that helped accelerate his career. You can read his blog to see how he transformed from a machine learning newbie into a seasoned practitioner. Another example is 14-year-old Tanmay Bakshi, who started a YouTube channel at just seven years of age, where he teaches coding, algorithms, AI, and machine learning concepts. Similarly, Sean Le Van created an AI chatbot when he was 14 years old using ML algorithms.

Rosebud Anwuri is another great example, as she switched from chemical engineering to data science. "My first exposure to Data Science was from a book that had nothing to do with Data Science," writes Anwuri on her blog. She created her first data science learning path from an answer on Quora last year. Fast forward to this year: she has been invited to speak at Stanford's Women in Data Science Conference in Nigeria and has facilitated a workshop at Women in Machine Learning and Data Science, among others. She also writes on machine learning and data science on her blog.

Like Anwuri, Sce Pike dreamed of being an artist or singer in college and majored in fine arts and anthropology. Pike went from art to web design to "human factors design," which involves human-machine interactions, for the telecommunications giant Qualcomm. In addition to that, Pike started her own company, IOTAS, which offers smart-home services to renters and homeowners. "I have had to approach my work with logic, research, and great design. Looking back, I'm amazed where I am now," says Pike.
Read also: Data science for non-techies: How I got started (Part 1)

Adapt or perish in the oncoming job automation wave of the fourth industrial revolution

OK, so maybe you're happy with how you are growing in your career anyway. Be warned, though: your job may not look the same even in the next few years. Automation is expected to replace up to 30% of jobs in the next 10 years, so upskilling to machine learning is a wise choice. Last month, the Bank of England's chief economist warned that 15 million jobs in Britain could be at stake because of artificial intelligence. Machine learning as a skill could help you stay relevant in the future and prepare for what's being called "the third machine age".

You can develop machine learning apps with no to minimal coding experience

Thanks to great advancements by big tech companies and open source projects, machine learning today is accessible to people with varying degrees of programming experience - from new developers to those who have never written a line of code in their life. So, whether you're a curious web/UX designer, a news reporter, an artist, a school student, a filmmaker, or an NGO worker, you will find good use for machine learning in your field. There are machine learning tools for users with varying levels of experience, and there are certain machine learning applications that you can build even today. Some examples are image and text classification with neural networks, facial recognition, gaming bots, music generation, and object detection.

Machine learning skills are highly rewarded

Machine learning is a nascent field where demand far outweighs supply. According to research done by Indeed.com, the number one job requirement in AI is that of a machine learning engineer, with data scientist jobs taking the second spot. In fact, AI researchers can earn more than a million dollars per year, and the AI researchers at Elon Musk's OpenAI are living proof of this. OpenAI paid its top AI researcher, Ilya Sutskever, more than $1.9 million back in 2016. Another leading researcher at OpenAI, Ian Goodfellow, was paid more than $800,000.

Machine learning is not hard to learn. It might seem intimidating at first, but once you get the basics right, the rest of the ML journey becomes easier. If you're convinced that ML is for you but are confused about how to get started, don't worry, we've got you covered. To help you get started, here is a non-programmer's guide to learning machine learning.

So yes, it doesn't matter if you're a non-programmer, a musician, a librarian, or a student; the future is AI-driven, so don't be afraid to take that dive into machine learning. As Robert Frost wrote, "Two roads diverged in a wood, and I took the one less traveled by, And that has made all the difference."

8 Machine learning best practices [Tutorial]
Google introduces Machine Learning courses for AI beginners
Top languages for Artificial Intelligence development

What are generative adversarial networks (GANs) and how do they work? [Video]

Richard Gall
11 Sep 2018
3 min read
Generative adversarial networks, or GANs, are a powerful type of neural network used for unsupervised machine learning. Made up of two models that run in competition with one another, GANs are able to capture and copy variations within a dataset. They're great for image manipulation and generation, but they can also be deployed for tasks like understanding risk and recovery in healthcare and pharmacology.

GANs are actually pretty new - they were first introduced by Ian Goodfellow in 2014. Goodfellow developed them to tackle some of the issues with similar neural networks, including the Boltzmann machine and autoencoders, both of which rely on Markov chains that carry a pretty high computational cost. GANs were designed to sidestep that cost, and the resulting efficiency gives engineers significant gains - which you need if you're working at the cutting edge of artificial intelligence.

How do Generative Adversarial Networks work?

Let's start with a simple analogy. You have a painting - say the Mona Lisa - and there is a master forger who wants to create a duplicate painting. The forger does this by learning how the original painter - Leonardo da Vinci - produced the painting. Meanwhile, an investigator is trying to catch the forger and 'second guess' the rules the forger is learning. To map this onto the architecture of a GAN, the forger is the generator network, which learns the distribution of classes, while the investigator is the discriminator network, which learns the boundaries between those classes - the formal 'shape' of the dataset.
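To make that generator/discriminator loop concrete, here is a deliberately tiny, self-contained Java sketch - not from the video, with an invented toy distribution and learning rate. A linear "generator" learns to place its samples where a logistic "discriminator" can no longer tell them apart from data drawn from a normal distribution:

import java.util.Random;

// Toy 1-D "GAN": generator G(z) = a*z + b versus discriminator D(x) = sigmoid(w*x + c).
// Gradients are derived by hand; real GANs use neural networks and an ML framework.
public class ToyGan {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double a = 0.1, b = 0.0;   // generator parameters
        double w = 0.0, c = 0.0;   // discriminator parameters
        double lr = 0.01;          // learning rate (chosen arbitrarily for the toy)

        for (int step = 0; step < 50000; step++) {
            double real = 4.0 + 1.5 * rng.nextGaussian(); // sample from the "real" data, N(4, 1.5)
            double z = rng.nextGaussian();                 // noise input to the generator
            double fake = a * z + b;                       // generator's forgery

            // Discriminator update: push D(real) toward 1 and D(fake) toward 0 (gradient ascent)
            double dr = sigmoid(w * real + c);
            double df = sigmoid(w * fake + c);
            w += lr * ((1 - dr) * real - df * fake);
            c += lr * ((1 - dr) - df);

            // Generator update (non-saturating loss): push D(fake) toward 1
            df = sigmoid(w * fake + c);
            a += lr * (1 - df) * w * z;
            b += lr * (1 - df) * w;
        }
        // b should drift toward the real mean (4); with models this weak, the spread can
        // collapse - a toy version of the mode collapse that real GANs also suffer from.
        System.out.printf("Generator learned G(z) = %.2f * z + %.2f (real data ~ N(4, 1.5))%n", a, b);
    }
}

The point is only the alternating adversarial updates: each network improves against the other's current best effort, exactly like the forger and the investigator.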
Applications of GANs

Generative adversarial networks are used for a number of different applications. One of the best examples is a Google Brain project back in 2016, in which researchers used GANs to develop a method of encryption. This project used three neural networks - Alice, Bob, and Eve. Alice's job was to send an encrypted message to Bob. Bob's job was to decode that message, while Eve's job was to intercept it. To begin with, Alice's messages were easily intercepted by Eve. However, thanks to Eve's adversarial work, Alice began to develop its own encryption strategy - it took 15,000 runs for Alice to successfully encrypt a message that Bob could decipher but Eve couldn't intercept. Elsewhere, GANs are also being used in fields such as drug research. The neural networks can be trained on existing drugs and suggest new synthetic chemical structures that improve on drugs that already exist.

Generative adversarial networks: the cutting edge of artificial intelligence

As we've seen, GANs offer some really exciting opportunities in artificial intelligence. There are two key advantages to remember: GANs solve the problem of generating data when you don't have enough to begin with, and they require no human supervision. This is crucial when you think about the cutting edge of artificial intelligence, both in terms of the efficiency of running the models and the real-world data we want to use - which could be poor quality or have privacy and confidentiality issues, as much healthcare data does.

What the EU Copyright Directive means for developers - and what you can do

Richard Gall
11 Sep 2018
6 min read
Tomorrow, on Wednesday 12 September, the European Parliament will vote on amendments to the EU Copyright Bill, first proposed back in September 2016. This bill could have a huge impact on open source, software engineering, and even the future of the internet. Back in July, MEPs voted down a digital copyright bill that was incredibly restrictive. It asserted the rights of large media organizations to tightly control links to their stories and demanded copyright filters on user-generated content.

https://twitter.com/EFF/status/1014815462155153408

The vote tomorrow is an opportunity to amend aspects of the directive - which means many of the elements that were rejected in July could still find their way through.

What parts of the EU copyright directive are most important for software developers?

There are some positive aspects of the directive. To a certain extent, it could be seen as evidence of the European Union continuing a broader project to protect citizens by updating digital legislation - a move that GDPR began back in May 2018. However, there are many unintended consequences of the legislation. It's unclear whether the negative impact is down to any level of malicious intent from lawmakers, or is simply reflective of a significant level of ignorance about how the web and software work. There are three articles within the directive that developers need to pay particular attention to.

Article 13 of the EU copyright directive: copyright filters

Article 13 of the directive has perhaps had the most attention. Essentially, it will require "information society service providers" - user-generated information and content platforms - to use "recognition technologies" to protect against copyright infringement. This could have a severe impact on sites like GitHub, and by extension, the very philosophy of open collaboration and sharing on which they're built. It's for this reason that GitHub has played a big part in educating Brussels lawmakers about the possible consequences of the legislation. Last week, the platform hosted an event to discuss what can be done about tomorrow's vote. In it, Marten Mickos, CEO of the cybersecurity company HackerOne, gave a keynote speech, saying that "Article 13 is just crap. It will benefit nobody but the richest, the wealthiest, the biggest - those that can spend tens of millions or hundreds of millions on building some amazing filters that will somehow know whether something is copyrighted or not."

https://youtu.be/Sm_p3sf9kq4

A number of MEPs in Brussels have, fortunately, proposed changes that would exclude software development platforms and instead focus the legislation on sites where users upload music and video. However, for those who believe strongly in an open internet, even these amendments would be a compromise that not only places an unnecessary burden on small sites that simply couldn't build functional copyright filters, but also opens a door to censorship online. A better alternative could be to ditch copyright filters and opt for licensing agreements instead. This is something put forward by German politician Julia Reda - if you're interested in the policy amendments, you can read them in detail here.

[Image via commons.wikimedia.org]

Julia Reda is a member of the Pirate Party in Germany. She's a vocal advocate of internet freedoms and an important voice in the fight against much of the directive (she wants it to be dropped in its entirety).
She's put together a complete list of amendments and alternatives here. Article 11 of the EU Copyright Directive: the "link tax" Article 11 follows the same spirit of article 13 of the bill. It gives large press organizations more control over how their content is shared and linked to online. It has been called the "link tax" - it could mean that you would need a license to link to content. According to news sites, this law would allow them to charge internet giants like Facebook and Google that link to their content. As Cory Doctorow points out in an article written for Motherboard in June, only smaller platforms would lose out - the likes of Facebook and Google could easily manage the cost. But there are other problems with article 11. It could, not only, as Doctorow also writes, "crush scholarly and encyclopedic projects like Wikipedia that only publish material that can be freely shared," but it could also "inhibit political discussions". This is because the 'link tax' will essentially allow large media organizations to fully control how and where their content is shared. "Links are facts" Doctorow argues, meaning that links are a vital component within public discourse, which allows the public to know who thinks what, and who said what. Article 3 of the EU Copyright Directive: restrictions on data mining Article 3 of the directive hasn't received as much attention as the two above, but it does nevertheless have important implications for the data mining and analytics landscape. Essentially, this proportion of the directive was originally aimed at posing restrictions on the data that can be mined for insights except in specific cases of scientific research. This was rejected by MEPs. However, it is still an area of fierce debate. Those that oppose it argue that restrictions on text and data mining could seriously hamper innovation and hold back many startups for whom data is central to the way they operate. However, given the relative success of GDPR in restoring some level of integrity to data (from a citizen's perspective), there are aspects of this article that might be worth building on as a basis for a compromise. With trust in a tech world at an all time low, this could be a stepping stone to a more transparent and harmonious digital domain. An open internet is worth fighting for - we all depend on it The difficulty unpicking the directive is that it's not immediately clear who its defending. On the one hand, EU legislators will see this as something that defends citizens from everything that they think is wrong with the digital world (and, let's be honest, there are things that are wrong with it). Equally, those organizations lobbying for the change will, as already mentioned, want to present this as a chance to knock back tech corporations that have had it easy for too long. Ultimately, though, the intention doesn't really matter. What really matters are the consequences of this legislation, which could well be catastrophic. The important thing is that the conversation isn't owned by well-intentioned law makers that don't really understand what's at stake, or media conglomerates with their own interests in protecting their content from the perceived 'excesses' of a digital world whose creativity is mistaken for hostility. If you're an EU citizen, get in touch with your MEP today. Visit saveyourinternet.eu to help the campaign. 
Read next:
German OpenStreetMap protest against "Article 13" EU copyright reform making their map unusable
YouTube's CBO speaks out against Article 13 of EU's controversial copyright law

6 most commonly used Java machine learning libraries

Fatema Patrawala
10 Sep 2018
15 min read
There are over 70 Java-based open source machine learning projects listed on the MLOSS.org website, and probably many more unlisted projects live on university servers, GitHub, or Bitbucket. In this article, we will review the major machine learning libraries and platforms in Java, the kinds of problems they can solve, the algorithms they support, and the kinds of data they can work with. This article is an excerpt taken from Machine Learning in Java, written by Bostjan Kaluza and published by Packt Publishing Ltd.

Weka

Weka, which is short for Waikato Environment for Knowledge Analysis, is a machine learning library developed at the University of Waikato, New Zealand, and is probably the most well-known Java library. It is a general-purpose library that is able to solve a wide variety of machine learning tasks, such as classification, regression, and clustering. It features a rich graphical user interface, a command-line interface, and a Java API. You can check out Weka at http://www.cs.waikato.ac.nz/ml/weka/.

At the time of writing this book, Weka contains 267 algorithms in total: data pre-processing (82), attribute selection (33), classification and regression (133), clustering (12), and association rules mining (7). The graphical interfaces are well-suited for exploring your data, while the Java API allows you to develop new machine learning schemes and use the algorithms in your applications. Weka is distributed under the GNU General Public License (GNU GPL), which means that you can copy, distribute, and modify it as long as you track changes in source files and keep it under GNU GPL. You can even distribute it commercially, but you must disclose the source code or obtain a commercial license.

In addition to several supported file formats, Weka features its own default data format, ARFF, which describes data by attribute-data pairs. It consists of two parts. The first part contains a header, which specifies all the attributes (that is, features) and their types, for instance, nominal, numeric, date, and string. The second part contains the data, where each line corresponds to an instance. The last attribute in the header is implicitly considered the target variable, and missing data are marked with a question mark. For example, the Bob instance written in ARFF file format would be as follows:

@RELATION person_dataset

@ATTRIBUTE `Name` STRING
@ATTRIBUTE `Height` NUMERIC
@ATTRIBUTE `Eye color` {blue, brown, green}
@ATTRIBUTE `Hobbies` STRING

@DATA
'Bob', 185.0, blue, 'climbing, sky diving'
'Anna', 163.0, brown, 'reading'
'Jane', 168.0, ?, ?

The file consists of three sections. The first section starts with the @RELATION <String> keyword, specifying the dataset name. The next section starts with the @ATTRIBUTE keyword, followed by the attribute name and type. The available types are STRING, NUMERIC, DATE, and a set of categorical values. The last attribute is implicitly assumed to be the target variable that we want to predict. The last section starts with the @DATA keyword, followed by one instance per line. Instance values are separated by commas and must follow the same order as the attributes in the second section.

Weka's Java API is organized into the following top-level packages:

weka.associations: These are data structures and algorithms for association rules learning, including Apriori, predictive apriori, FilteredAssociator, FP-Growth, Generalized Sequential Patterns (GSP), Hotspot, and Tertius.
weka.classifiers: These are supervised learning algorithms, evaluators, and data structures. The package is further split into the following components:
weka.classifiers.bayes: This implements Bayesian methods, including naive Bayes, Bayes net, Bayesian logistic regression, and so on
weka.classifiers.evaluation: These are supervised evaluation algorithms for nominal and numerical prediction, such as evaluation statistics, confusion matrix, ROC curve, and so on
weka.classifiers.functions: These are regression algorithms, including linear regression, isotonic regression, Gaussian processes, support vector machine, multilayer perceptron, voted perceptron, and others
weka.classifiers.lazy: These are instance-based algorithms such as k-nearest neighbors, K*, and lazy Bayesian rules
weka.classifiers.meta: These are supervised learning meta-algorithms, including AdaBoost, bagging, additive regression, random committee, and so on
weka.classifiers.mi: These are multiple-instance learning algorithms, such as citation k-nn, diverse density, MI AdaBoost, and others
weka.classifiers.rules: These are decision tables and decision rules based on the separate-and-conquer approach, Ripper, Part, Prism, and so on
weka.classifiers.trees: These are various decision tree algorithms, including ID3, C4.5, M5, functional tree, logistic tree, random forest, and so on
weka.clusterers: These are clustering algorithms, including k-means, Clope, Cobweb, DBSCAN, hierarchical clustering, and FarthestFirst.
weka.core: These are various utility classes, data presentations, configuration files, and so on.
weka.datagenerators: These are data generators for classification, regression, and clustering algorithms.
weka.estimators: These are various data distribution estimators for discrete/nominal domains, conditional probability estimations, and so on.
weka.experiment: These are a set of classes supporting the necessary configuration, datasets, model setups, and statistics to run experiments.
weka.filters: These are attribute-based and instance-based selection algorithms for both supervised and unsupervised data preprocessing.
weka.gui: These are graphical interfaces implementing the Explorer, Experimenter, and Knowledge Flow applications. Explorer allows you to investigate datasets and algorithms, as well as their parameters, and to visualize datasets with scatter plots and other visualizations. Experimenter is used to design batches of experiments, but it can only be used for classification and regression problems. Knowledge Flow implements a visual drag-and-drop user interface to build data flows, for example: load data, apply filter, build classifier, and evaluate.
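As a small taste of the Java API, here is a minimal sketch (not from the book) that loads an ARFF file and cross-validates a J48 decision tree, Weka's C4.5 implementation. The file name is hypothetical; note that J48 expects nominal/numeric attributes, so something like Weka's bundled weather dataset is assumed rather than the string-valued person dataset above:

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class WekaSketch {
    public static void main(String[] args) throws Exception {
        // Load an ARFF file (hypothetical path; any nominal/numeric dataset works)
        DataSource source = new DataSource("weather.nominal.arff");
        Instances data = source.getDataSet();
        // Weka does not pick a class attribute automatically; use the last one
        data.setClassIndex(data.numAttributes() - 1);

        // Train a J48 decision tree with default parameters
        J48 tree = new J48();
        tree.buildClassifier(data);

        // Estimate performance with 10-fold cross-validation
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}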
Java-ML for machine learning

The Java machine learning library, or Java-ML, is a collection of machine learning algorithms with a common interface for algorithms of the same type. It only features a Java API and is therefore primarily aimed at software engineers and programmers. Java-ML contains algorithms for data preprocessing, feature selection, classification, and clustering. In addition, it features several Weka bridges to access Weka's algorithms directly through the Java-ML API. It can be downloaded from http://java-ml.sourceforge.net; the latest release was in 2012 (at the time of writing this book).

Java-ML is also a general-purpose machine learning library. Compared to Weka, it offers more consistent interfaces and implementations of recent algorithms that are not present in other packages, such as an extensive set of state-of-the-art similarity measures and feature-selection techniques, for example, dynamic time warping, random forest attribute evaluation, and so on. Java-ML is also available under the GNU GPL license. Java-ML supports any type of file as long as it contains one data sample per line and the features are separated by a symbol such as a comma, semicolon, or tab.

The library is organized around the following top-level packages:

net.sf.javaml.classification: These are classification algorithms, including naive Bayes, random forests, bagging, self-organizing maps, k-nearest neighbors, and so on
net.sf.javaml.clustering: These are clustering algorithms such as k-means, self-organizing maps, spatial clustering, Cobweb, AQBC, and others
net.sf.javaml.core: These are classes representing instances and datasets
net.sf.javaml.distance: These are algorithms that measure instance distance and similarity, for example, Chebyshev distance, cosine distance/similarity, Euclidean distance, Jaccard distance/similarity, Mahalanobis distance, Manhattan distance, Minkowski distance, Pearson correlation coefficient, Spearman's footrule distance, dynamic time warping (DTW), and so on
net.sf.javaml.featureselection: These are algorithms for feature evaluation, scoring, selection, and ranking, for instance, gain ratio, ReliefF, Kullback-Leibler divergence, symmetrical uncertainty, and so on
net.sf.javaml.filter: These are methods for manipulating instances by filtering, removing attributes, setting classes or attribute values, and so on
net.sf.javaml.matrix: This implements in-memory or file-based arrays
net.sf.javaml.sampling: This implements sampling algorithms to select a subset of a dataset
net.sf.javaml.tools: These are utility methods for dataset and instance manipulation, serialization, the Weka API interface, and so on
net.sf.javaml.utils: These are utility methods for algorithms, for example, statistics, math methods, contingency tables, and others

Apache Mahout

The Apache Mahout project aims to build a scalable machine learning library. It is built atop scalable, distributed architectures such as Hadoop, using the MapReduce paradigm, an approach for processing and generating large datasets with a parallel, distributed algorithm using a cluster of servers. Mahout features a console interface and a Java API for scalable algorithms for clustering, classification, and collaborative filtering. It is able to solve three business problems: item recommendation, for example, recommending items such as "people who liked this movie also liked..."; clustering, for example, grouping text documents into sets of topically-related documents; and classification, for example, learning which topic to assign to an unlabeled document. Mahout is distributed under a commercially-friendly Apache License, which means that you can use it as long as you keep the Apache license included and display it in your program's copyright notice.
Mahout features the following libraries:

org.apache.mahout.cf.taste: These are collaborative filtering algorithms based on user-based and item-based collaborative filtering and matrix factorization with ALS
org.apache.mahout.classifier: These are in-memory and distributed implementations, including logistic regression, naive Bayes, random forest, hidden Markov models (HMM), and multilayer perceptron
org.apache.mahout.clustering: These are clustering algorithms such as canopy clustering, k-means, fuzzy k-means, streaming k-means, and spectral clustering
org.apache.mahout.common: These are utility methods for algorithms, including distances, MapReduce operations, iterators, and so on
org.apache.mahout.driver: This implements a general-purpose driver to run the main methods of other classes
org.apache.mahout.ep: This is evolutionary optimization using the recorded-step mutation
org.apache.mahout.math: These are various math utility methods and implementations in Hadoop
org.apache.mahout.vectorizer: These are classes for data presentation, manipulation, and MapReduce jobs

Apache Spark

Apache Spark, or simply Spark, is a platform for large-scale data processing built atop Hadoop, but, in contrast to Mahout, it is not tied to the MapReduce paradigm. Instead, it uses in-memory caches to extract a working set of data, process it, and repeat the query. This is reported to be up to ten times as fast as a Mahout implementation that works directly with disk-stored data. It can be grabbed from https://spark.apache.org.

There are many modules built atop Spark, for instance, GraphX for graph processing, Spark Streaming for processing real-time data streams, and MLlib, a machine learning library featuring classification, regression, collaborative filtering, clustering, dimensionality reduction, and optimization. Spark's MLlib can use a Hadoop-based data source, for example, the Hadoop Distributed File System (HDFS) or HBase, as well as local files. The supported data types include the following:

Local vector: This is stored on a single machine. A dense vector is presented as an array of double-typed values, for example, (2.0, 0.0, 1.0, 0.0), while a sparse vector is presented by the size of the vector, an array of indices, and an array of values, for example, [4, (0, 2), (2.0, 1.0)].
Labeled point: This is used for supervised learning algorithms and consists of a local vector labeled with a double-typed class value. The label can be a class index, a binary outcome, or a list of multiple class indices (multiclass classification). For example, a labeled dense vector is presented as [1.0, (2.0, 0.0, 1.0, 0.0)].
Local matrix: This stores a dense matrix on a single machine. It is defined by the matrix dimensions and a single double array arranged in column-major order.
Distributed matrix: This operates on data stored in Spark's Resilient Distributed Dataset (RDD), which represents a collection of elements that can be operated on in parallel. There are three presentations: row matrix, where each row is a local vector that can be stored on a single machine and row indices are meaningless; indexed row matrix, which is similar to a row matrix, but the row indices are meaningful, that is, rows can be identified and joins can be executed; and coordinate matrix, which is used when a row cannot be stored on a single machine and the matrix is very sparse.
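To make these data types concrete, here is a minimal sketch (not from the book) that builds the dense vector above, its sparse equivalent, and a labeled point using the org.apache.spark.mllib API:

import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.regression.LabeledPoint;

public class MllibDataTypes {
    public static void main(String[] args) {
        // Dense vector: every value stored explicitly
        Vector dense = Vectors.dense(2.0, 0.0, 1.0, 0.0);
        // The same vector in sparse form: size, indices of nonzeros, nonzero values
        Vector sparse = Vectors.sparse(4, new int[]{0, 2}, new double[]{2.0, 1.0});
        // A labeled point for supervised learning: class label 1.0 plus the features
        LabeledPoint example = new LabeledPoint(1.0, dense);
        System.out.println(example);
        System.out.println(sparse);
    }
}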
Spark's MLlib API library provides interfaces to various learning algorithms and utilities as outlined in the following list: org.apache.spark.mllib.classification: These are binary and multiclass classification algorithms, including linear SVMs, logistic regression, decision trees, and naive Bayes org.apache.spark.mllib.clustering: These are k-means clustering org.apache.spark.mllib.linalg: These are data presentations, including dense vectors, sparse vectors, and matrices org.apache.spark.mllib.optimization: These are the various optimization algorithms used as low-level primitives in MLlib, including gradient descent, stochastic gradient descent, update schemes for distributed SGD, and limited-memory BFGS org.apache.spark.mllib.recommendation: These are model-based collaborative filtering implemented with alternating least squares matrix factorization org.apache.spark.mllib.regression: These are regression learning algorithms, such as linear least squares, decision trees, Lasso, and Ridge regression org.apache.spark.mllib.stat: These are statistical functions for samples in sparse or dense vector format to compute the mean, variance, minimum, maximum, counts, and nonzero counts org.apache.spark.mllib.tree: This implements classification and regression decision tree-learning algorithms org.apache.spark.mllib.util: These are a collection of methods to load, save, preprocess, generate, and validate the data Deeplearning4j Deeplearning4j, or DL4J, is a deep-learning library written in Java. It features a distributed as well as a single-machinedeep-learning framework that includes and supports various neural network structures such as feedforward neural networks, RBM, convolutional neural nets, deep belief networks, autoencoders, and others. DL4J can solve distinct problems, such as identifying faces, voices, spam or e-commerce fraud. Deeplearning4j is also distributed under Apache 2.0 license and can be downloaded from http://deeplearning4j.org. The library is organized as follows: org.deeplearning4j.base: These are loading classes org.deeplearning4j.berkeley: These are math utility methods org.deeplearning4j.clustering: This is the implementation of k-means clustering org.deeplearning4j.datasets: This is dataset manipulation, including import, creation, iterating, and so on org.deeplearning4j.distributions: These are utility methods for distributions org.deeplearning4j.eval: These are evaluation classes, including the confusion matrix org.deeplearning4j.exceptions: This implements exception handlers org.deeplearning4j.models: These are supervised learning algorithms, including deep belief network, stacked autoencoder, stacked denoising autoencoder, and RBM org.deeplearning4j.nn: These are the implementation of components and algorithms based on neural networks, such as neural network, multi-layer network, convolutional multi-layer network, and so on org.deeplearning4j.optimize: These are neural net optimization algorithms, including back propagation, multi-layer optimization, output layer optimization, and so on org.deeplearning4j.plot: These are various methods for rendering data org.deeplearning4j.rng: This is a random data generator org.deeplearning4j.util: These are helper and utility methods MALLET Machine Learning for Language Toolkit (MALLET), is a large library of natural language processing algorithms and utilities. It can be used in a variety of tasks such as document classification, document clustering, information extraction, and topic modeling. 
MALLET

Machine Learning for Language Toolkit (MALLET) is a large library of natural language processing algorithms and utilities. It can be used in a variety of tasks such as document classification, document clustering, information extraction, and topic modeling. It features a command-line interface as well as a Java API for several algorithms such as naive Bayes, HMM, latent Dirichlet topic models, logistic regression, and conditional random fields. MALLET is available under the Common Public License 1.0, which means that you can even use it in commercial applications. It can be downloaded from http://mallet.cs.umass.edu.

A MALLET instance is represented by a name, label, data, and source. There are two methods to import data into the MALLET format, as shown in the following list:

Instance per file: Each file, that is, each document, corresponds to an instance, and MALLET accepts the directory name for the input.
Instance per line: Each line corresponds to an instance, where the following format is assumed: instance_name label token. The data will be a feature vector, consisting of the distinct words that appear as tokens and their occurrence counts.
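For illustration only (these lines are invented, not taken from MALLET's documentation), an instance-per-line file might look like this - an instance name, then a label, then the text to be tokenized into features:

doc001 sports The match went to extra time and ended in a penalty shootout
doc002 politics The committee voted to amend the draft copyright directive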
The library comprises the following packages:

cc.mallet.classify: These are algorithms for training and classifying instances, including AdaBoost, bagging, C4.5, as well as other decision tree models, multivariate logistic regression, naive Bayes, and Winnow2.
cc.mallet.cluster: These are unsupervised clustering algorithms, including greedy agglomerative, hill climbing, k-best, and k-means clustering.
cc.mallet.extract: This implements tokenizers, document extractors, document viewers, cleaners, and so on.
cc.mallet.fst: This implements sequence models, including conditional random fields, HMM, maximum entropy Markov models, and the corresponding algorithms and evaluators.
cc.mallet.grmm: This implements graphical models and factor graphs, such as inference algorithms, learning, and testing, for example, loopy belief propagation, Gibbs sampling, and so on.
cc.mallet.optimize: These are optimization algorithms for finding the maximum of a function, such as gradient ascent, limited-memory BFGS, stochastic meta ascent, and so on.
cc.mallet.pipe: These are methods as pipelines to process data into MALLET instances.
cc.mallet.topics: These are topic modeling algorithms, such as latent Dirichlet allocation, four-level pachinko allocation, hierarchical PAM, DMRT, and so on.
cc.mallet.types: This implements fundamental data types such as dataset, feature vector, instance, and label.
cc.mallet.util: These are miscellaneous utility functions such as command-line processing, search, math, test, and so on.

To design, build, and deploy your own machine learning applications by leveraging key Java machine learning libraries, check out the book Machine Learning in Java, published by Packt Publishing.

5 JavaScript machine learning libraries you need to know
A non programmer's guide to learning Machine learning
Why use JavaScript for machine learning?

A chatbot toolkit for developers: design, develop, and manage conversational UI

Bhagyashree R
10 Sep 2018
7 min read
Although chatbots have been under development for at least a few decades, they did not become mainstream channels for customer engagement until recently. Due to serious efforts by industry giants like Apple, Google, Microsoft, Facebook, IBM, and Amazon, and their subsequent investments in developing toolkits, chatbots and conversational interfaces have become a serious contender to other customer contact channels. In this time, chatbots have been applied in various conversational scenarios within sectors like retail, banking and finance, government, health, legal, and many more. This tutorial is an excerpt from a book written by Srini Janarthanam titled Hands-On Chatbots and Conversational UI Development. The book is organized as eight chatbot projects that will introduce the ecosystem of tools, techniques, concepts, and even gadgets relating to conversational interfaces.

Over the last few years, an ecosystem of tools and services has grown around the idea of conversational interfaces. There are a number of tools that we can plug and play to design, develop, and manage chatbots.

Mockup tools

Mockups can be used to show clients how a chatbot would look and behave. These are tools that you may want to consider using during conversation design, after coming up with sample conversations between the user and the bot. Mockup tools allow you to visualize the conversation between the user and the bot and showcase the dynamics of conversational turn-taking. Some of these tools allow you to export the mockup design and make videos. BotSociety.io and BotMock.com are some of the popular mockup tools.

Channels in chatbots

Channels refer to places where users can interact with the chatbot. There are several deployment channels over which your bots can be exposed to users. These include:

Messaging services such as Facebook Messenger, Skype, Kik, Telegram, WeChat, and Line
Office and team chat services such as Slack, Microsoft Teams, and many more
Traditional channels such as web chat, SMS, and voice calls
Smart speakers such as Amazon Echo and Google Home

Choose the channel based on your users and the requirements of the project. For instance, if you are building a chatbot targeting consumers, Facebook Messenger can be the best channel because of the growing number of users who already use the service to keep in touch with friends and family. Adding your chatbot to their contact list may be easier than getting them to download your app. If the user needs to interact with the bot using voice in a home or office environment, smart speaker channels can be an ideal choice. And finally, there are tools that can connect chatbots to many channels simultaneously (for example, Dialogflow integrations, MS Bot Service, Smooch.io, and so on).

Chatbot development tools

There are many tools that you can use to build chatbots without having to code even a single line: Chatfuel, ManyChat, Dialogflow, and so on. Chatfuel allows designers to create the conversational flow using visual elements. With ManyChat, you can build the flow using a visual map called the FlowBuilder. Conversational elements such as bot utterances and user response buttons can be configured using drag-and-drop UI elements. Dialogflow can be used to build chatbots that require advanced natural language understanding to interact with users.

On the other hand, there are scripting languages such as Artificial Intelligence Markup Language (AIML), ChatScript, and RiveScript that can be used to build chatbots. These scripts contain the conversational content and flow, which then needs to be fed into an interpreter program or a rules engine to bring the chatbot to life. The interpreter decides how to progress the conversation by matching user utterances to templates in the scripts. While it is straightforward to build conversational chatbots using this approach, it becomes difficult to build transactional chatbots without generating explicit semantic representations of user utterances. PandoraBots is a popular web-based platform for building AIML chatbots.
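To illustrate the pattern-matching idea behind such interpreters, here is a deliberately minimal Java sketch - invented for this article, not a real AIML engine. A handful of pattern-to-template rules stand in for the script, and the "interpreter" simply normalizes the utterance and returns the first matching template:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Scanner;

// A toy interpreter in the spirit of AIML/RiveScript engines.
public class ToyRuleBot {
    private final Map<String, String> rules = new LinkedHashMap<>();

    public ToyRuleBot() {
        // The conversational content lives in the "script", not in the code
        rules.put("HELLO", "Hi there! How can I help you today?");
        rules.put("WHAT IS YOUR NAME", "I'm ToyRuleBot, a scripted chatbot.");
        rules.put("BYE", "Goodbye!");
    }

    public String respond(String utterance) {
        // Normalize: trim, uppercase, strip punctuation
        String normalized = utterance.trim().toUpperCase().replaceAll("[^A-Z ]", "");
        // Match the utterance against each pattern in script order
        for (Map.Entry<String, String> rule : rules.entrySet()) {
            if (normalized.contains(rule.getKey())) {
                return rule.getValue();
            }
        }
        return "Sorry, I don't understand that yet.";
    }

    public static void main(String[] args) {
        ToyRuleBot bot = new ToyRuleBot();
        Scanner in = new Scanner(System.in);
        System.out.println("Bot: Hello! (type 'bye' to quit)");
        while (true) {
            String user = in.nextLine();
            System.out.println("Bot: " + bot.respond(user));
            if (user.equalsIgnoreCase("bye")) break;
        }
    }
}

Real engines add wildcards, variables, and conversational context on top of this; the loop above is only the skeleton of the match-utterance-to-template idea.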
Alternatively, there are SDK libraries that one can use to build chatbots: MS Bot Builder, BotKit, BotFuel, and so on provide SDKs in one or more programming languages to assist developers in building the core conversational management module. The ability to code the conversation manager gives developers the flexibility to mold the conversation and integrate the bot with backend tasks better than no-code and scripting platforms do. Once built, the conversation manager can then be plugged into other services, such as natural language understanding, to understand user utterances.

Analytics in chatbots

Like other digital solutions, chatbots can benefit from collecting and analyzing their usage statistics. While you can build a bespoke analytics platform from scratch, you can also use the off-the-shelf toolkits that are widely available now. Many off-the-shelf analytics toolkits can be plugged into a chatbot and used to log and examine incoming and outgoing messages. These tools tell chatbot builders and managers the kinds of conversations that actually transpire between users and the chatbot. The data will give useful information such as which conversational tasks are popular, where the conversational experience breaks down, which utterances the bot did not understand, and which requests the chatbot still needs to scale up to. Dashbot.io, BotAnalytics, and Google's Chatbase are a few analytics toolkits that you can use to analyze your chatbot's performance.

Natural language understanding

Chatbots can be built without having to understand utterances from the user. However, adding natural language understanding capability is not very difficult, and it is one of the hallmark features that sets chatbots apart from their digital counterparts such as websites and apps with visual elements. There are many natural language understanding modules available as cloud services. Major IT players like Google, Microsoft, Facebook, and IBM have created tools that you can plug into your chatbot. Google's Dialogflow, Microsoft LUIS, IBM Watson, SoundHound, and Facebook's Wit.ai are some of the NLU tools that you can try.

Directory services

One of the challenges of building a bot is getting users to discover and use it. Chatbots are not as popular as websites and mobile apps, so a potential user may not know where to look to find the bot. Once your chatbot is deployed, you need to help users find it. There are directories that list bots in various categories. Chatbots.org is one of the oldest directory services and has been listing chatbots and virtual assistants since 2008. Other popular ones are Botlist.co, BotPages, BotFinder, and ChatBottle. These directories categorize bots in terms of purpose, sector, languages supported, countries, and so on. In addition to these, channels such as Facebook and Telegram have their own directories for the bots hosted on their channel.
In the case of Facebook, you can help users find your Messenger bot using their Discover service.

Monetization

Chatbots are built for many purposes: to create awareness, to support customers after sales, to provide paid services, and many more. In addition to all these, chatbots with interesting content can engage users for a long time and can be used to make some money through targeted, personalized advertising. Services such as CashBot.ai and AddyBot.com can integrate with your chatbot to send targeted advertisements and recommendations to users, and when users engage, your chatbot makes money.

In this article, we saw tools that can help you build a chatbot, collect and analyze its usage statistics, add features like natural language understanding, and more. This is not an exhaustive list of tools, nor are the services listed under each type exhaustive. These tools are evolving over time as chatbots find their niche in the market. This list gives you an idea of how multidimensional the conversational UI ecosystem is and should help you explore the space and feed your creative mind.

If you found this post useful, do check out the book, Hands-On Chatbots and Conversational UI Development, which will help you explore the world of conversational user interfaces.

How to build a chatbot with Microsoft Bot framework
Facebook's Wit.ai: Why we need yet another chatbot development framework?
How to build a basic server side chatbot using Go

Why is everyone going crazy over WebAssembly?

Amarabha Banerjee
09 Sep 2018
4 min read
The history of the web has seen a few major events in the past three decades. One of them was the launch of JavaScript 22 years ago, on December 4, 1995. Since then, JavaScript has slowly evolved to become the de facto standard of front-end web development. The present-day web is much more dynamic and data-intensive, and heavy graphics-based games and applications require a much more robust browser. That is why developers are going crazy over the concept of WebAssembly. Is it here to replace JavaScript? Or is it like any other hype that will fade away with time? The answer is neither of the two.

Why use WebAssembly when you have JavaScript?

To understand the buzz around WebAssembly, we have to understand what JavaScript does best and what its limitations are. JavaScript is compiled into machine code as it runs in the browser - machine code being the language that communicates with the PC and instructs it what to do. Not only that, the engine must also parse, analyze, and optimize the JavaScript (including Emscripten-generated JavaScript) while loading the application. That is what makes the browser slow in compute-heavy applications.

JavaScript is a dynamically typed language, so nothing is fixed in advance: when the compiler in your browser runs JavaScript, it doesn't know which function call is going to come next. That might seem very inconvenient, but this is exactly what makes JavaScript-based browsing so intuitive and interactive - you don't have to install a standalone desktop application, because the same application can run in the browser.

[Figure: graphical representation of an assembler - source: LogRocket]

The figure above shows how an assembly-level language is transformed into machine code when it is compiled. This is exactly what happens when WebAssembly code runs in the browser, but since WebAssembly is in binary format, it becomes much easier for the compiler to convert it into machine code.

Unfortunately, JavaScript is not suitable for every single application. Gaming, for example, is an area where running JavaScript code in the browser for a highly interactive multiplayer game is not the best solution - it takes a heavy toll on system resources. That is where WebAssembly comes in. WebAssembly is a low-level binary format that runs alongside JavaScript. Its biggest advantages are speed, portability, and flexibility. The speed comes from the fact that WebAssembly is binary: JavaScript is a high-level language, and compiling it to machine code puts significant pressure on the JavaScript engine. Compared to that, WebAssembly binary files are much smaller (measured in KB) and easier to execute and convert to machine code.

[Figure: functioning of WASM - source: LogRocket]

Code optimization in WebAssembly happens during the compilation of the source code, unlike JavaScript. WebAssembly manages memory manually, just like languages such as C and C++, so there's no garbage collection either. This enables performance similar to native code. You can also compile other languages like Rust, C, and C++ into the WASM format, which enables developers to run their native code in the browser without knowing much JavaScript. WASM is not something you typically write by hand; it's a format created from your native code that translates directly into machine code. This allows it to run alongside HTML5, CSS, and JavaScript code, giving you the best of both worlds.

So, is WebAssembly going to replace JavaScript? JavaScript is clearly not replaceable.
It's just that heavy graphics, audio, or AI-based apps make a lot of function calls in the browser, and this makes the browser slow. WebAssembly eases this burden. There are separate compilers that can turn your C, C++, or Rust code into WASM code, which is then used in the browser as JavaScript objects. Since these binaries are very small, they make the application fast. Support for WebAssembly has been rolled out by all major browsers, so the majority of browsers in use today can run it. Until JavaScript's capabilities improve, WebAssembly will work alongside JavaScript to make your apps perform better and to keep your browser interactive, intuitive, and lightweight.

Golang 1.11 rc1 is here with experimental port for WebAssembly!
Unity switches to WebAssembly as the output format for the Unity WebGL build target
Introducing Life: A cross-platform WebAssembly VM for decentralized Apps written in Go
Grain: A new functional programming language that compiles to Webassembly
Is the machine learning process similar to how humans learn?

Fatema Patrawala
09 Sep 2018
12 min read
A formal definition of machine learning proposed by computer scientist Tom M. Mitchell states that a machine learns whenever it is able to utilize an experience such that its performance improves on similar experiences in the future. Although this definition is intuitive, it completely ignores the process of exactly how experience can be translated into future action - and of course, learning is always easier said than done! While human brains are naturally capable of learning from birth, the conditions necessary for computers to learn must be made explicit. For this reason, although it is not strictly necessary to understand the theoretical basis of learning, this foundation helps to understand, distinguish, and implement machine learning algorithms. This article is taken from the book Machine Learning with R - Second Edition, written by Brett Lantz.

Regardless of whether the learner is a human or a machine, the basic learning process is similar. It can be divided into four interrelated components:

Data storage utilizes observation, memory, and recall to provide a factual basis for further reasoning.
Abstraction involves the translation of stored data into broader representations and concepts.
Generalization uses abstracted data to create knowledge and inferences that drive action in new contexts.
Evaluation provides a feedback mechanism to measure the utility of learned knowledge and inform potential improvements.

[Figure: the four steps in the learning process]

Keep in mind that although the learning process has been conceptualized here as four distinct components, they are merely organized this way for illustrative purposes. In reality, the entire learning process is inextricably linked. In human beings, the process occurs subconsciously. We recollect, deduce, induct, and intuit within the confines of our mind's eye, and because this process is hidden, any differences from person to person are attributed to a vague notion of subjectivity. In contrast, with computers these processes are explicit, and because the entire process is transparent, the learned knowledge can be examined, transferred, and utilized for future action.
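The book's examples are written in R; purely as an illustration (with made-up numbers), the following self-contained Java sketch walks through the four components on a toy problem: storing observations, abstracting them into a model (a least-squares line), generalizing to an unseen input, and evaluating the result on a held-out observation:

import java.util.Arrays;

public class LearningProcess {
    public static void main(String[] args) {
        // Data storage: observed (x, y) pairs (invented values)
        double[] x = {1, 2, 3, 4, 5, 6};
        double[] y = {2.1, 3.9, 6.2, 8.1, 9.8, 12.2};

        // Abstraction + training: summarize the data as a least-squares line y = m*x + q
        double mx = Arrays.stream(x).average().orElse(0);
        double my = Arrays.stream(y).average().orElse(0);
        double num = 0, den = 0;
        for (int i = 0; i < x.length; i++) {
            num += (x[i] - mx) * (y[i] - my);
            den += (x[i] - mx) * (x[i] - mx);
        }
        double m = num / den;     // slope
        double q = my - m * mx;   // intercept

        // Generalization: apply the abstracted knowledge to a new, unseen input
        double unseen = 10;
        System.out.printf("Model: y = %.2f*x + %.2f; prediction for x=10: %.2f%n",
                m, q, m * unseen + q);

        // Evaluation: measure the error on a held-out observation
        double heldOutX = 7, heldOutY = 14.1;
        System.out.printf("Held-out error: %.2f%n", Math.abs((m * heldOutX + q) - heldOutY));
    }
}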
Even if you could memorize material perfectly, your rote learning would be of no use unless you knew in advance the exact questions and answers that will appear in the exam. Otherwise, you would be stuck attempting to memorize answers to every question that could conceivably be asked. Obviously, this is an unsustainable strategy. Instead, a better approach is to spend time selectively, memorizing a small set of representative ideas while developing an understanding of how the ideas relate and how to use the stored information. In this way, large ideas can be understood without needing to memorize them by rote.

Abstraction of stored data

This work of assigning meaning to stored data occurs during the abstraction process, in which raw data comes to have a more abstract meaning. This type of connection, say between an object and its representation, is exemplified by the famous René Magritte painting The Treachery of Images:

Source: http://collections.lacma.org/node/239578

The painting depicts a tobacco pipe with the caption Ceci n'est pas une pipe ("this is not a pipe"). The point Magritte was illustrating is that a representation of a pipe is not truly a pipe. Yet, in spite of the fact that the pipe is not real, anybody viewing the painting easily recognizes it as a pipe. This suggests that the observer's mind is able to connect the picture of a pipe to the idea of a pipe, to a memory of a physical pipe that could be held in the hand. Abstracted connections like these are the basis of knowledge representation, the formation of logical structures that assist in turning raw sensory information into a meaningful insight.

During a machine's process of knowledge representation, the computer summarizes stored raw data using a model, an explicit description of the patterns within the data. Just like Magritte's pipe, the model representation takes on a life beyond the raw data. It represents an idea greater than the sum of its parts. There are many different types of models. You may already be familiar with some. Examples include:

Mathematical equations

Relational diagrams such as trees and graphs

Logical if/else rules

Groupings of data known as clusters

The choice of model is typically not left up to the machine. Instead, the learning task and the data on hand inform model selection. The process of fitting a model to a dataset is known as training. When the model has been trained, the data is transformed into an abstract form that summarizes the original information.

It is important to note that a learned model does not itself provide new data, yet it does result in new knowledge. How can this be? The answer is that imposing an assumed structure on the underlying data gives insight into the unseen by supposing a concept about how data elements are related. Take for instance the discovery of gravity. By fitting equations to observational data, Sir Isaac Newton inferred the concept of gravity. But the force we now know as gravity was always present. It simply wasn't recognized until Newton recognized it as an abstract concept that relates some data to others—specifically, by becoming the g term in a model that explains observations of falling objects. Most models may not result in the development of theories that shake up scientific thought for centuries. Still, your model might result in the discovery of previously unseen relationships among data.
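To make the idea of training concrete, here is a minimal sketch of the Newton example above. This is an illustration written for this article, not taken from the book, and the observations are simulated: it fits the model d = 0.5 * g * t^2 to noisy measurements of falling objects and recovers the g term by least squares.

```python
import numpy as np

rng = np.random.RandomState(0)

# Simulated observations of falling objects: drop times (s) and distances (m)
t = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
d = 0.5 * 9.81 * t**2 + rng.normal(0, 0.05, size=t.shape)  # noisy measurements

# Fit the model d = 0.5 * g * t^2 by least squares, solving for the one parameter g
X = (0.5 * t**2).reshape(-1, 1)
g_hat, *_ = np.linalg.lstsq(X, d, rcond=None)

print(f"Estimated g: {g_hat[0]:.2f} m/s^2")  # close to the true value of 9.81
```

The "knowledge" here is not the raw measurements but the fitted value of g, an abstraction that summarizes them and predicts observations not yet made.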
A model trained on genomic data might find several genes that, when combined, are responsible for the onset of diabetes; banks might discover a seemingly innocuous type of transaction that systematically appears prior to fraudulent activity; and psychologists might identify a combination of personality characteristics indicating a new disorder. These underlying patterns were always present, but by simply presenting information in a different format, a new idea is conceptualized.

Generalization for future action

The learning process is not complete until the learner is able to use its abstracted knowledge for future action. However, among the countless underlying patterns that might be identified during the abstraction process and the myriad ways to model these patterns, some will be more useful than others. Unless the production of abstractions is limited, the learner will be unable to proceed. It would be stuck where it started—with a large pool of information, but no actionable insight.

The term generalization describes the process of turning abstracted knowledge into a form that can be utilized for future action, on tasks that are similar, but not identical, to those it has seen before. Generalization is a somewhat vague process that is a bit difficult to describe. Traditionally, it has been imagined as a search through the entire set of models (that is, theories or inferences) that could be abstracted during training. In other words, if you can imagine a hypothetical set containing every possible theory that could be established from the data, generalization involves the reduction of this set into a manageable number of important findings.

In generalization, the learner is tasked with limiting the patterns it discovers to only those that will be most relevant to its future tasks. Generally, it is not feasible to reduce the number of patterns by examining them one-by-one and ranking them by future utility. Instead, machine learning algorithms generally employ shortcuts that reduce the search space more quickly. Toward this end, the algorithm will employ heuristics, which are educated guesses about where to find the most useful inferences.

Heuristics are routinely used by human beings to quickly generalize experience to new scenarios. If you have ever utilized your gut instinct to make a snap decision prior to fully evaluating your circumstances, you were intuitively using mental heuristics. The incredible human ability to make quick decisions often relies not on computer-like logic, but rather on heuristics guided by emotions. Sometimes, this can result in illogical conclusions. For example, more people express fear of airline travel than automobile travel, despite automobiles being statistically more dangerous. This can be explained by the availability heuristic, which is the tendency of people to estimate the likelihood of an event by how easily its examples can be recalled. Accidents involving air travel are highly publicized. Being traumatic events, they are likely to be recalled very easily, whereas car accidents barely warrant a mention in the newspaper.

The folly of misapplied heuristics is not limited to human beings. The heuristics employed by machine learning algorithms also sometimes result in erroneous conclusions. The algorithm is said to have a bias if the conclusions are systematically erroneous, or wrong in a predictable manner.
For example, suppose that a machine learning algorithm learned to identify faces by finding two dark circles representing eyes, positioned above a straight line indicating a mouth. The algorithm might then have trouble with, or be biased against, faces that do not conform to its model. Faces with glasses, turned at an angle, or looking sideways might not be detected by the algorithm. Similarly, it could be biased toward or against faces with certain skin tones, face shapes, or other characteristics that do not conform to its understanding of the world.

In modern usage, the word bias has come to carry quite negative connotations. Various forms of media frequently claim to be free from bias, and claim to report the facts objectively, untainted by emotion. Still, consider for a moment the possibility that a little bias might be useful. Without a bit of arbitrariness, might it be a bit difficult to decide among several competing choices, each with distinct strengths and weaknesses? Indeed, some recent studies in the field of psychology have suggested that individuals born with damage to the portions of the brain responsible for emotion are ineffectual at decision making, and might spend hours debating simple decisions such as what color shirt to wear or where to eat lunch. Paradoxically, bias is what blinds us to some information while also allowing us to utilize other information for action. It is how machine learning algorithms choose among the countless ways to understand a set of data.

Evaluate the learner's success

Bias is a necessary evil associated with the abstraction and generalization processes inherent in any learning task. In order to drive action in the face of limitless possibility, each learner must be biased in a particular way. Consequently, each learner has its weaknesses and there is no single learning algorithm to rule them all. Therefore, the final step in the generalization process is to evaluate or measure the learner's success in spite of its biases, and to use this information to inform additional training if needed.

Generally, evaluation occurs after a model has been trained on an initial training dataset. Then, the model is evaluated on a new test dataset in order to judge how well its characterization of the training data generalizes to new, unseen data. It's worth noting that it is exceedingly rare for a model to perfectly generalize to every unforeseen case.

In part, models fail to perfectly generalize due to the problem of noise, a term that describes unexplained or unexplainable variations in data. Noisy data is caused by seemingly random events, such as:

Measurement error due to imprecise sensors that sometimes add or subtract a bit from the readings

Issues with human subjects, such as survey respondents reporting random answers to survey questions in order to finish more quickly

Data quality problems, including missing, null, truncated, incorrectly coded, or corrupted values

Phenomena that are so complex or so little understood that they impact the data in ways that appear to be unsystematic

Trying to model noise is the basis of a problem called overfitting. Because most noisy data is unexplainable by definition, attempting to explain the noise will result in erroneous conclusions that do not generalize well to new cases. Efforts to explain the noise will also typically result in more complex models that miss the true pattern the learner is trying to identify. The short sketch below shows this effect in miniature.
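The following minimal sketch (an illustration written for this article, not taken from the book) makes the train/test evaluation concrete: two polynomial models are fit to the same noisy data, and the far more flexible degree-15 model typically scores better on the training set but worse on the held-out test set, because it has chased the noise.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)

# The true pattern is sin(x); the observations carry unexplainable noise
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.3, size=60)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The gap between training error and test error is exactly what the evaluation step is designed to expose.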
A model that seems to perform well during training but does poorly during evaluation is said to be overfitted to the training dataset, as it does not generalize well to the test dataset. Solutions to the problem of overfitting are specific to particular machine learning approaches. For now, the important point is to be aware of the issue. How well a model handles noisy data is an important source of distinction among machine learning approaches.

We saw that the machine learning process is similar to how humans learn in their daily lives. To discover how to build machine learning algorithms, prepare data, and dig deep into data prediction techniques with R, check out the book Machine Learning with R - Second Edition.

A Machine learning roadmap for Web Developers
Why TensorFlow always tops machine learning and artificial intelligence tool surveys
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018


Messaging app Telegram's updated Privacy Policy is an open challenge

Amarabha Banerjee
08 Sep 2018
7 min read
Social media companies are facing a lot of heat at present because of their privacy issues. One of them is Facebook: the Cambridge Analytica scandal even prompted a senate hearing for Mark Zuckerberg. At the other end of this spectrum is a messaging app known as Telegram, registered in London, United Kingdom, and founded by the Russian entrepreneur Pavel Durov. Telegram has been in the news for the absolutely opposite reason. It's often touted as one of the most secure and secretive messaging apps. Its end-to-end encryption ensures that security agencies across the world have a tough time getting access to any suspicious piece of information. For this reason, Russia banned the use of the Telegram app in April 2018.

Telegram recently updated its privacy policy. These updates further ensure that Telegram will retain the title of the most secure messaging application on the planet. It's imperative for any messaging app to get access to some of our data, but how it chooses to use that data makes us either vulnerable or secure. Telegram, in its latest update, has stated that it processes personal data on the grounds that such processing caters to the following two goals:

Providing effective and innovative Services to our users

To detect, prevent or otherwise address fraud or security issues in respect of their provision of Services.

The caveat for the second point is that the security interests shall not override the fundamental rights and freedoms that require protection of personal data. This clause is an excellent example of how applications can prove to be a torchbearer for human rights and basic human privacy amidst glaring loopholes. Telegram has listed the kinds of user data accessed by the app. They are as follows:

Basic Account Data

Telegram stores basic account user data that includes mobile number, profile name, profile picture and about information, which are needed to create a Telegram account. The most interesting part of this is that Telegram allows you to keep only your username public (if you choose to). The people who have you in their contact list will see you as you want them to - for example, you might be a John Doe in public, but your mom will still see you as 'Dear Son' in her contacts. Telegram doesn't require your real name, gender, age, or even your screen name to be your real name.

E-mail Address

When you enable 2-step verification for your account or store documents using the Telegram Passport feature, you can opt to set up a password recovery email. This address will only be used to send you a password recovery code if you forget your password. Telegram is particular about not sending any unsolicited marketing emails to you.

Personal Messages

Cloud Chats

Telegram stores messages, photos, videos and documents from your cloud chats on their servers so that you can access your data from any of your devices anytime, without having to rely on third-party backups. All data is stored heavily encrypted, and the encryption keys in each case are stored in several other data centers in different jurisdictions. This way, local engineers or physical intruders cannot get access to user data.

Secret Chats

Telegram has a feature called secret chats that uses end-to-end encryption. This means that all data is encrypted with a key that only the sender and the recipient know. There is no way for Telegram or anybody else without direct access to your device to learn what content is being sent in those messages. Telegram does not store secret chats on their servers.
They also do not keep any logs for messages in secret chats, so after a short period of time there is no way of determining who you messaged or when you messaged via secret chats. Secret chats are not available in the cloud — you can only access those messages from the device they were sent to or from.

Media in Secret Chats

When you send photos, videos or files via secret chats, each item is encrypted with a separate key, not known to the server, before being uploaded. This key and the file's location are then encrypted again, this time with the secret chat's key — and sent to your recipient. They can then download and decipher the file. This means that the file is technically on one of Telegram's servers, but it looks like a piece of random, indecipherable garbage to everyone except you and the recipient. These random-looking data packets are also periodically purged from the storage disks.

Public Chats

In addition to private messages, Telegram also supports public channels and public groups. All public chats are cloud chats. Like everything else on Telegram, the data you post in public communities is encrypted, both in storage and in transit — but everything you post in public will be accessible to everyone.

Phone Number and Contacts

Telegram uses phone numbers as unique identifiers so that it is easy for you to switch from SMS and other messaging apps and retain your social graph.

Cookies

Telegram promises that the only cookies it uses are those needed to operate and provide its services on the web. It clearly states that it doesn't use cookies for profiling or advertising. Its cookies are small text files that allow it to provide and customize its services, and provide an enhanced user experience. Most importantly, permission from the user is a must before these cookies are allowed into your browser; whether or not to use them is a choice made by the users.

So, how does Telegram remain in business?

The Telegram business model doesn't match that of a revenue-generating service. The founder, Pavel Durov, is also the founder of the popular Russian social networking site VK. Telegram doesn't charge for any messaging services and doesn't show ads yet. Some in-app purchase features might be included in the new version. As of now, the main sources of revenue for Telegram are donations and, mainly, the earnings of Pavel Durov himself (from the social networking site VK).

What can social networks learn from Telegram?

Telegram's policies elevate privacy standards that many are asking of other social messaging apps. The clamour for stopping the exploitation of user data, and the use of location details for targeted marketing and advertising campaigns, is increasing now. Telegram shows that privacy can be achieved, if intended, in today's overexposed social media world.

But there are also costs to this level of user privacy and secrecy that are sometimes not discussed enough. The ISIS members behind the 2015 Paris attacks used Telegram to spread propaganda. ISIS also used the app to recruit the perpetrators of the Christmas market attack in Berlin the following year and claimed credit for the massacre. More recently, a Turkish prosecutor found that the shooter behind the New Year's Eve attack at the Reina nightclub in Istanbul used Telegram to receive directions for it from an ISIS leader in Raqqa.
While these incidents can never negate the need for a secure and less intrusive social media platform like Telegram, there should be workarounds and escape routes designed to stop extremist and terrorist activities. Telegram has assured that all ISIS messaging channels are deleted from its network, which is a great way to start. Content moderation, proactive sentiment and pattern recognition, and content/account isolation are the next challenges for Telegram. One thing is for sure: Telegram's continued pursuit of user secrecy and user data privacy is throwing an open challenge to others to follow suit. Whether others will oblige or not, only time will tell.

To read about Telegram's updated privacy policies in detail, you can check out the official Telegram Privacy Settings.

How to stay safe while using Social Media
Time for Facebook, Twitter and other social media to take responsibility or face regulation
What RESTful APIs can do for Cloud, IoT, social media and other emerging technologies


How to secure your crypto currency

Guest Contributor
08 Sep 2018
8 min read
Managing and earning cryptocurrency is a lot of hassle, and losing it is a lot like losing yourself. While the security of this blockchain-based currency is a major concern, here is what you can do to secure your crypto fortune.

With ever-fluctuating crypto rates, it's always now or never. While Bitcoin climbed up to $17,900 in the past, the digital currency frenzy is always in trend and its security is crucial. No crypto geek wants to lose their currency due to malicious activity, negligence, or any other reason. Before we delve into securing our cryptocurrencies, let's discuss the structure and strategy of this crypto vault that ensures the security of a blockchain-based digital currency.

Why blockchains are secure, at least, in theory

Below are the three core elements that contribute to making blockchain a fool-proof digital technology:

Public key cryptography

Hashing

Digital signatures

Public Key Cryptography

This form of cryptography involves two distinct keys, a private key and a public key, which encrypt and decrypt data asymmetrically. The two keys are mathematically linked: data encrypted with the private key can only be decrypted with the public key, and, similarly, data encrypted with the public key can only be decrypted with the private key. Various cryptographic schemes, including TLS (Transport Layer Security) and SSL (Secure Sockets Layer), have this system at their core. The strategy works by putting your public key out into the world of blockchain while keeping your private key confidential, never revealing it on any platform or place.

Hashing

Also called a digest, the hash of a message is calculated from the contents of the message. The hashing algorithm takes data of arbitrary length as input and deterministically produces a hash of a predefined length. Because the process is deterministic, the same input always yields the same output. Considering the mathematics involved, it is easy to compute the hash of a message, but tediously difficult to recover the original message from its hash.

Digital Signatures

A digital signature is the hash of a message encrypted with a private key. Anyone who has access to the corresponding public key can decrypt the digital signature and obtain the original hash. Anyone who can read the message can also calculate the hash of the message independently. The independently calculated hash can then be compared with the decrypted hash to ensure both hashes are the same. If they match, it confirms two things: the message remained unaltered from creation to reception (since any alteration to the message would produce a different hash), and the message was digitally signed by the entity holding the related private key.
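To make hashing and digital signatures concrete, here is a minimal sketch in Python. It assumes the third-party cryptography package is installed, and uses secp256k1, the curve Bitcoin uses; the message content is just an illustration.

```python
import hashlib

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

message = b"Send 0.5 BTC to wallet address ..."

# Hashing: deterministic, fixed length, practically impossible to reverse
print(hashlib.sha256(message).hexdigest())

# Digital signature: sign with the private key, verify with the public key
private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# verify() raises InvalidSignature if the message or signature was tampered with
public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("Verified: message is unaltered and was signed by the private key holder")
```

This is the same sign-and-verify handshake that validates every cryptocurrency transaction, only performed here on a toy message.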
What are crypto wallets and transactions?

Every crypto wallet is a collection of one or more individual wallets. A wallet is, at its core, a private key, from which a public key can be created; using the public key, a public wallet address can be easily created. This makes a cryptocurrency wallet essentially a set of private keys. To enable sharing wallet addresses with the public, they are converted into QR codes; since addresses are public, there is no need to maintain secrecy around them. You can always show a QR code to the world without any hesitation, and anyone can send cryptocurrency to that wallet address. However, spending cryptocurrency needs a private key, and currency sent to a wallet is owned by the owner of the wallet.

In order to transact using cryptocurrency, a transaction is created, and the transaction itself is public information. A cryptocurrency transaction is a collection of the information a blockchain needs. The only data needed for a transaction is the destination wallet's address and the desired amount to be transferred. While anyone can transact in cryptocurrency, transactions are only permitted by the blockchain once they are confirmed by multiple members of the network. A transaction must be digitally signed by a private key in order to get valid status; otherwise, it is treated as invalid. In other words, one signs a transaction with the private key, it gets submitted to the blockchain, and once the blockchain accepts it by confirming it against the public key data, it gets included in the blockchain, validating the transaction.

Why you should guard your private key

An attack on your private key is an attempt to steal your cryptocurrency. Using your private keys, an attacker can digitally sign transactions from your wallet address to their own address. Moreover, an attacker can destroy your private keys, ending your access to your crypto wallet.

What are some risk factors involved in owning a crypto wallet?

Before we move on to creating a security wall around our cryptocurrency, it is important to know from whom we are protecting our digital currency, or who can prove to be a threat to our crypto wallets.

If you lose access to your cryptocurrency, you have lost it all, as there isn't any ledger kept by a centralized authority, and once you lose access, you can't regain it by any means. Since a crypto wallet is a pairing of a private and a public key, losing the private key means losing your wallet. In other words, you no longer own any cryptocurrency. This is the very first and foremost threat. The next threat in line is the one we hear about most often: attackers and hackers who want to gain access to our cryptocurrency. They may be opportunists, or they may have specific intentions of their own.

Threats to your cryptocurrency

Opportunist hackers are low-profile attackers who, for example, get access to your laptop and transact money to their own public wallet address. Opportunist hackers don't attack or target a person specifically, but if they get access to your cryptocurrency, they won't shy away from taking your digital cash.

Dedicated attackers, on the other hand, target victims single-handedly or in groups of hackers who work together for a sole purpose: stealing cryptocurrency. Their targets include individuals, crypto traders, and even crypto exchanges. They initiate phishing campaigns, and before executing an attack, they get well-versed with their target by conducting prior research.
Level 2 attackers take a broader approach and write malicious code that can steal private keys from any system it attacks or infects. Another kind of hacker is backed by nation states. These are collective groups of people with top-level coordination and established financing, motivated by gaining access to finances or advancing their state's agenda. The cryptocurrency attacks by the Lazarus Group, backed by North Korea, are an example.

How to protect your crypto wallet

Regardless of the kind of threat, it is you and your private key that need to be secured. Here's how to ensure maximum security of your cryptocurrency. Throw away your access keys and you will lose your cryptocurrency forever; obviously, you would never do that intentionally, so here are some practical ways to secure your cryptocurrency fortune:

Go through the complete password recovery process. This means going through the process of forgetting the password and creating a multi-factor token. These measures should be taken while setting up a new hosted wallet, or else be prepared to lose it all.

No matter how fast the tech world progresses, the basics remain the same. You should have a printed paper backup of your keys, placed in a secure location such as a bank locker or a personal safe vault. Don't forget to wipe the printer's memory after you are done printing, as printed files can be restored and reused to hack your digital money.

Do not keep those keys with you, and do not hide them anywhere they can be damaged or lost through fire, theft, etc.

If your wallet has multi-signature enabled and uses two keys for the authorization of transactions, make it three keys. While the third key will be controlled by a trusted party, it will help you in the absence of the second person.

About Author

Tahha Ashraf is a Digital Content Producer at Cubix, a mobile app development company. He is a certified Hubspot inbound and content marketer. He loves talking about brands, tech, blockchain and content marketing. Along with writing for the online fraternity on a variety of topics, he is fond of creativity and writes poetry in his free time.

Cryptocurrency-based firm, Tron acquires BitTorrent
Can Cryptocurrency establish a new economic world order?
Akon is planning to create a cryptocurrency city in Senegal


Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence

Bhagyashree R
07 Sep 2018
13 min read
There has been a huge shift in the way that businesses build technology in recent years, driven by a move towards cloud and microservices. Public cloud services like AWS, Microsoft Azure, and Google Cloud Platform are transforming the way companies of all sizes understand and use software. Not only do public cloud services reduce the resourcing costs associated with on-site server resources, they also make it easier to leverage cutting-edge technological innovations like machine learning and artificial intelligence. Cloud is giving rise to what's known as 'Machine Learning as a Service' - a trend that could prove to be transformative for organizations of all types and sizes.

According to a report published on Research and Markets, Machine Learning as a Service is set to see a compound annual growth rate (CAGR) of 49% between 2017 and 2023. The main drivers of this growth include the increased application of advanced analytics in manufacturing, the high volume of structured and unstructured data, and the integration of machine learning with big data.

Of course, with machine learning a relatively new area for many businesses, demand for MLaaS is ultimately self-fulfilling: if it's there and people can see the benefits it can bring, demand is only going to continue. But it's important not to get fazed by the hype. Plenty of money will be spent on cloud-based machine learning products that won't help anyone but the tech giants who run the public clouds. With that in mind, let's dive deeper into Machine Learning as a Service and what the biggest cloud vendors offer.

What does Machine Learning as a Service (MLaaS) mean?

Machine Learning as a Service (MLaaS) is an array of services that provides machine learning tools to users. Businesses and developers can incorporate a machine learning model into their application without having to work on its implementation. These services range from data visualization, facial recognition, natural language processing, chatbots, and predictive analytics to deep learning, among others.

Typically, for a given machine learning task, a user has to perform various steps. These steps include data preprocessing, feature identification, implementing the machine learning model, and training the model. MLaaS services simplify this process by exposing only a subset of the steps to the user while automatically managing the remaining steps. Some services even provide a 1-click mode, where the user does not have to perform any of the steps mentioned earlier.

What type of businesses can benefit from Machine Learning as a Service?

Large companies

Large companies can afford to hire expert machine learning engineers and data scientists, but they still have to build and manage their own custom machine learning models. This is a time-intensive and complicated process. By leveraging MLaaS services, these companies can use pre-trained machine learning models via APIs that perform specific tasks and save time.

Small and mid-sized businesses

Big companies can invest in their own machine learning solutions because they have the resources. For small and mid-sized businesses (SMBs), however, this simply isn't the case. Fortunately, MLaaS changes all that and makes machine learning accessible to organizations with resource limitations. By using MLaaS, businesses can leverage machine learning without a huge investment in infrastructure or talent.
Whether it's for smarter and more intelligent customer-facing apps, or improved operational intelligence and automation, this could bring huge gains for a reasonable amount of spending.

What types of roles will benefit from MLaaS?

Machine learning can contribute to any kind of app development, provided you have data to train your app. However, adding AI features to your app is not easy. As a developer, you have to worry about a lot of factors beyond the regular app development checklist in order to make your app intelligent. Some of them are:

Data preprocessing

Model training

Model evaluation

Predictions

Expertise in data science

The development tools provided by MLaaS can simplify these tasks, allowing you to easily embed machine learning in your applications. Developers can build quickly and efficiently with MLaaS offerings, because they have access to pre-built algorithms and models that would otherwise take extensive resources to build.

MLaaS can also support data scientists and analysts. While most data scientists have the necessary skills to build and train machine learning models from scratch, it can nevertheless be a time-consuming task. MLaaS can, as already mentioned, simplify the machine learning engineering process, which means data scientists can focus on optimizations that require more thought and expertise.

Top Machine Learning as a Service (MLaaS) providers

Amazon Web Services (AWS), Azure, and Google all have MLaaS products in their cloud offerings. Let's take a look at them.

Google Cloud AI at a glance

Google's Cloud AI provides modern machine learning services. It consists of pre-trained models and a service to generate your own tailored models. The services provided are fast, scalable, and easy to use. The following are the services that Google provides at an unprecedented scale and speed for your applications:

Cloud AutoML Beta

This is a suite of machine learning products with the help of which developers with limited machine learning expertise can train high-quality models specific to their business needs. It provides a simple GUI to train, evaluate, improve, and deploy models based on your own data.

Read also: AmoebaNets: Google's new evolutionary AutoML

Google Cloud Machine Learning (ML) Engine

Google Cloud Machine Learning Engine is a service that offers training and prediction services to enable developers and data scientists to build superior machine learning models and deploy them in production. You don't have to worry about infrastructure and can instead focus on model development and deployment. It offers two types of predictions:

Online prediction deploys ML models with serverless, fully managed hosting that responds in real time with high availability.

Batch prediction is cost-effective and provides unparalleled throughput for asynchronous applications.

Read also: Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)

Google BigQuery

This is a cloud data warehouse for data analytics. It uses SQL and provides Java Database Connectivity (JDBC) and Open Database Connectivity (ODBC) drivers to make integration fast and easy. It provides benefits like auto-scaling and high-performance streaming to load data. You can create reports and dashboards using your favorite BI tool, like Tableau, MicroStrategy, or Looker.
Read also: Getting started with Google Data Studio: An intuitive tool for visualizing BigQuery Data

Dialogflow Enterprise Edition

Dialogflow is an end-to-end, build-once-deploy-everywhere development suite for creating conversational interfaces for websites, mobile applications, popular messaging platforms, and IoT devices. Dialogflow Enterprise Edition users have access to Google Cloud Support and a service level agreement (SLA) for production deployments.

Read also: Google launches the Enterprise edition of Dialogflow, its chatbot API

Cloud Speech-to-Text

Google Cloud Speech-to-Text allows you to convert speech to text by applying neural network models. 120 languages are supported by the API, which will help you extend your user base. It can process both real-time streaming and prerecorded audio.

Read also: Google announce the largest overhaul of their Cloud Speech-to-Text

Microsoft Azure AI at a glance

The Azure platform consists of various AI tools and services that can help you build smart applications. It provides Cognitive Services and Conversational AI with Bot tools, which facilitate building custom models with Azure Machine Learning for any scenario. You can run AI workloads anywhere at scale using its enterprise-grade AI infrastructure. The following are services provided by Azure AI to help you achieve maximum productivity and reliability:

Pre-built services

You need not be an expert in data science to make your systems more intelligent and engaging. The pre-built services come with high-quality RESTful intelligent APIs for the following:

Vision: Make your apps identify and analyze content within images and videos. Provides capabilities such as image classification, optical character recognition in images, face detection, person identification, and emotion identification.

Speech: Integrate speech processing capabilities into your app or services, such as text-to-speech, speech-to-text, speaker recognition, and speech translation.

Language: Your application or service will understand the meaning of unstructured text or the intent behind a speaker's utterances. It comes with capabilities such as text sentiment analysis, key phrase extraction, and automated and customizable text translation.

Knowledge: Create knowledge-rich resources that can be integrated into apps and services. It provides features such as Q&A extraction from unstructured text, knowledge base creation from collections of Q&As, and semantic matching for knowledge bases.

Search: Using the Search API, you can find exactly what you are looking for across billions of web pages. It provides features like ad-free, safe, location-aware web search, Bing visual search, custom search engine creation, and many more.

Custom services

Azure Machine Learning is a fully managed cloud service which helps you to easily prepare data, and build and train your own models:

You can rapidly prototype on your desktop, then scale up on VMs or scale out using Spark clusters.

You can manage model performance, identify the best model, and promote it using data-driven insight.

Deploy and manage your models everywhere. Using Docker containers, you can deploy models into production faster, in the cloud, on-premises, or at the edge.

Promote your best-performing models into production and retrain them whenever necessary.
Read also: Microsoft supercharges its Azure AI platform with new features

AWS machine learning services at a glance

Machine learning services provided by AWS help developers easily add intelligence to any application with pre-trained services. For training and inference, it offers a broad array of compute options with powerful GPU-based instances, compute- and memory-optimized instances, and even FPGAs. You get to choose from a set of services for data analysis including data warehousing, business intelligence, batch processing, stream processing, and data workflow orchestration. The following are the services provided by AWS:

AWS machine learning applications

Amazon Comprehend: This is a natural language processing (NLP) service that identifies relationships and finds insights in text using machine learning. It recognizes the language of the text, understands how positive or negative it is, and extracts key phrases, places, people, brands, or events. It then analyzes text using tokenization and parts of speech, and automatically organizes a collection of text files by topic.

Amazon Lex: This service provides the same deep learning technologies used by Amazon Alexa to developers, helping them build sophisticated, natural language, conversational bots easily. It comes with advanced deep learning functionalities like automatic speech recognition (ASR) and natural language understanding (NLU) to facilitate a more lifelike conversational interaction with users.

Amazon Polly: This text-to-speech service produces speech that sounds like a human voice using advanced deep learning technologies. It provides dozens of lifelike voices across a variety of languages. You can simply select the ideal voice and build speech-enabled applications that work in many different countries.

Amazon Rekognition: This service can identify objects, people, text, scenes, and activities, as well as any inappropriate content, in an image or a video. It also provides highly accurate facial analysis and facial recognition on images and video.

Read also: AWS makes Amazon Rekognition, its image recognition AI, available for Asia-Pacific developers

AWS machine learning platforms

Amazon SageMaker: This is a platform that removes the complexities in the machine learning process, from building to deploying a model. It is a fully managed platform that helps developers and data scientists quickly and easily build, train, and deploy machine learning models at any scale.

AWS DeepLens: This is a fully programmable video camera, which comes with tutorials, code, and pre-trained models designed to expand deep learning skills. It provides sample projects giving you practical, hands-on experience of deep learning in less than 10 minutes. Models trained in Amazon SageMaker can be sent to AWS DeepLens with just a few clicks from the AWS Management Console.

Amazon ML: This is a service that provides visualization tools and wizards that guide you through creating a machine learning model without having to learn complex ML algorithms and technology. Using simple APIs, it makes it easy to obtain predictions for your application. It is highly scalable, can generate billions of predictions daily, and serves those predictions in real time and at high throughput.

Read also: Amazon Sagemaker makes machine learning on the cloud easy.
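Before moving on to the deep learning offerings, here is a flavor of how the pre-trained application services above are consumed in code: a minimal sketch using the boto3 SDK to ask Amazon Comprehend for the sentiment of a sentence. It assumes AWS credentials are already configured, and the example text is arbitrary.

```python
import boto3

# Create a client for the Amazon Comprehend NLP service
comprehend = boto3.client("comprehend", region_name="us-east-1")

response = comprehend.detect_sentiment(
    Text="The new release is fast, reliable, and a joy to use.",
    LanguageCode="en",
)

# The service returns a label plus a confidence score for each sentiment class
print(response["Sentiment"])        # e.g. POSITIVE
print(response["SentimentScore"])   # Positive/Negative/Neutral/Mixed scores
```

No model building, training, or hosting is involved; the intelligence is entirely behind the API call, which is precisely the MLaaS value proposition.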
Deep Learning on AWS

AWS Deep Learning AMIs: These provide the infrastructure and tools to accelerate deep learning in the cloud, at any scale. To train sophisticated, custom AI models, or to experiment with new algorithms, you can quickly launch Amazon EC2 instances which come pre-installed with popular deep learning frameworks such as Apache MXNet and Gluon, TensorFlow, Microsoft Cognitive Toolkit, Caffe, Caffe2, Theano, Torch, PyTorch, Chainer, and Keras.

Apache MXNet on AWS: This is a fast and scalable training and inference framework with an easy-to-use, concise API for machine learning. It allows developers of all skill levels to get started with deep learning in the cloud, on edge devices, and in mobile apps using Gluon. You can build linear regressions, convolutional networks, and recurrent LSTMs for object detection, speech recognition, recommendation, and personalization in just a few lines of Gluon code.

TensorFlow on AWS: You can quickly and easily get started with deep learning in the cloud using TensorFlow. AWS provides a fully managed TensorFlow experience with Amazon SageMaker. You can also use the AWS Deep Learning AMIs to build a custom environment and workflow with TensorFlow and other popular frameworks such as Apache MXNet and Gluon, Caffe, Caffe2, Chainer, Torch, Keras, and Microsoft Cognitive Toolkit.

Conclusion

Machine learning and artificial intelligence can be expensive - skills and resources can cost a lot. For that reason, MLaaS is going to be a hugely influential development within cloud. Yes, the range of services on offer from AWS, Azure, and GCP is impressive, but it's really the ease and convenience that is most remarkable. With these services it's easy to set up and run machine learning algorithms that enhance business processes and operations, customer interactions, and overall business strategy. You don't need a PhD, and you don't need to code algorithms from scratch. The MLaaS market will likely continue to grow as more companies realise the potential machine learning has for their business - however, whether anyone can deliver a better set of services than the established cloud providers remains to be seen.

Predictive Analytics with AWS: A quick look at Amazon ML
Microsoft supercharges its Azure AI platform with new features
AmoebaNets: Google's new evolutionary AutoML

How Artificial Intelligence and Machine Learning can turbocharge a Game Developer's career

Guest Contributor
06 Sep 2018
7 min read
Gaming - whether board games or games set in the virtual realm - has been a massively popular form of entertainment since time immemorial. In the pursuit of creating more sophisticated, thrilling, and intelligent games, game developers have delved into ML and AI technologies to fuel innovation in the gaming sphere. The gaming domain is the ideal experimentation bed for evolving technologies, because not only do games put up complex and challenging problems for ML and AI to solve, they also pose as a ground for creativity - a meeting ground for machine learning and the art of interaction.

Machine Learning and Artificial Intelligence in Gaming

The reliance on AI for gaming is not a recent development. In fact, it dates back to 1949, when the famous cryptographer and mathematician Claude Shannon made his musings public about how a supercomputer could be made to master chess. Then again, in 1952, a graduate student in the UK developed an AI that could play tic-tac-toe with ultimate perfection.

Source: Medium

However, it isn't just ML and AI that are progressing through experimentation on games. Game development, too, has benefited a great deal from these pioneering technologies. AI and ML have helped enhance the gaming experience on many fronts, such as game design, the interactive quotient, and the inner functionality of games. These AI use cases focus on two primary things: one is to impart enhanced realism to the virtual gaming environment, and the second is to create a more naturalistic interface between the gaming environment and the players.

As of now, the focus of game developers, data scientists, and ML researchers lies in two specific categories of the gaming domain - games of perfect information and games of imperfect information. In games of perfect information, a player is aware of all aspects of the game throughout the playing session, whereas in games of imperfect information, players are oblivious to specific aspects of the game.

When it comes to games of perfect information such as chess and Go, AI has shown various instances of overpowering human intelligence. Back in 1997, IBM's Deep Blue successfully defeated world chess champion Garry Kasparov in a six-game match. In 2016, Google's AlphaGo emerged as the victor in a Go match, scoring 4-1 after defeating the South Korean Go champion Lee Sedol. One of the most advanced chess AIs developed yet, Stockfish, uses a combination of advanced heuristics and brute force to compute numeric values for each and every move in a specific position in chess. It also effectively eliminates bad moves using the alpha-beta pruning search algorithm, a toy version of which is sketched below.
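As a taste of the underlying idea, here is a toy sketch in Python (an illustration for this article, not Stockfish's actual implementation) of minimax search with alpha-beta pruning for tic-tac-toe - the same principle chess engines apply at vastly greater scale.

```python
# Toy minimax with alpha-beta pruning for tic-tac-toe.
# The board is a list of 9 cells: 'X', 'O', or ' '.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def alphabeta(board, player, alpha=-2, beta=2):
    """Score from X's perspective: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0  # draw
    for i in range(9):
        if board[i] == ' ':
            board[i] = player
            score = alphabeta(board, 'O' if player == 'X' else 'X', alpha, beta)
            board[i] = ' '  # undo the move
            if player == 'X':
                alpha = max(alpha, score)
            else:
                beta = min(beta, score)
            if alpha >= beta:
                break  # prune: the opponent will never allow this line of play
    return alpha if player == 'X' else beta

print(alphabeta([' '] * 9, 'X'))  # 0: perfect play from both sides is a draw
```

The pruning step is the key: whole subtrees are discarded as soon as it is clear a rational opponent would never steer the game into them, which is what makes deep search feasible.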
While the progress and contribution of AI and ML to the field of games of perfect information is laudable, researchers are now intrigued by games of imperfect information, which offer much more challenging situations that are essentially difficult for machines to learn and master. Thus, the next evolution in the world of gaming will be to create spontaneous gaming environments using AI technology, in which developers build only the gaming environment and its mechanics instead of creating a game with pre-programmed, scripted plots. In such a scenario, the AI will have to confront and solve spontaneous challenges with personalized scenarios generated on the spot. Games like StarCraft and StarCraft II have stirred up massive interest among game researchers and developers. In these games, the players are only partially aware of the gaming aspects, and the game is largely determined not just by the AI's moves and the previous state of the game, but also by the moves of other players. Since you have little knowledge of your rival's moves in these games, you have to take decisions on the go, and your moves have to be spontaneous.

The recent win of OpenAI Five over amateur human players in Dota 2 is a good case in point. OpenAI Five is a team of five neural networks that leverages an advanced version of Proximal Policy Optimization and uses a separate LSTM to learn identifiable strategies. The progress of OpenAI Five shows that even without human data, reinforcement learning can facilitate long-term planning, thus allowing us to make further progress in games of imperfect information.

Career in Game Development with ML and AI

As ML and AI continue to penetrate the gaming industry, they are creating a huge demand for talented and skilled game developers who are well-versed in these technologies. Today, game development is at a place where it's no longer necessary to build games using time-consuming manual techniques. ML and AI have made the task of game developers easier: by leveraging these technologies, they can design and build innovative gaming environments and test them automatically.

The integration of AI and ML in the gaming domain is giving birth to new job positions like Gameplay Software Engineer (AI), Gameplay Programmer (AI), and Game Security Data Scientist, to name a few. The salaries of developers with AI/ML skills stand in stark contrast to those of traditional game developers. While the average salary of game developers is usually around $44,000, it can scale up to and over $120,000 if one possesses AI/ML skills.

Gameplay Engineer

Average salary: $73,000 - $116,000

Gameplay engineers are usually part of the core game dev team and are entrusted with the responsibility of enhancing the existing gameplay systems to enrich the player experience. Companies today demand gameplay engineers who are proficient in C/C++ and well-versed in AI/ML technologies.

Gameplay Programmer

Average salary: $98,000 - $149,000

Gameplay programmers work in close collaboration with the production and design teams to develop cutting-edge features in existing and upcoming gameplay systems. Programming skills are a must, and knowledge of AI/ML technologies is an added bonus.

Game Security Data Scientist

Average salary: $73,000 - $106,000

The role of a game security data scientist is to combine security and data science approaches to detect anomalies and fraudulent behavior in games. This calls for a high degree of expertise in AI, ML, and other statistical methods.

With impressive salaries and exciting job opportunities cropping up fast in the game development sphere, the industry is attracting some major talent. Game developers and software developers around the world are choosing the field due to the promise of rapid career growth. If you wish to bag better and more challenging roles in the domain of game development, you should definitely try to upskill your talent and knowledge base by mastering the fields of ML and AI.

Packt Publishing is the leading UK provider of Technology eBooks, Coding eBooks, Videos and Blogs, helping IT professionals to put software to work. It offers several books and videos on game development with AI and machine learning. It's never too late to learn new disciplines and expand your knowledge base.
There are numerous online platforms that offer great artificial intelligence courses. The perk of learning from a registered online platform is that you can learn and grow at your own pace and according to your convenience. So, enroll yourself in one and spice up your career in game development!

About Author:

Abhinav Rai is the Data Analyst at UpGrad, an online education platform providing industry-oriented programs in collaboration with world-class institutes, some of which are MICA, IIIT Bangalore, BITS, and various industry leaders including MakeMyTrip, Ola, Flipkart, etc.

Best game engines for AI game development
Implementing Unity game engine and assets for 2D game development [Tutorial]
How to use arrays, lists, and dictionaries in Unity for 3D game development


New cybersecurity threats posed by artificial intelligence

Savia Lobo
05 Sep 2018
6 min read
In 2017, the cybersecurity firm Darktrace reported a novel attack that used machine learning to observe and learn normal user behavior patterns inside a network. The malignant software then began to mimic normal behavior, blending into the background and becoming difficult for security tools to spot.

Many organizations are exploring the use of AI and machine learning to secure their systems against malware and cyber attacks. However, given their self-learning nature, these AI systems have now reached a level where they can be trained to be a threat to systems, i.e., to go on the offensive. This brings us to a point where we should be aware of the different threats that AI poses to cybersecurity and how we should be careful while dealing with them.

What cybersecurity threats does AI pose?

Hackers use AI as an effective weapon to intrude into organizations

AI not only helps in defending against cyber attacks but can also facilitate them. These AI-powered attacks can even bypass traditional means of countering attacks. Steve Grobman, chief technology officer at McAfee, said, "AI, unfortunately, gives attackers the tools to get a much greater return on their investment."

A simple example of hackers using AI to launch an attack is spear phishing. AI systems, with the help of machine learning models, can easily mimic humans by crafting convincing fake messages. Using this art, hackers can carry out a greater volume of phishing attacks. Attackers can also use AI to create malware that fools sandboxes, the programs that try to spot rogue code before it is deployed in companies' systems.

Machine learning poisoning

Attackers can learn how a machine learning workflow functions, and once they spot any vulnerability, they can try to confuse the ML models. This is known as machine learning poisoning. The process is simple: the attacker just needs to poison the data pool from which the algorithm is learning.

To date, we have trusted CNNs in areas such as image recognition and classification, and autonomous vehicles too use CNNs to interpret street scenes. CNNs depend on training resources (which can come from the cloud or third parties) to function effectively. Attackers can poison these sources by planting backdoor images or via a man-in-the-middle attack, where the attacker intercepts the data sent to a cloud GPU service. Such cyber attacks are difficult to detect and can evade standard validation testing.

Bot cyber-criminals

We enjoy talking to chatbots without even realizing how much we are sharing with them. Chatbots can also be programmed to keep up conversations with users in a way that sways them into revealing their personal or financial information, attachments, and so on. In 2016, a Facebook bot represented itself as a friend and tricked 10,000 Facebook users into installing malware; once a machine was compromised, the malware hijacked the victim's Facebook account. AI-enabled botnets can likewise exhaust human resources via online portals and phone support.

Most of us using AI conversational bots such as Google Assistant or Amazon's Alexa do not realize how much they know about us. Being IoT-driven technology, they have the ability to always listen, even to the private conversations happening around them. Moreover, some chatbots are ill-equipped for secure data transmission, lacking protections such as HTTPS protocols or transport-level authentication, and can be easily exploited by cybercriminals.
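To see how simple data poisoning can be in principle, here is a minimal sketch (an illustration for this article, using a crude label-flipping attack on synthetic scikit-learn data): a classifier trained on a poisoned data pool typically performs measurably worse than one trained on clean data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The attack: flip the labels of 30% of the training pool at random
rng = np.random.RandomState(0)
flip = rng.rand(len(y_tr)) < 0.30
y_poisoned = np.where(flip, 1 - y_tr, y_tr)

poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean model accuracy:   ", clean_model.score(X_te, y_te))
print("poisoned model accuracy:", poisoned_model.score(X_te, y_te))
```

Real-world poisoning is subtler (backdoor images, intercepted training traffic), but the principle is the same: corrupt what the algorithm learns from, and you corrupt what it learns.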
Cybersecurity in the age of AI attacks

As machine-driven cyber threats keep evolving, policymakers should work closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI. Several measures can help.

Conducting deliberate red team exercises in the AI/cybersecurity domain, similar to the DARPA Cyber Grand Challenge but across a wider range of attacks (including social engineering, and vulnerability exploitation beyond memory attacks). This will help us better understand the skill levels required to carry out certain attacks and defenses, and how well they work in practice.

Disclosing AI zero-day vulnerabilities: these are software vulnerabilities that have not been made publicly known (so defenders have zero days to prepare for an attack exploiting them). It is good practice to disclose such vulnerabilities to affected parties before publishing widely about them, giving them an opportunity to develop a patch.

Testing security tools: software development and deployment tools have evolved to include a growing array of security-related capabilities (testing, fuzzing, anomaly detection, and so on). Researchers can envision tools that test and improve the security of AI components, and of systems integrated with AI components, during development and deployment so that they are less amenable to attack.

Using a central access licensing model: this model has been adopted in the industry for AI-based services such as sentiment analysis and image recognition, and it can also limit the malicious use of the underlying AI technologies. For instance, it can impose limits on the speed of use and prevent some large-scale harmful applications. Its terms and conditions can explicitly prohibit malicious use, allowing clear legal recourse.

Using deep machine learning systems to detect patterns of abnormal activity. Trained on these patterns, AI and machine learning systems can track information and deliver predictive analysis.

Using self-learning AI systems, or reinforcement learning systems, to learn the behavioral patterns of opposing AI systems and adapt so as to combat malicious intrusion.

Applying transfer learning to any new AI system trained to defend against AI. Here, the system can detect novel cyber attacks by training on knowledge obtained from other labelled and unlabelled data sets containing different types of attacks, and feeding that representation to a supervised classifier.

Conclusion

AI is being used by hackers on a large scale, and given its potential for finding patterns, a key to finding systemic vulnerabilities, it could soon become unstoppable. Cybersecurity is a domain where vast amounts of data are available, be it personal, financial, or public, and hackers find ways to obtain this information secretly. The threat can escalate quickly, because an advanced AI can educate itself, learn the methods adopted by hackers, and come back with far more devastating ways of hacking.

Skepticism welcomes Germany's DARPA-like cybersecurity agency – The federal agency tasked with creating cutting-edge defense technology
6 artificial intelligence cybersecurity tools you need to know
Defending Democracy Program: How Microsoft is taking steps to curb increasing cybersecurity threats to democracy

How to beat Cyber Interference in an Election process

Guest Contributor
05 Sep 2018
6 min read
The battle for political influence and power is transcending all boundaries and borders. There are many interests at stake, and some parties, organizations, and groups are willing to pull out the "big guns" to get what they want. "Hacktivists" are gaining steam and prominence these days. However, governmental surveillance and even criminal (or, at the very least, morally questionable) activity can happen too, and when it does, the scandal rises to the most prominent headlines in the world's most influential papers.

That was the case in the United States' 2016 presidential election and in France's most recent process. Speaking of the former, congressional and federal investigations revealed horrifying details about Russian espionage activity in the heat of the battle between Democrat Hillary Clinton and Republican Donald Trump, who ended up taking the honors. As for the latter, the French had better luck in their quest to prevent the Russians from wreaking havoc in the digital world. In fact, it wasn't luck: it was due diligence, a sense of responsibility, and a clever use of past experiences (such as what happened to the Americans) to learn and adjust.

Russia's objective was to influence the outcome of the process by publishing top-secret and compromising conversations between high-ranking officials. In their attempt to interfere in the American elections, they managed to get into networks and systems controlled by the state to publish fake news, buy Facebook ads, and employ bots to spread the fake news pieces.

How to stop cyber interference during elections

Everything should start with awareness of how to avoid hacking attacks, as well as smoother communication and integration between security layers. Since the foundation of it all is the law, each country needs to continually upgrade its legal framework to have all systems ready to avoid and fight cyber interference in elections and in all facets of life. Diplomatic relationships need to clarify just how far a nation state can go in defending its sovereignty against such crimes.

Pundits and experts in the matter state that until the system is hacking-proof and reliable, every state needs to gather and count hand votes as a backup to digital votes. Regarding this, some advocates recently told Congress that the United States should implement paper ballots that provide physical evidence of every vote, effectively replacing the unreliable and vulnerable machines currently in use. According to J. Alex Halderman, a computer science professor, such ballots might look "low tech" to the average eye, but they represent a "reliable and cost-effective defense."

Paying due attention to every detail

Government authorities need to pay closer attention to propaganda (especially Russian propaganda), because it may reveal patterns about a nation's intentions. By now, we all know what the Russians are capable of, and figuring out their intentions would go a long way toward helping the country prepare for future attacks. The American government may also require Russian media and social platforms to register under FARA, the Foreign Agents Registration Act; that way, there would be a more complete database of foreign agents of influence. One of the most critical corrective measures for the future is prohibiting the purchase of advertising that directly influences the outcome of certain processes and elections.
Handing out diplomatic sanctions just isn't enough

Lately, the US Congress, with President Trump's approval, has been handing out sanctions to people involved in the 2016 cyber attack. However, a far more effective measure would be enhancing cyber defense, which can offer immediate detection of threats and is well-equipped to end network intrusions. According to scientist Thomas Schelling, fear of consequences can be a powerful motivator, but it is difficult to deter individuals or organizations that cannot be easily tracked and identified, and that act behind irrational national ideologies and political goals. Adopting cyber defense, by contrast, can stop an intrusion in time and offer more efficient punishment. Active defense is legally viable and very capable, because it can disrupt the perpetrators' outside networks. Enabling the "hack back" approach can allow countries to take justice into their own hands in case of a cyber attack attempt; the next step would be lowering the threshold required to enable this kind of response.

Cyber defense is the way to go

Cyber defense measures are versatile and have proven effective. Take the example of France: in the most recent elections, French intelligence watched Russian cyber activity for the duration of Emmanuel Macron's election campaign. Some strategies include letting the hackers steal fake files and documents, misleading them and making them waste their time. Cyber defenders can also embed beacons that disclose the attackers' current location or interfere with their networks; there is even the possibility of erasing stolen information. In the case of France, cyber defense specialists stayed one step ahead of the Russians: they created false email accounts and introduced numerous fake documents and files that discouraged the attackers.

Known systems, networks, and platforms

The automated capabilities of cyber defense can counter malicious attempts and digital threats. For example, the LightCyber Magna platform can analyze large amounts of network information; such a system might have been able to stop Russian hackers from installing malware at the DNC (Democratic National Committee). Another cyber defense tool, Palo Alto Networks Traps, is known to block malware as potent as the WannaCry ransomware, which encrypted more than 200,000 computers in almost a hundred countries; numerous people lost their data or had to pay thousands of dollars to recover it.

VPN: an efficient cybersecurity tool

Another perfectly usable cyber defense tool is the Virtual Private Network. VPNs such as Surfshark encrypt all traffic shared online, hide the user's IP address, and effectively provide anonymous, private browsing. Cyber defense isn't a luxury that only a handful of countries can afford: it is a necessity, a tool that helps combat cyber interference not only in elections but in every facet of life and international relationships.

Author Bio

Harold is a cybersecurity consultant and a freelance blogger. He's currently working on a cybersecurity campaign to raise awareness around the threats that businesses can face online.

Top 5 cybersecurity myths debunked
Skepticism welcomes Germany's DARPA-like cybersecurity agency – The federal agency tasked with creating cutting-edge defense technology
How cybersecurity can help us secure cyberspace

A non programmer’s guide to learning Machine learning

Natasha Mathur
05 Sep 2018
11 min read
Artificial intelligence might seem intimidating, but it isn't actually as complex as you might think. Many of the tools developed over the last decade or so have helped make artificial intelligence and machine learning more accessible to engineers with varying degrees of experience and knowledge. Today, we've got to a stage where it's accessible even to people who have barely written a line of code in their life! Pretty exciting, right? But if you're completely new to the field, it can be challenging to know how to get started - fortunately, we're about to help you overcome that first hurdle. If you are an AI denier, then be sure to first read 'why learn Machine Learning as a non-techie' before you move forward. A strong purpose and belief is the first step to learning anything new. Alright, here's how you can get started with artificial intelligence and machine learning techniques quickly.

0. Use a free MLaaS or a no-code interactive machine learning tool to experience first-hand what is possible with machine learning. Some popular no-code machine-learning-as-a-service options are Microsoft Azure, BigML, Orange, and Amazon ML. See Q2 under the FAQ section below for more on this topic.

1. Learn linear algebra. Linear algebra is the elementary unit of ML. It helps you comprehend the theory behind machine learning algorithms and how they work, and it reinforces the related skills, such as statistics and programming, that also help in ML.
Learning resources:
Linear Algebra for Beginners: Open Doors to Great Careers
Linear algebra Basics

2. Learn just enough Python, or any programming language. You can start with any language that interests you, but we suggest Python as it's great for people who are new to programming. Its simple syntax makes it easy to learn, and you'll be able to implement ML algorithms quickly. It also has a rich development ecosystem offering a ton of machine learning libraries and frameworks, such as Scikit-learn, Lasagne, Numpy, Scipy, Theano, and Tensorflow.
Learning resources:
Python Machine Learning
Learn Python in 7 Days
Python for Beginners 2017 [Video]
Learn Python with codecademy
Python editor for beginner programmers

3. Learn basic probability theory and statistics. A lot of fundamental statistical and probability theory forms the basis of ML. You've probably already learned probability and statistics in school, which makes it easier to dive into the advanced statistics used in ML. Machine learning, in its currently widely used form, is a way to predict odds and see patterns. Knowing statistics and probability will help you understand why any machine learning algorithm works. For example, your grounding in this area will help you ask the right questions, choose the right set of algorithms, and know what to expect as answers from your ML model on questions such as:
What are the odds of this person also liking this movie, given their current movie watching choices? (collaborative filtering and content-based filtering)
How similar is this user to the group of users who bought a bunch of stuff on my site? (clustering, collaborative filtering, and classification)
Could this person be at risk of cancer, given a certain set of traits and health indicator observations? (logistic regression)
Should you buy that stock? (decision tree)
The short sketch after this step shows how linear algebra and probability combine in one such prediction. Also, check out our interview with James D. Miller to know more about why learning stats is important in this field.
Learning resources:
Statistics for Data Science [Video]
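To see how these pieces fit together, here is a tiny illustrative sketch: a logistic-regression-style risk estimate is just a dot product (linear algebra) passed through the logistic function (probability). The weights and feature values below are made up purely for illustration.

```python
# How linear algebra and probability meet in ML: a logistic-regression
# prediction is a dot product pushed through the logistic (sigmoid) function.
import numpy as np

weights = np.array([0.8, -0.4, 1.2])    # one weight per feature (hypothetical)
features = np.array([1.0, 2.5, 0.3])    # one person's indicators (hypothetical)

score = weights @ features               # linear algebra: a dot product
probability = 1 / (1 + np.exp(-score))   # probability: the logistic function

print(f"estimated odds of 'yes': {probability:.2f}")
```

Every model you meet later, however deep, is built from variations on exactly this pattern.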
4. Learn machine learning algorithms. Do not get intimidated! You don't have to be an expert to learn ML algorithms. Knowing the basic ML algorithms widely used in real-world applications, such as linear regression, naive Bayes, and decision trees, is enough to get you started. Learn what they do and how they are used in machine learning.

5. Learn Numpy, Scikit-learn, Keras, or any other popular machine learning framework. It can be confusing at first to decide which framework to learn, as each has its own advantages and disadvantages. Numpy is a linear algebra library, useful for performing mathematical and logical operations and for working with large multidimensional arrays. Scikit-learn helps with quick implementation of popular algorithms on datasets, as just one line of code makes different algorithms available to you. Keras is minimalistic and straightforward with high levels of extensibility, so it is easier to approach.
Learning resources:
Hands-on Machine Learning with TensorFlow [Video]
Hands-on Scikit-learn for Machine Learning [Video]

If you have made it this far, it is time to put your learning into practice. Go ahead and create a simple linear regression model using a publicly available dataset in your area of interest, as shown in the sketch below. Kaggle, ourworldindata.org, the UC Irvine Machine Learning Repository, and elitedatascience all have rich sets of clean datasets in varied fields.

Now it is necessary to commit and put in daily effort to practise these skills. Quora, Reddit, Medium, and Stack Overflow will be your best friends when it comes to resolving doubts about any of these skills. Data Helpers is another great resource that provides newcomers with help on queries about entering the ML field and related topics. Additionally, once you start getting the hang of these skills, identify your strengths and interests to realign your career goals. Research the kind of work you want to put your newly gained machine learning skills to use in. It needn't be professional or serious; it just needs to be something you deeply care about or are passionate about. This will pull you through your learning milestones, should you feel low at some point.

Also, don't forget to collaborate with other people and learn from them. You can work with web developers, software programmers, data analysts, data administrators, game developers, and more. Finally, keep yourself updated with all the latest happenings in the ML world: follow top experts and influencers on social media, top blogs on machine learning, and conferences. Once you are done checking these steps off your list, you'll be ready to start off on your ML project.
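As a starting point, here is a minimal sketch of such a first model. It uses the small diabetes dataset bundled with scikit-learn purely so the example runs without a download; swap in a dataset from one of the sources above once you're comfortable.

```python
# A first end-to-end model: linear regression on scikit-learn's bundled
# diabetes dataset (chosen only because it needs no download).
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)

# Hold out a test set so we measure performance on data the model never saw.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on unseen data:", model.score(X_test, y_test))
```

Ten lines, one trained model, one honest evaluation: that loop of fit-then-score on held-out data is the habit worth building early.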
Now, let's look at the questions most frequently asked by beginners in the field of machine learning.

Frequently asked questions by beginners in ML

As a beginner, it's natural to have a lot of questions about ML. We'll address the top three frequently asked questions from beginners and non-programmers:

Q1. I am looking to make a career in machine learning but I have no prior programming experience. Do I need to know programming for machine learning?

In a nutshell, yes. If you want a career in machine learning, having some form of programming knowledge really helps. As mentioned earlier in this article, learning a programming language helps you implement ML algorithms, and it also lets you understand the internal mechanics of machine learning. So having programming as a prior skill is great. Again, as mentioned before, you can get started with Python, which is the easiest and most common language for ML.

However, programming is just one part of machine learning. For instance, "machine learning engineers" typically write more code than they develop models, while "research scientists" work more on modelling and analyzing different models. Now, ML is based on the principles of statistical inference, and to talk statistically to the computer we need a language; that is where coding comes in. So even if the nature of your job in ML does not require you to code as much, some amount of coding is still required.
Read also:
Why is Python so good for AI and ML? 5 Python Experts Explain
Top languages for Artificial Intelligence development

Q2. Are there any tools that can help me with machine learning without touching a single line of code?

Yes. With the rise of MLaaS (machine learning as a service), there are tools that help you get started with machine learning right away. These are especially useful for business applications of ML, such as predictive modelling and clustering.
Read also: How MLaaS is transforming cloud

Some of the most popular ones are:

BigML: This cloud-based web service lets you upload your data, prepare it, and run algorithms on it. It's great for people without extensive data science backgrounds, offering clean and easy-to-use interfaces for configuring algorithms (decision trees) and reviewing the results. Being focused "only" on machine learning, it comes with a wide set of features, all well integrated within a usable web UI. It also offers an API, so if you like it you can build an application around it.

Microsoft Azure: The Microsoft Azure ML Studio is a "GUI-based integrated development environment for constructing and operationalizing Machine Learning workflow on Azure". Via ML Studio, people without a data science background, including non-programmers, can build data models using drag-and-drop gestures and simple data flow diagrams. ML Studio's library of sample experiments also saves a lot of time.
Learning resources:
Microsoft Azure Machine Learning
Machine Learning In The Cloud With Azure ML [Video]

Orange: This is an open source machine learning and data visualization studio for novices and experts alike. It provides a toolbox comprising text mining (topic modelling) and image recognition, plus a visual programming design tool that lets you connect data preparation, algorithms, and result evaluation to create machine learning "programs". It offers over 100 widgets for the environment, and there is also a Python API and library you can integrate into your application.

Amazon ML: Amazon ML is a part of Amazon Web Services (AWS) that combines powerful machine learning algorithms with interactive visual tools to guide you toward easily creating, evaluating, and deploying machine learning models. Whether you are a data scientist or a newbie, it offers ML services and tools tailored to your needs and level of expertise. Building ML models with Amazon ML consists of three operations: data analysis, model training, and evaluation.
Learning resources: Effective Amazon Machine Learning

Q3. Do I need to know advanced mathematics (college graduate level) to learn machine learning?

It depends. As mentioned earlier, an understanding of probability, statistics, and linear algebra can make your machine learning journey much easier and also help simplify your code. These topics help you understand the "why" behind the working of machine learning algorithms, which is quite fundamental to understanding ML. However, not knowing advanced mathematics is no excuse not to learn machine learning. There are plenty of libraries that make applying an ML algorithm to a task easier; one widely used example is Python's scikit-learn library, where a single line of code puts the most common algorithms at your disposal, ready to be used.

But if you want to go deeper into machine learning, knowing advanced mathematics is a prerequisite, as it will help you understand the algorithms, the formulas, how the learning is done, and many other machine learning concepts. And with so many courses and tutorials online, you can always learn advanced mathematics on the side while exploring machine learning.

So, we looked at the three questions asked most often by beginners in the field of machine learning. In the past, machine learning has given us self-driving cars, effective web search, speech recognition, and more. Machine learning is extremely pervasive; in fact, many researchers believe that ML is the best way to make progress towards human-level AI. Learning ML is not an easy task, but it's not next to impossible either. In the end, it all depends on the amount of dedication and effort that you're willing to put in to get a grasp of it. We've just touched the tip of the iceberg in this article; there's a lot more to know in machine learning, which you will get the hang of as you get your feet wet. That being said, all the best for the road ahead!

Facebook launches a 6-part ML video series
7 of the best ML conferences for the rest of 2018
Google introduces Machine Learning courses for AI beginners

5 ways artificial intelligence is upgrading software engineering

Melisha Dsouza
02 Sep 2018
8 min read
47% of digitally mature organizations, those with advanced digital practices, say they have a defined AI strategy (source: Adobe). It is estimated that AI-enabled tools alone will generate $2.9 trillion in business value by 2021, and 80% of enterprises are already investing in AI. The stats speak for themselves: AI clearly follows the motto "go big or go home". This explosive growth of AI across sectors of technology is beginning to show its colors in software development too. Shawn Drost, co-founder and lead instructor of the coding boot camp Hack Reactor, says that AI still has a long way to go and is currently impacting the workflow of only a small portion of software engineers on a minority of projects.

AI promises to change how organizations conduct business and to make applications smarter. It is only logical, then, that software development, the way we build apps, will be impacted by AI as well. Forrester Research recently surveyed 25 application development and delivery (AD&D) teams, and respondents said AI will improve planning, development, and especially testing. We can expect better software created under traditional environments.

5 areas of software engineering AI will transform

The five major spheres of software development where AI can help are software design, software testing, GUI testing, strategic decision making, and automated code generation. The majority of interest in applying AI to software development is already seen in automated testing and bug detection tools, followed by software design precepts and decision-making strategies, and finally by automating software deployment pipelines. Let's take an in-depth look at the areas of high and medium interest in software engineering impacted by AI, according to the Forrester Research report. (Chart source: Forbes.com)

#1 Software design

In software engineering, planning and designing a project from scratch requires designers to apply their specialized learning and experience to come up with alternative solutions before settling on a definite one. A designer begins with a vision of the solution, then moves back and forth investigating design changes until reaching the desired result. Making the correct design choices at every stage is a tedious and mistake-prone activity, which is why several AI developments have demonstrated the benefit of augmenting traditional methods with intelligent agents. The catch here is that the agent behaves like a personal assistant to the client, one capable of offering timely guidance on the most efficient way to carry out a design project. Take, for example, AIDA (the Artificial Intelligence Design Assistant), deployed by Bookmark, a website-building platform. Using AI, AIDA understands a user's needs and desires and uses this knowledge to create an appropriate website, selecting from millions of combinations to produce a website style, focus, imagery, and more, all customized for the user. In about two minutes, AIDA designs the first version of the website, and from that point it becomes a drag-and-drop operation. You can get a detailed overview of this tool on designshack.

#2 Software testing

Applications interact with each other through countless APIs. They leverage legacy systems and grow in complexity every day. This increase in complexity brings its fair share of challenges, many of which can be overcome by machine-based intelligence.
AI tools can be used to create test data, check data authenticity, analyze and improve coverage, and manage testing. Artificial intelligence, trained right, can help ensure the testing performed is error free, and testers freed from repetitive manual tests have more time to create new automated software tests with sophisticated features. Also, if software tests must be repeated every time source code is modified, repeating them manually is not only time-consuming but extremely costly; AI comes to the rescue once again by automating the testing for you. With AI-automated testing, one can increase the overall scope of tests, leading to an overall improvement in software quality.

Take, for instance, the Functionize tool. It enables users to test fast and release faster with AI-enabled cloud testing. Users just type a test plan in English and it is automatically converted into a functional test case. The tool allows one to elastically scale functional, load, and performance tests across every browser and device in the cloud, and it includes self-healing tests that update autonomously in real time. SapFix is another AI hybrid tool, deployed by Facebook, which can automatically generate fixes for specific bugs identified by Sapienz, and then propose those fixes to engineers for approval and deployment to production.

#3 GUI testing

Graphical user interfaces (GUIs) have become central to interacting with today's software. They are increasingly used in critical systems, and testing them is necessary to avert failures. With very few tools and techniques available to aid the process, testing GUIs is difficult, and currently used methods are largely ad hoc. They require the test designer to perform enormous manual tasks: developing test cases, identifying the conditions to check during test execution, determining when to check those conditions, and finally evaluating whether the GUI software has been adequately tested. Phew! That is a lot of work. And if the GUI is modified after being tested, the test designer must change the test suite and re-test. As a result, GUI testing today is resource intensive, and it is difficult to determine whether the testing is adequate.

Applitools is a GUI testing tool empowered by AI. The Applitools Eyes SDK automatically tests whether visual code is functioning properly. Applitools enables users to test their visual code just as thoroughly as their functional UI code, to ensure that the visual look of the application is as expected. Users can test how their application looks across multiple screen layouts to ensure they all fit the design, keep track of both the web page's behaviour and its look, and test everything they develop, from the functional behavior of an application to its visual appearance. (A minimal sketch of the underlying idea follows below.)
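Tools like Applitools are far smarter than this, layout-aware and AI-driven, but the core idea behind visual regression testing can be sketched in a few lines: compare a baseline screenshot against the current one and flag any difference. This is only an illustrative sketch using the Pillow imaging library; the file paths are hypothetical, and naive pixel comparison is exactly the brittleness that AI-based tools exist to overcome.

```python
# The core idea of visual regression testing, reduced to a pixel diff.
# Hypothetical file paths; real tools compare layouts, not raw pixels.
from PIL import Image, ImageChops

def screens_match(baseline_path: str, current_path: str) -> bool:
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return False
    # getbbox() returns None when the difference image is all black,
    # i.e. the two screenshots are pixel-identical.
    return ImageChops.difference(baseline, current).getbbox() is None

if __name__ == "__main__":
    print(screens_match("baseline/home.png", "current/home.png"))
```

Any rendering change, even an invisible one-pixel anti-aliasing shift, fails this check, which is why production visual-testing tools add perceptual and structural comparison on top.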
#4 Using artificial intelligence in strategic decision-making

Normally, developers have to go through a long process to decide what features to include in a product, and transforming business requirements into technology specifications requires a significant planning timeline. A machine learning solution trained on business factors and past development projects can analyze the performance of existing applications and help both engineering teams and business stakeholders, such as project managers, find solutions that maximize impact and cut risk. Machine learning can thus help software development companies speed up the process, deliver the product in less time, and increase revenue within a short span.

The AI Canvas is a well-known tool for strategic decision making. It helps identify the key questions and feasibility challenges associated with building and deploying machine learning models in the enterprise. The AI Canvas is a simple tool that helps enterprises organize what they need to know into seven categories: prediction, judgement, action, outcome, input, training, and feedback. Clarifying these seven factors for each critical decision throughout the organization helps identify opportunities for AI to either reduce costs or enhance performance.

#5 Automatic code generation / intelligent programming assistants

Coding a huge project from scratch is often labour intensive and time consuming, and an intelligent AI programming assistant can reduce the workload to a great extent. Researchers have tried to build systems that can write code before, but the problem is that these methods don't cope well with ambiguity: so many details are needed about what the target program should do that writing them down can be as much work as writing the code itself. With AI, the story can be flipped. Bayou, an AI-based intelligent programming assistant, began as an initiative aimed at extracting knowledge from online source code repositories like GitHub; users can try it out at askbayou.com. Bayou follows a method called neural sketch learning: it trains an artificial neural network to recognize high-level patterns in hundreds of thousands of Java programs by creating a "sketch" for each program it reads and associating that sketch with the "intent" behind the program. This DARPA initiative aims at making programming easier and less error prone. Sounds intriguing? Now that you know how this tool works, why not try it for yourself on i-programmer.info.

Summing it all up

Software engineering has seen massive transformation over the past few years. AI and software intelligence tools aim to make software development easier and more reliable. According to a Forrester Research report on AI's impact on software development, automated testing and bug detection tools use AI the most to improve software development. It will be interesting to see future developments in software engineering empowered by AI. I'm expecting faster, more efficient, more effective, and less costly software development cycles, while engineers and other development personnel focus on bettering their skills to make advanced use of AI in their processes.

Implementing Software Engineering Best Practices and Techniques with Apache Maven
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
15 millions jobs in Britain at stake with AI robots set to replace humans at workforce