
Tech Guides - Data

281 Articles

What the EU Copyright Directive means for developers - and what you can do

Richard Gall
11 Sep 2018
6 min read
Tomorrow, on Wednesday 12 September, the European Parliament will vote on amendments to the EU Copyright Bill, first proposed back in September 2016. This bill could have a huge impact on open source, software engineering, and even the future of the internet. Back in July, MEPs voted down a digital copyright bill that was incredibly restrictive. It asserted the rights of large media organizations to tightly control links to their stories and to impose copyright filters on user-generated content.

https://twitter.com/EFF/status/1014815462155153408

The vote tomorrow is an opportunity to amend aspects of the directive - that means many of the elements that were rejected in July could still find their way through.

What parts of the EU copyright directive are most important for software developers?

There are some positive aspects of the directive. To a certain extent, it could be seen as evidence of the European Union continuing a broader project to protect citizens by updating digital legislation - a move that GDPR began back in May 2018. However, there are many unintended consequences of the legislation. It's unclear whether the negative impact is down to any level of malicious intent from lawmakers, or is simply reflective of a significant level of ignorance about how the web and software work. There are three articles within the directive that developers need to pay particular attention to.

Article 13 of the EU Copyright Directive: copyright filters

Article 13 of the directive has perhaps had the most attention. Essentially, it will require "information society service providers" - user-generated information and content platforms - to use "recognition technologies" to protect against copyright infringement. This could have a severe impact on sites like GitHub, and by extension, the very philosophy of open collaboration and sharing on which they're built. It's for this reason that GitHub has played a big part in educating Brussels lawmakers about the possible consequences of the legislation. Last week, the platform hosted an event to discuss what can be done about tomorrow's vote. In it, Marten Mickos, CEO of cybersecurity company HackerOne, gave a keynote speech, saying that "Article 13 is just crap. It will benefit nobody but the richest, the wealthiest, the biggest - those that can spend tens of millions or hundreds of millions on building some amazing filters that will somehow know whether something is copyrighted or not."

https://youtu.be/Sm_p3sf9kq4

A number of MEPs in Brussels have, fortunately, proposed changes that would exclude software development platforms and instead focus the legislation on sites where users upload music and video. However, for those that believe strongly in an open internet, even these amendments would be a compromise that not only places an unnecessary burden on small sites that simply couldn't build functional copyright filters, but also opens a door to censorship online. A better alternative could be to ditch copyright filters and opt for licensing agreements instead. This is something put forward by German politician Julia Reda - if you're interested in policy amendments, you can read them in detail here.

[Image via commons.wikimedia.org]

Julia Reda is a member of the Pirate Party in Germany - she's a vocal advocate of internet freedoms and an important voice in the fight against many aspects of the directive (she wants the directive to be dropped in its entirety).
She's put together a complete list of amendments and alternatives here.

Article 11 of the EU Copyright Directive: the "link tax"

Article 11 follows the same spirit as Article 13. It gives large press organizations more control over how their content is shared and linked to online. It has been called the "link tax" - it could mean that you would need a license to link to content. According to news sites, this law would allow them to charge internet giants like Facebook and Google that link to their content. As Cory Doctorow points out in an article written for Motherboard in June, only smaller platforms would lose out - the likes of Facebook and Google could easily manage the cost. But there are other problems with Article 11. Not only could it, as Doctorow also writes, "crush scholarly and encyclopedic projects like Wikipedia that only publish material that can be freely shared," but it could also "inhibit political discussions". This is because the "link tax" will essentially allow large media organizations to fully control how and where their content is shared. "Links are facts," Doctorow argues, meaning that links are a vital component of public discourse, allowing the public to know who thinks what, and who said what.

Article 3 of the EU Copyright Directive: restrictions on data mining

Article 3 of the directive hasn't received as much attention as the two above, but it nevertheless has important implications for the data mining and analytics landscape. Essentially, this portion of the directive was originally aimed at restricting the data that can be mined for insights except in specific cases of scientific research. This was rejected by MEPs. However, it is still an area of fierce debate. Those that oppose it argue that restrictions on text and data mining could seriously hamper innovation and hold back many startups for whom data is central to the way they operate. However, given the relative success of GDPR in restoring some level of integrity to data (from a citizen's perspective), there are aspects of this article that might be worth building on as a basis for a compromise. With trust in the tech world at an all-time low, this could be a stepping stone to a more transparent and harmonious digital domain.

An open internet is worth fighting for - we all depend on it

The difficulty in unpicking the directive is that it's not immediately clear who it's defending. On the one hand, EU legislators will see this as something that defends citizens from everything that they think is wrong with the digital world (and, let's be honest, there are things that are wrong with it). Equally, those organizations lobbying for the change will, as already mentioned, want to present this as a chance to knock back tech corporations that have had it easy for too long. Ultimately, though, the intention doesn't really matter. What really matters are the consequences of this legislation, which could well be catastrophic. The important thing is that the conversation isn't owned by well-intentioned lawmakers who don't really understand what's at stake, or media conglomerates with their own interests in protecting their content from the perceived 'excesses' of a digital world whose creativity is mistaken for hostility. If you're an EU citizen, get in touch with your MEP today. Visit saveyourinternet.eu to help the campaign.
Read next:
German OpenStreetMap protest against “Article 13” EU copyright reform making their map unusable
YouTube’s CBO speaks out against Article 13 of EU’s controversial copyright law


6 most commonly used Java machine learning libraries

Fatema Patrawala
10 Sep 2018
15 min read
There are over 70 Java-based open source machine learning projects listed on the MLOSS.org website, and probably many more unlisted projects live on university servers, GitHub, or Bitbucket. In this article, we will review the major machine learning libraries and platforms in Java, the kinds of problems they can solve, the algorithms they support, and the kinds of data they can work with. This article is an excerpt from Machine Learning in Java, written by Bostjan Kaluza and published by Packt Publishing.

Weka

Weka, which is short for Waikato Environment for Knowledge Analysis, is a machine learning library developed at the University of Waikato, New Zealand, and is probably the most well-known Java library. It is a general-purpose library that is able to solve a wide variety of machine learning tasks, such as classification, regression, and clustering. It features a rich graphical user interface, a command-line interface, and a Java API. You can check out Weka at http://www.cs.waikato.ac.nz/ml/weka/.

At the time of writing, Weka contains 267 algorithms in total: data pre-processing (82), attribute selection (33), classification and regression (133), clustering (12), and association rules mining (7). Graphical interfaces are well-suited for exploring your data, while the Java API allows you to develop new machine learning schemes and use the algorithms in your applications. Weka is distributed under the GNU General Public License (GNU GPL), which means that you can copy, distribute, and modify it as long as you track changes in source files and keep it under GNU GPL. You can even distribute it commercially, but you must disclose the source code or obtain a commercial license.

In addition to several supported file formats, Weka features its own default data format, ARFF, which describes data by attribute-data pairs. It consists of two parts. The first part contains the header, which specifies all the attributes (that is, features) and their types, for instance, nominal, numeric, date, and string. The second part contains the data, where each line corresponds to an instance. The last attribute in the header is implicitly considered the target variable, and missing data are marked with a question mark. For example, the Bob instance written in ARFF file format would be as follows:

@RELATION person_dataset
@ATTRIBUTE `Name` STRING
@ATTRIBUTE `Height` NUMERIC
@ATTRIBUTE `Eye color` {blue, brown, green}
@ATTRIBUTE `Hobbies` STRING
@DATA
'Bob', 185.0, blue, 'climbing, sky diving'
'Anna', 163.0, brown, 'reading'
'Jane', 168.0, ?, ?

The file consists of three sections. The first section starts with the @RELATION <String> keyword, specifying the dataset name. The next section starts with the @ATTRIBUTE keyword, followed by the attribute name and type. The available types are STRING, NUMERIC, DATE, and a set of categorical values. The last attribute is implicitly assumed to be the target variable that we want to predict. The last section starts with the @DATA keyword, followed by one instance per line. Instance values are separated by commas and must follow the same order as the attributes in the second section.

Weka's Java API is organized in the following top-level packages:

weka.associations: These are data structures and algorithms for association rules learning, including Apriori, predictive apriori, FilteredAssociator, FP-Growth, Generalized Sequential Patterns (GSP), Hotspot, and Tertius.

weka.classifiers: These are supervised learning algorithms, evaluators, and data structures.
The package is further split into the following components:

weka.classifiers.bayes: This implements Bayesian methods, including naive Bayes, Bayes net, Bayesian logistic regression, and so on.
weka.classifiers.evaluation: These are supervised evaluation algorithms for nominal and numerical prediction, such as evaluation statistics, confusion matrix, ROC curve, and so on.
weka.classifiers.functions: These are regression algorithms, including linear regression, isotonic regression, Gaussian processes, support vector machine, multilayer perceptron, voted perceptron, and others.
weka.classifiers.lazy: These are instance-based algorithms such as k-nearest neighbors, K*, and lazy Bayesian rules.
weka.classifiers.meta: These are supervised learning meta-algorithms, including AdaBoost, bagging, additive regression, random committee, and so on.
weka.classifiers.mi: These are multiple-instance learning algorithms, such as citation k-nn, diverse density, MI AdaBoost, and others.
weka.classifiers.rules: These are decision tables and decision rules based on the separate-and-conquer approach, including Ripper, PART, Prism, and so on.
weka.classifiers.trees: These are various decision tree algorithms, including ID3, C4.5, M5, functional tree, logistic tree, random forest, and so on.

The remaining top-level packages are as follows:

weka.clusterers: These are clustering algorithms, including k-means, CLOPE, Cobweb, DBSCAN, hierarchical clustering, and farthest-first.
weka.core: These are various utility classes, data presentations, configuration files, and so on.
weka.datagenerators: These are data generators for classification, regression, and clustering algorithms.
weka.estimators: These are various data distribution estimators for discrete/nominal domains, conditional probability estimations, and so on.
weka.experiment: These are a set of classes supporting the configuration, datasets, model setups, and statistics necessary to run experiments.
weka.filters: These are attribute-based and instance-based selection algorithms for both supervised and unsupervised data preprocessing.
weka.gui: These are graphical interfaces implementing the Explorer, Experimenter, and Knowledge Flow applications. The Explorer allows you to investigate datasets and algorithms, as well as their parameters, and to visualize datasets with scatter plots and other visualizations. The Experimenter is used to design batches of experiments, but it can only be used for classification and regression problems. The Knowledge Flow implements a visual drag-and-drop user interface to build data flows, for example, load data, apply filter, build classifier, and evaluate it.
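As a quick illustration of the Java API, here is a minimal, hypothetical sketch that loads an ARFF file and cross-validates a J48 (C4.5) decision tree. The file path is an assumption; any dataset with a nominal class attribute, such as Weka's bundled weather data, would do:

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class WekaSketch {
    public static void main(String[] args) throws Exception {
        // Load a dataset (path assumed) and mark the last attribute as the class,
        // mirroring the ARFF convention described above.
        Instances data = new DataSource("weather.nominal.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        // Train Weka's C4.5 implementation, the J48 decision tree.
        J48 tree = new J48();
        tree.buildClassifier(data);

        // Estimate performance with 10-fold cross-validation.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}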
Java-ML for machine learning

The Java machine learning library, or Java-ML, is a collection of machine learning algorithms with a common interface for algorithms of the same type. It only features a Java API and is therefore primarily aimed at software engineers and programmers. Java-ML contains algorithms for data preprocessing, feature selection, classification, and clustering. In addition, it features several Weka bridges to access Weka's algorithms directly through the Java-ML API. It can be downloaded from http://java-ml.sourceforge.net; at the time of writing, the latest release was from 2012. Java-ML is also a general-purpose machine learning library. Compared to Weka, it offers more consistent interfaces and implementations of recent algorithms that are not present in other packages, such as an extensive set of state-of-the-art similarity measures and feature-selection techniques, for example, dynamic time warping, random forest attribute evaluation, and so on. Java-ML is also available under the GNU GPL license. Java-ML supports any type of file as long as it contains one data sample per line and the features are separated by a symbol such as a comma, semicolon, or tab.

The library is organized around the following top-level packages:

net.sf.javaml.classification: These are classification algorithms, including naive Bayes, random forests, bagging, self-organizing maps, k-nearest neighbors, and so on.
net.sf.javaml.clustering: These are clustering algorithms such as k-means, self-organizing maps, spatial clustering, Cobweb, AQBC, and others.
net.sf.javaml.core: These are classes representing instances and datasets.
net.sf.javaml.distance: These are algorithms that measure instance distance and similarity, for example, Chebyshev distance, cosine distance/similarity, Euclidean distance, Jaccard distance/similarity, Mahalanobis distance, Manhattan distance, Minkowski distance, Pearson correlation coefficient, Spearman's footrule distance, dynamic time warping (DTW), and so on.
net.sf.javaml.featureselection: These are algorithms for feature evaluation, scoring, selection, and ranking, for instance, gain ratio, ReliefF, Kullback-Leibler divergence, symmetrical uncertainty, and so on.
net.sf.javaml.filter: These are methods for manipulating instances by filtering, removing attributes, setting classes or attribute values, and so on.
net.sf.javaml.matrix: This implements in-memory and file-based arrays.
net.sf.javaml.sampling: This implements sampling algorithms to select a subset of a dataset.
net.sf.javaml.tools: These are utility methods for datasets, instance manipulation, serialization, the Weka API interface, and so on.
net.sf.javaml.utils: These are utility methods for algorithms, for example, statistics, math methods, contingency tables, and others.

Apache Mahout

The Apache Mahout project aims to build a scalable machine learning library. It is built atop scalable, distributed architectures, such as Hadoop, using the MapReduce paradigm, an approach for processing and generating large datasets with a parallel, distributed algorithm using a cluster of servers. Mahout features a console interface and a Java API for scalable algorithms for clustering, classification, and collaborative filtering. It is able to solve three business problems: item recommendation, for example, recommending items such as "people who liked this movie also liked..."; clustering, for example, grouping text documents into sets of topically related documents; and classification, for example, learning which topic to assign to an unlabeled document. Mahout is distributed under a commercially friendly Apache License, which means that you can use it as long as you keep the Apache license included and display it in your program's copyright notice.

Mahout features the following libraries:

org.apache.mahout.cf.taste: These are collaborative filtering algorithms based on user-based and item-based collaborative filtering and matrix factorization with ALS.
org.apache.mahout.classifier: These are in-memory and distributed implementations, including logistic regression, naive Bayes, random forest, hidden Markov models (HMM), and multilayer perceptron.
org.apache.mahout.clustering: These are clustering algorithms such as canopy clustering, k-means, fuzzy k-means, streaming k-means, and spectral clustering.
org.apache.mahout.common: These are utility methods for algorithms, including distances, MapReduce operations, iterators, and so on.
org.apache.mahout.driver: This implements a general-purpose driver to run the main methods of other classes.
org.apache.mahout.ep: This is evolutionary optimization using recorded-step mutation.
org.apache.mahout.math: These are various math utility methods and implementations in Hadoop.
org.apache.mahout.vectorizer: These are classes for data presentation, manipulation, and MapReduce jobs.
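To give a feel for the org.apache.mahout.cf.taste API, here is a minimal, hypothetical sketch of a user-based recommender. The ratings file name, the neighborhood size, and the choice of similarity measure are illustrative assumptions, not anything prescribed by the excerpt:

import java.io.File;
import java.util.List;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class TasteSketch {
    public static void main(String[] args) throws Exception {
        // ratings.csv (hypothetical): one "userID,itemID,preference" triple per line.
        DataModel model = new FileDataModel(new File("ratings.csv"));
        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
        UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);
        GenericUserBasedRecommender recommender =
                new GenericUserBasedRecommender(model, neighborhood, similarity);

        // Top three item recommendations for user 42.
        List<RecommendedItem> items = recommender.recommend(42, 3);
        for (RecommendedItem item : items) {
            System.out.println(item.getItemID() + " -> " + item.getValue());
        }
    }
}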
Apache Spark

Apache Spark, or simply Spark, is a platform for large-scale data processing built atop Hadoop but, in contrast to Mahout, not tied to the MapReduce paradigm. Instead, it uses in-memory caches to extract a working set of data, process it, and repeat the query. This is reported to be up to ten times as fast as a Mahout implementation that works directly with disk-stored data. It can be grabbed from https://spark.apache.org.

There are many modules built atop Spark, for instance, GraphX for graph processing, Spark Streaming for processing real-time data streams, and MLlib, a machine learning library featuring classification, regression, collaborative filtering, clustering, dimensionality reduction, and optimization. Spark's MLlib can use a Hadoop-based data source, for example, the Hadoop Distributed File System (HDFS) or HBase, as well as local files.

The supported data types include the following:

Local vector: This is stored on a single machine. A dense vector is presented as an array of double-typed values, for example, (2.0, 0.0, 1.0, 0.0), while a sparse vector is presented by the size of the vector, an array of indices, and an array of values, for example, [4, (0, 2), (2.0, 1.0)].
Labeled point: This is used for supervised learning algorithms and consists of a local vector labeled with a double-typed class value. The label can be a class index, a binary outcome, or a list of multiple class indices (multiclass classification). For example, a labeled dense vector is presented as [1.0, (2.0, 0.0, 1.0, 0.0)].
Local matrix: This stores a dense matrix on a single machine. It is defined by the matrix dimensions and a single double array arranged in column-major order.
Distributed matrix: This operates on data stored in Spark's Resilient Distributed Dataset (RDD), which represents a collection of elements that can be operated on in parallel. There are three presentations: row matrix, where each row is a local vector that can be stored on a single machine and the row indices are meaningless; indexed row matrix, which is similar to a row matrix, but the row indices are meaningful, that is, rows can be identified and joins can be executed; and coordinate matrix, which is used when a row cannot be stored on a single machine and the matrix is very sparse.
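To make the first two data types concrete, here is a minimal Java sketch constructing the exact vectors from the examples above; no Spark cluster is needed, since these are local data structures:

import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.regression.LabeledPoint;

public class MLlibDataTypes {
    public static void main(String[] args) {
        // Dense vector (2.0, 0.0, 1.0, 0.0): every component stored explicitly.
        Vector dense = Vectors.dense(2.0, 0.0, 1.0, 0.0);

        // The same vector in sparse form: size 4, nonzero indices {0, 2},
        // and the corresponding values {2.0, 1.0}.
        Vector sparse = Vectors.sparse(4, new int[]{0, 2}, new double[]{2.0, 1.0});

        // A labeled point pairs a class label (here 1.0) with a feature vector.
        LabeledPoint point = new LabeledPoint(1.0, dense);

        System.out.println("dense:   " + dense);
        System.out.println("sparse:  " + sparse);
        System.out.println("labeled: " + point);
    }
}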
Spark's MLlib API provides interfaces to various learning algorithms and utilities, as outlined in the following list:

org.apache.spark.mllib.classification: These are binary and multiclass classification algorithms, including linear SVMs, logistic regression, decision trees, and naive Bayes.
org.apache.spark.mllib.clustering: This is k-means clustering.
org.apache.spark.mllib.linalg: These are data presentations, including dense vectors, sparse vectors, and matrices.
org.apache.spark.mllib.optimization: These are the various optimization algorithms used as low-level primitives in MLlib, including gradient descent, stochastic gradient descent, update schemes for distributed SGD, and limited-memory BFGS.
org.apache.spark.mllib.recommendation: This is model-based collaborative filtering implemented with alternating least squares matrix factorization.
org.apache.spark.mllib.regression: These are regression learning algorithms, such as linear least squares, decision trees, Lasso, and ridge regression.
org.apache.spark.mllib.stat: These are statistical functions for samples in sparse or dense vector format to compute the mean, variance, minimum, maximum, counts, and nonzero counts.
org.apache.spark.mllib.tree: This implements classification and regression decision tree learning algorithms.
org.apache.spark.mllib.util: These are a collection of methods to load, save, preprocess, generate, and validate data.

Deeplearning4j

Deeplearning4j, or DL4J, is a deep learning library written in Java. It features a distributed as well as a single-machine deep learning framework that includes and supports various neural network structures such as feedforward neural networks, RBMs, convolutional neural nets, deep belief networks, autoencoders, and others. DL4J can solve distinct problems, such as identifying faces or voices and detecting spam or e-commerce fraud. Deeplearning4j is distributed under the Apache 2.0 license and can be downloaded from http://deeplearning4j.org.

The library is organized as follows:

org.deeplearning4j.base: These are loading classes.
org.deeplearning4j.berkeley: These are math utility methods.
org.deeplearning4j.clustering: This is the implementation of k-means clustering.
org.deeplearning4j.datasets: This is dataset manipulation, including import, creation, iterating, and so on.
org.deeplearning4j.distributions: These are utility methods for distributions.
org.deeplearning4j.eval: These are evaluation classes, including the confusion matrix.
org.deeplearning4j.exceptions: This implements exception handlers.
org.deeplearning4j.models: These are supervised learning algorithms, including deep belief network, stacked autoencoder, stacked denoising autoencoder, and RBM.
org.deeplearning4j.nn: These are the implementations of components and algorithms based on neural networks, such as neural network, multilayer network, convolutional multilayer network, and so on.
org.deeplearning4j.optimize: These are neural net optimization algorithms, including back propagation, multilayer optimization, output layer optimization, and so on.
org.deeplearning4j.plot: These are various methods for rendering data.
org.deeplearning4j.rng: This is a random data generator.
org.deeplearning4j.util: These are helper and utility methods.

MALLET

Machine Learning for Language Toolkit (MALLET) is a large library of natural language processing algorithms and utilities. It can be used in a variety of tasks such as document classification, document clustering, information extraction, and topic modeling. It features a command-line interface as well as a Java API for several algorithms such as naive Bayes, HMM, latent Dirichlet topic models, logistic regression, and conditional random fields. MALLET is available under the Common Public License 1.0, which means that you can even use it in commercial applications. It can be downloaded from http://mallet.cs.umass.edu.

A MALLET instance is represented by name, label, data, and source. There are two methods to import data into the MALLET format, as shown in the following list:

Instance per file: Each file, that is, document, corresponds to an instance, and MALLET accepts the directory name for the input.
Instance per line: Each line corresponds to an instance, where the following format is assumed: instance_name label token. The data will be a feature vector, consisting of the distinct words that appear as tokens and their occurrence counts.
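For illustration, a hypothetical input file in the instance-per-line format might look like the following; the names, labels, and text are invented:

doc_001 sports The home team won the match in extra time
doc_002 politics The minister answered questions in parliament
doc_003 sports The striker scored twice in the second half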
The library comprises the following packages:

cc.mallet.classify: These are algorithms for training and classifying instances, including AdaBoost, bagging, C4.5, as well as other decision tree models, multivariate logistic regression, naive Bayes, and Winnow2.
cc.mallet.cluster: These are unsupervised clustering algorithms, including greedy agglomerative, hill climbing, k-best, and k-means clustering.
cc.mallet.extract: This implements tokenizers, document extractors, document viewers, cleaners, and so on.
cc.mallet.fst: This implements sequence models, including conditional random fields, HMM, maximum entropy Markov models, and corresponding algorithms and evaluators.
cc.mallet.grmm: This implements graphical models and factor graphs, such as inference algorithms, learning, and testing, for example, loopy belief propagation, Gibbs sampling, and so on.
cc.mallet.optimize: These are optimization algorithms for finding the maximum of a function, such as gradient ascent, limited-memory BFGS, stochastic meta ascent, and so on.
cc.mallet.pipe: These are methods as pipelines to process data into MALLET instances.
cc.mallet.topics: These are topic modeling algorithms, such as latent Dirichlet allocation, four-level pachinko allocation, hierarchical PAM, DMRT, and so on.
cc.mallet.types: This implements fundamental data types such as dataset, feature vector, instance, and label.
cc.mallet.util: These are miscellaneous utility functions such as command-line processing, search, math, test, and so on.

To design, build, and deploy your own machine learning applications by leveraging key Java machine learning libraries, check out the book Machine Learning in Java, published by Packt Publishing.

Read next:
5 JavaScript machine learning libraries you need to know
A non programmer’s guide to learning Machine learning
Why use JavaScript for machine learning?


A chatbot toolkit for developers: design, develop, and manage conversational UI

Bhagyashree R
10 Sep 2018
7 min read
Although chatbots have been under development for at least a few decades, they did not become mainstream channels for customer engagement until recently. Due to serious efforts by industry giants like Apple, Google, Microsoft, Facebook, IBM, and Amazon, and their subsequent investments in developing toolkits, chatbots and conversational interfaces have become a serious contender to other customer contact channels. Since then, chatbots have been applied in various conversational scenarios within sectors such as retail, banking and finance, government, health, legal, and many more.

This tutorial is an excerpt from a book written by Srini Janarthanam titled Hands-On Chatbots and Conversational UI Development. The book is organized as eight chatbot projects that introduce the ecosystem of tools, techniques, concepts, and even gadgets relating to conversational interfaces.

Over the last few years, an ecosystem of tools and services has grown around the idea of conversational interfaces. There are a number of tools that we can plug and play to design, develop, and manage chatbots.

Mockup tools

Mockups can be used to show clients how a chatbot would look and behave. These are tools that you may want to consider using during conversation design, after coming up with sample conversations between the user and the bot. Mockup tools allow you to visualize the conversation between the user and the bot and showcase the dynamics of conversational turn-taking. Some of these tools allow you to export the mockup design and make videos. BotSociety.io and BotMock.com are some of the popular mockup tools.

Channels in Chatbots

Channels refer to places where users can interact with the chatbot. There are several deployment channels over which your bots can be exposed to users. These include:

- Messaging services such as Facebook Messenger, Skype, Kik, Telegram, WeChat, and Line
- Office and team chat services such as Slack, Microsoft Teams, and many more
- Traditional channels such as web chat, SMS, and voice calls
- Smart speakers such as Amazon Echo and Google Home

Choose the channel based on your users and the requirements of the project. For instance, if you are building a chatbot targeting consumers, Facebook Messenger can be the best channel because of the growing number of users who already use the service to keep in touch with friends and family. Adding your chatbot to their contact list may be easier than getting them to download your app. If the user needs to interact with the bot using voice in a home or office environment, smart speaker channels can be an ideal choice. And finally, there are tools that can connect chatbots to many channels simultaneously (for example, Dialogflow integration, MS Bot Service, and Smooch.io).

Chatbot development tools

There are many tools that you can use to build chatbots without having to write a single line of code: Chatfuel, ManyChat, Dialogflow, and so on. Chatfuel allows designers to create the conversational flow using visual elements. With ManyChat, you can build the flow using a visual map called the FlowBuilder. Conversational elements such as bot utterances and user response buttons can be configured using drag-and-drop UI elements. Dialogflow can be used to build chatbots that require advanced natural language understanding to interact with users.

On the other hand, there are scripting languages such as Artificial Intelligence Markup Language (AIML), ChatScript, and RiveScript that can be used to build chatbots. These scripts contain the conversational content and flow, which then needs to be fed into an interpreter program or a rules engine to bring the chatbot to life. The interpreter decides how to progress the conversation by matching user utterances to templates in the scripts, as shown in the sketch below. While it is straightforward to build conversational chatbots using this approach, it becomes difficult to build transactional chatbots without generating explicit semantic representations of user utterances. PandoraBots is a popular web-based platform for building AIML chatbots.
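The template-matching idea at the heart of these interpreters can be sketched in a few lines of Java. The patterns and replies below are invented for illustration; this is not AIML itself, and real engines are far more capable:

import java.util.LinkedHashMap;
import java.util.Map;

public class TinyRuleBot {
    // Templates are checked in insertion order; the catch-all "*" comes last.
    private final Map<String, String> templates = new LinkedHashMap<>();

    public TinyRuleBot() {
        templates.put("HELLO*", "Hi there! How can I help?");
        templates.put("*WEATHER*", "I cannot check the weather yet.");
        templates.put("*", "Sorry, I did not understand that.");
    }

    public String reply(String utterance) {
        String normalized = utterance.trim().toUpperCase();
        for (Map.Entry<String, String> e : templates.entrySet()) {
            // Translate the wildcard pattern into a regular expression.
            String regex = e.getKey().replace("*", ".*");
            if (normalized.matches(regex)) {
                return e.getValue();
            }
        }
        return "";
    }

    public static void main(String[] args) {
        TinyRuleBot bot = new TinyRuleBot();
        System.out.println(bot.reply("hello bot"));
        System.out.println(bot.reply("what is the weather like?"));
    }
}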
Alternatively, there are SDK libraries that one can use to build chatbots: MS Bot Builder, BotKit, and BotFuel, among others, provide SDKs in one or more programming languages to assist developers in building the core conversational management module. The ability to code the conversational manager gives developers the flexibility to mold the conversation and integrate the bot with backend tasks better than no-code and scripting platforms. Once built, the conversation manager can then be plugged into other services such as natural language understanding to understand user utterances.

Analytics in Chatbots

Like other digital solutions, chatbots can benefit from collecting and analyzing their usage statistics. While you can build a bespoke analytics platform from scratch, you can also use off-the-shelf toolkits that are widely available now. Many off-the-shelf analytics toolkits can be plugged into a chatbot to log and examine incoming and outgoing messages. These tools tell chatbot builders and managers the kind of conversations that actually transpire between users and the chatbot. The data will give useful information such as the conversational tasks that are popular, places where the conversational experience breaks down, utterances that the bot did not understand, and the requests which the chatbot still needs to scale up to. Dashbot.io, BotAnalytics, and Google's Chatbase are a few analytics toolkits that you can use to analyze your chatbot's performance.

Natural language understanding

Chatbots can be built without having to understand utterances from the user. However, adding natural language understanding capability is not very difficult, and it is one of the hallmark features that sets chatbots apart from their digital counterparts such as websites and apps with visual elements. There are many natural language understanding modules available as cloud services. Major IT players like Google, Microsoft, Facebook, and IBM have created tools that you can plug into your chatbot. Google's Dialogflow, Microsoft LUIS, IBM Watson, SoundHound, and Facebook's Wit.ai are some of the NLU tools that you can try.

Directory services

One of the challenges of building a bot is getting users to discover and use it. Chatbots are not as popular as websites and mobile apps, so a potential user may not know where to look to find the bot. Once your chatbot is deployed, you need to help users find it. There are directories that list bots in various categories. Chatbots.org is one of the oldest directory services, listing chatbots and virtual assistants since 2008. Other popular ones are Botlist.co, BotPages, BotFinder, and ChatBottle. These directories categorize bots in terms of purpose, sector, languages supported, countries, and so on. In addition to these, channels such as Facebook and Telegram have their own directories for the bots hosted on their channel.
In the case of Facebook, you can help users find your Messenger bot using their Discover service.

Monetization

Chatbots are built for many purposes: to create awareness, to support customers after sales, to provide paid services, and many more. In addition to all these, chatbots with interesting content can engage users for a long time and can be used to make some money through targeted, personalized advertising. Services such as CashBot.ai and AddyBot.com can integrate with your chatbot to send targeted advertisements and recommendations to users; when users engage, your chatbot makes money.

In this article, we saw tools that can help you build a chatbot, collect and analyze its usage statistics, add features like natural language understanding, and more. This is not an exhaustive list of tool types, nor are the services listed under each type exhaustive. These tools are evolving over time as chatbots find their niche in the market. This list gives you an idea of how multidimensional the conversational UI ecosystem is and helps you explore the space and feed your creative mind.

If you found this post useful, do check out the book, Hands-On Chatbots and Conversational UI Development, which will help you explore the world of conversational user interfaces.

Read next:
How to build a chatbot with Microsoft Bot framework
Facebook’s Wit.ai: Why we need yet another chatbot development framework?
How to build a basic server side chatbot using Go


Is the machine learning process similar to how humans learn?

Fatema Patrawala
09 Sep 2018
12 min read
A formal definition of machine learning proposed by computer scientist Tom M. Mitchell states that a machine learns whenever it is able to utilize an experience such that its performance improves on similar experiences in the future. Although this definition is intuitive, it completely ignores the process of exactly how experience can be translated into future action - and of course, learning is always easier said than done! While human brains are naturally capable of learning from birth, the conditions necessary for computers to learn must be made explicit. For this reason, although it is not strictly necessary to understand the theoretical basis of learning, this foundation helps to understand, distinguish, and implement machine learning algorithms. This article is taken from the book Machine Learning with R - Second Edition, written by Brett Lantz.

Regardless of whether the learner is a human or a machine, the basic learning process is similar. It can be divided into four interrelated components:

- Data storage utilizes observation, memory, and recall to provide a factual basis for further reasoning.
- Abstraction involves the translation of stored data into broader representations and concepts.
- Generalization uses abstracted data to create knowledge and inferences that drive action in new contexts.
- Evaluation provides a feedback mechanism to measure the utility of learned knowledge and inform potential improvements.

[Figure: the four steps in the learning process]

Keep in mind that although the learning process has been conceptualized as four distinct components, they are merely organized this way for illustrative purposes. In reality, the entire learning process is inextricably linked. In human beings, the process occurs subconsciously. We recollect, deduce, induct, and intuit within the confines of our mind's eye, and because this process is hidden, any differences from person to person are attributed to a vague notion of subjectivity. In contrast, with computers these processes are explicit, and because the entire process is transparent, the learned knowledge can be examined, transferred, and utilized for future action.

Data storage for advanced reasoning

All learning must begin with data. Humans and computers alike utilize data storage as a foundation for more advanced reasoning. In a human being, this consists of a brain that uses electrochemical signals in a network of biological cells to store and process observations for short- and long-term future recall. Computers have similar capabilities of short- and long-term recall using hard disk drives, flash memory, and random access memory (RAM) in combination with a central processing unit (CPU). It may seem obvious to say so, but the ability to store and retrieve data alone is not sufficient for learning. Without a higher level of understanding, knowledge is limited exclusively to recall, meaning exclusively what is seen before and nothing else. The data is merely ones and zeros on a disk. They are stored memories with no broader meaning. To better understand the nuances of this idea, it may help to think about the last time you studied for a difficult test, perhaps for a university final exam or a career certification. Did you wish for an eidetic (photographic) memory? If so, you may be disappointed to learn that perfect recall is unlikely to be of much assistance.
Even if you could memorize material perfectly, your rote learning is of no use unless you know in advance the exact questions and answers that will appear in the exam. Otherwise, you would be stuck in an attempt to memorize answers to every question that could conceivably be asked. Obviously, this is an unsustainable strategy. Instead, a better approach is to spend time selectively, memorizing a small set of representative ideas while developing strategies on how the ideas relate and how to use the stored information. In this way, large ideas can be understood without needing to memorize them by rote.

Abstraction of stored data

This work of assigning meaning to stored data occurs during the abstraction process, in which raw data comes to have a more abstract meaning. This type of connection, say between an object and its representation, is exemplified by the famous René Magritte painting The Treachery of Images:

[Image: The Treachery of Images. Source: http://collections.lacma.org/node/239578]

The painting depicts a tobacco pipe with the caption Ceci n'est pas une pipe ("this is not a pipe"). The point Magritte was illustrating is that a representation of a pipe is not truly a pipe. Yet, in spite of the fact that the pipe is not real, anybody viewing the painting easily recognizes it as a pipe. This suggests that the observer's mind is able to connect the picture of a pipe to the idea of a pipe, to a memory of a physical pipe that could be held in the hand. Abstracted connections like these are the basis of knowledge representation, the formation of logical structures that assist in turning raw sensory information into meaningful insight.

During a machine's process of knowledge representation, the computer summarizes stored raw data using a model, an explicit description of the patterns within the data. Just like Magritte's pipe, the model representation takes on a life beyond the raw data. It represents an idea greater than the sum of its parts. There are many different types of models. You may already be familiar with some. Examples include:

- Mathematical equations
- Relational diagrams such as trees and graphs
- Logical if/else rules
- Groupings of data known as clusters

The choice of model is typically not left up to the machine. Instead, the learning task and the data on hand inform model selection. The process of fitting a model to a dataset is known as training. When the model has been trained, the data is transformed into an abstract form that summarizes the original information.

It is important to note that a learned model does not itself provide new data, yet it does result in new knowledge. How can this be? The answer is that imposing an assumed structure on the underlying data gives insight into the unseen by supposing a concept about how data elements are related. Take, for instance, the discovery of gravity. By fitting equations to observational data, Sir Isaac Newton inferred the concept of gravity. But the force we now know as gravity was always present. It simply wasn't recognized until Newton described it as an abstract concept that relates some data to others - specifically, by becoming the g term in a model that explains observations of falling objects.

Most models may not result in the development of theories that shake up scientific thought for centuries. Still, your model might result in the discovery of previously unseen relationships among data.
A model trained on genomic data might find several genes that, when combined, are responsible for the onset of diabetes; banks might discover a seemingly innocuous type of transaction that systematically appears prior to fraudulent activity; and psychologists might identify a combination of personality characteristics indicating a new disorder. These underlying patterns were always present, but by simply presenting information in a different format, a new idea is conceptualized.

Generalization for future action

The learning process is not complete until the learner is able to use its abstracted knowledge for future action. However, among the countless underlying patterns that might be identified during the abstraction process and the myriad ways to model these patterns, some will be more useful than others. Unless the production of abstractions is limited, the learner will be unable to proceed. It would be stuck where it started - with a large pool of information, but no actionable insight.

The term generalization describes the process of turning abstracted knowledge into a form that can be utilized for future action, on tasks that are similar, but not identical, to those it has seen before. Generalization is a somewhat vague process that is a bit difficult to describe. Traditionally, it has been imagined as a search through the entire set of models (that is, theories or inferences) that could be abstracted during training. In other words, if you can imagine a hypothetical set containing every possible theory that could be established from the data, generalization involves the reduction of this set into a manageable number of important findings.

In generalization, the learner is tasked with limiting the patterns it discovers to only those that will be most relevant to its future tasks. Generally, it is not feasible to reduce the number of patterns by examining them one by one and ranking them by future utility. Instead, machine learning algorithms generally employ shortcuts that reduce the search space more quickly. Toward this end, the algorithm will employ heuristics, which are educated guesses about where to find the most useful inferences.

Heuristics are routinely used by human beings to quickly generalize experience to new scenarios. If you have ever utilized your gut instinct to make a snap decision prior to fully evaluating your circumstances, you were intuitively using mental heuristics. The incredible human ability to make quick decisions often relies not on computer-like logic, but rather on heuristics guided by emotions. Sometimes, this can result in illogical conclusions. For example, more people express fear of airline travel than of automobile travel, despite automobiles being statistically more dangerous. This can be explained by the availability heuristic, which is the tendency of people to estimate the likelihood of an event by how easily its examples can be recalled. Accidents involving air travel are highly publicized. Being traumatic events, they are likely to be recalled very easily, whereas car accidents barely warrant a mention in the newspaper.

The folly of misapplied heuristics is not limited to human beings. The heuristics employed by machine learning algorithms also sometimes result in erroneous conclusions. The algorithm is said to have a bias if the conclusions are systematically erroneous, or wrong in a predictable manner.
For example, suppose that a machine learning algorithm learned to identify faces by finding two dark circles representing eyes, positioned above a straight line indicating a mouth. The algorithm might then have trouble with, or be biased against, faces that do not conform to its model. Faces with glasses, turned at an angle, looking sideways, or with various skin tones might not be detected by the algorithm. Similarly, it could be biased toward faces with certain skin tones, face shapes, or other characteristics that do not conform to its understanding of the world.

In modern usage, the word bias has come to carry quite negative connotations. Various forms of media frequently claim to be free from bias and to report the facts objectively, untainted by emotion. Still, consider for a moment the possibility that a little bias might be useful. Without a bit of arbitrariness, might it not be a bit difficult to decide among several competing choices, each with distinct strengths and weaknesses? Indeed, some recent studies in the field of psychology have suggested that individuals born with damage to portions of the brain responsible for emotion are ineffectual in decision making, and might spend hours debating simple decisions such as what color shirt to wear or where to eat lunch. Paradoxically, bias is what blinds us to some information while also allowing us to utilize other information for action. It is how machine learning algorithms choose among the countless ways to understand a set of data.

Evaluate the learner's success

Bias is a necessary evil associated with the abstraction and generalization processes inherent in any learning task. In order to drive action in the face of limitless possibility, each learner must be biased in a particular way. Consequently, each learner has its weaknesses, and there is no single learning algorithm to rule them all. Therefore, the final step in the generalization process is to evaluate or measure the learner's success in spite of its biases, and to use this information to inform additional training if needed.

Generally, evaluation occurs after a model has been trained on an initial training dataset. Then, the model is evaluated on a new test dataset in order to judge how well its characterization of the training data generalizes to new, unseen data. It's worth noting that it is exceedingly rare for a model to perfectly generalize to every unforeseen case. In part, models fail to perfectly generalize due to the problem of noise, a term that describes unexplained or unexplainable variations in data. Noisy data is caused by seemingly random events, such as:

- Measurement error due to imprecise sensors that sometimes add or subtract a bit from the readings
- Issues with human subjects, such as survey respondents reporting random answers to survey questions in order to finish more quickly
- Data quality problems, including missing, null, truncated, incorrectly coded, or corrupted values
- Phenomena that are so complex or so little understood that they impact the data in ways that appear to be unsystematic

Trying to model noise is the basis of a problem called overfitting. Because most noisy data is unexplainable by definition, attempting to explain the noise will result in erroneous conclusions that do not generalize well to new cases. Efforts to explain the noise will also typically result in more complex models that miss the true pattern the learner is trying to identify. A model that seems to perform well during training but does poorly during evaluation is said to be overfitted to the training dataset, as it does not generalize well to the test dataset. Solutions to the problem of overfitting are specific to particular machine learning approaches. For now, the important point is to be aware of the issue. How well models are able to handle noisy data is an important source of distinction among them.
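To make the train/test idea concrete, here is a minimal, hypothetical Java sketch using the Weka library; the dataset path, the 2:1 split, and the J48 classifier are illustrative assumptions rather than anything prescribed by the book, which works in R:

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class HoldoutSketch {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("dataset.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);
        data.randomize(new Random(1));

        // Hold out one third of the data as an unseen test set.
        int trainSize = (int) Math.round(data.numInstances() * 0.67);
        Instances train = new Instances(data, 0, trainSize);
        Instances test = new Instances(data, trainSize, data.numInstances() - trainSize);

        J48 model = new J48();
        model.buildClassifier(train);

        Evaluation trainEval = new Evaluation(train);
        trainEval.evaluateModel(model, train);
        Evaluation testEval = new Evaluation(train);
        testEval.evaluateModel(model, test);

        // A large gap between the two accuracies is the signature of overfitting.
        System.out.printf("train accuracy: %.1f%%, test accuracy: %.1f%%%n",
                trainEval.pctCorrect(), testEval.pctCorrect());
    }
}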
We saw that the machine learning process is similar to how humans learn in their daily lives. To discover how to build machine learning algorithms, prepare data, and dig deep into data prediction techniques with R, check out the book Machine Learning with R - Second Edition.

Read next:
A Machine learning roadmap for Web Developers
Why TensorFlow always tops machine learning and artificial intelligence tool surveys
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018


Messaging app Telegram's updated Privacy Policy is an open challenge

Amarabha Banerjee
08 Sep 2018
7 min read
Social media companies are facing a lot of heat over privacy issues at present. One of them is Facebook: the Cambridge Analytica scandal even prompted a Senate hearing for Mark Zuckerberg. At the other end of this spectrum is a messaging app known as Telegram, registered in London, United Kingdom, and founded by the Russian entrepreneur Pavel Durov. Telegram has been in the news for the exact opposite situation. It's often touted as one of the most secure and secretive messaging apps. Its end-to-end encryption ensures that security agencies across the world have a tough time getting access to any suspicious piece of information. For this reason, Russia banned the Telegram app in April 2018.

Telegram recently updated their privacy policies. These updates have further ensured that Telegram will retain the title of the most secure messaging application on the planet. It's imperative for any messaging app to get access to our data, but how they choose to use it makes you either vulnerable or secure. In their latest update, Telegram state that they process personal data on the grounds that such processing caters to the following two goals:

- Providing effective and innovative Services to our users
- To detect, prevent or otherwise address fraud or security issues in respect of their provision of Services

The caveat for the second point is that the security interests shall not override the fundamental rights and freedoms that require the protection of personal data. This clause is an excellent example of how applications can prove to be a torchbearer for human rights and basic human privacy amidst glaring loopholes. Telegram have listed the kinds of user data accessed by the app. They are as follows:

Basic Account Data

Telegram stores basic account user data that includes the mobile number, profile name, profile picture, and "about" information, which are needed to create a Telegram account. The most interesting part of this is that Telegram allows you to keep only your username public (if you choose to). The people who have you in their contact list will see you as you want them to - for example, you might be a John Doe in public, but your mom will still see you as 'Dear Son' in her contacts. Telegram doesn't require your real name, gender, age, or even your screen name to be your real name.

E-mail Address

When you enable 2-step verification for your account or store documents using the Telegram Passport feature, you can opt to set up a password recovery email. This address will only be used to send you a password recovery code if you forget your password. Telegram are particular about not sending any unsolicited marketing emails to you.

Personal Messages

Cloud Chats: Telegram stores messages, photos, videos, and documents from your cloud chats on their servers so that you can access your data from any of your devices anytime, without having to rely on third-party backups. All data is stored heavily encrypted, and the encryption keys are in each case stored in several other data centers in different jurisdictions. This way, local engineers or physical intruders cannot get access to user data.

Secret Chats: Telegram has a feature called secret chats that uses end-to-end encryption. This means that all data is encrypted with a key that only the sender and the recipients know. There is no way for Telegram or anybody else without direct access to your device to learn what content is being sent in those messages. Telegram does not store secret chats on their servers. They also do not keep any logs for messages in secret chats, so after a short period of time there is no way of determining who or when you messaged via secret chats. Secret chats are not available in the cloud - you can only access those messages from the device they were sent to or from.

Media in Secret Chats: When you send photos, videos, or files via secret chats, each item is encrypted with a separate key, not known to the server, before being uploaded. This key and the file's location are then encrypted again, this time with the secret chat's key, and sent to your recipient, who can then download and decipher the file. This means that the file is technically on one of Telegram's servers, but it looks like a piece of random, indecipherable garbage to everyone except you and the recipient. These random data packets are also periodically purged from the storage disks.
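The "separate key per file, wrapped with the chat key" pattern described above is classic envelope encryption. Telegram's actual MTProto scheme is more involved, but a minimal, hypothetical Java sketch of the general idea looks like this:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

public class EnvelopeSketch {
    private static final SecureRandom RNG = new SecureRandom();

    // AES-GCM encryption of an arbitrary payload under the given key.
    static byte[] encrypt(SecretKey key, byte[] plaintext, byte[] iv) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return cipher.doFinal(plaintext);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(256);
        SecretKey fileKey = gen.generateKey(); // one-off key for this media file
        SecretKey chatKey = gen.generateKey(); // stands in for the secret chat's shared key

        byte[] mediaIv = new byte[12];
        byte[] keyIv = new byte[12];
        RNG.nextBytes(mediaIv);
        RNG.nextBytes(keyIv);

        byte[] media = "raw photo bytes".getBytes("UTF-8");

        // The server stores only this ciphertext: random-looking without fileKey.
        byte[] encryptedMedia = encrypt(fileKey, media, mediaIv);

        // The file key itself travels to the recipient wrapped under the chat key.
        byte[] wrappedKey = encrypt(chatKey, fileKey.getEncoded(), keyIv);

        System.out.println("stored ciphertext: " + encryptedMedia.length + " bytes");
        System.out.println("wrapped file key:  " + wrappedKey.length + " bytes");
    }
}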
Public Chats: In addition to private messages, Telegram also supports public channels and public groups. All public chats are cloud chats. Like everything else on Telegram, the data you post in public communities is encrypted, both in storage and in transit - but everything you post in public will be accessible to everyone.

Phone Number and Contacts

Telegram uses phone numbers as unique identifiers so that it is easy for you to switch from SMS and other messaging apps and retain your social graph.

Cookies

Telegram promises that the only cookies they use are those needed to operate and provide their Services on the web. They clearly state that they don't use cookies for profiling or advertising. Their cookies are small text files that allow them to provide and customize their Services and deliver an enhanced user experience. Most importantly, user permission is a must before these cookies are allowed into your browser: whether or not to use them is a choice made by the users.

So, how does Telegram remain in business?

The Telegram business model doesn't match that of a typical revenue-generating service. The founder, Pavel Durov, is also the founder of the popular Russian social networking site VK. Telegram doesn't charge for any messaging services, and it doesn't show ads yet. Some new in-app purchase features might be included in a future version. As of now, the main sources of revenue for Telegram are donations and, mainly, the earnings of Pavel Durov himself (from the social networking site VK).

What can social networks learn from Telegram?

Telegram's policies elevate privacy standards that many are asking of other social messaging apps. The clamour for stopping the exploitation of user data and the use of location details for targeted marketing and advertising campaigns is growing. Telegram shows that privacy can be achieved, if intended, in today's overexposed social media world. But there are also costs to this level of user privacy and secrecy that are sometimes not discussed enough. The ISIS members behind the 2015 Paris attacks used Telegram to spread propaganda. ISIS also used the app to recruit the perpetrators of the Christmas market attack in Berlin last year and claimed credit for the massacre. More recently, a Turkish prosecutor found that the shooter behind the New Year's Eve attack at the Reina nightclub in Istanbul used Telegram to receive directions for it from an ISIS leader in Raqqa.
While these incidents can never negate the need for a secure, less intrusive social media platform like Telegram, there should be workarounds and escape routes designed to stop extremist and terrorist activity. Telegram has assured users that all ISIS messaging channels are deleted from its network, which is a great way to start. Content moderation, proactive sentiment and pattern recognition, and content/account isolation are the next challenges for Telegram. One thing is for sure: Telegram's continual pursuit of user secrecy and user data privacy is throwing down an open challenge for others to follow suit. Whether others will oblige or not, only time will tell. To read about Telegram's updated privacy policies in detail, you can check out the official Telegram Privacy Settings.
How to stay safe while using Social Media
Time for Facebook, Twitter and other social media to take responsibility or face regulation
What RESTful APIs can do for Cloud, IoT, social media and other emerging technologies

How to secure your crypto currency

Guest Contributor
08 Sep 2018
8 min read
Managing and earning cryptocurrency is a lot of hassle, and losing it is a lot like losing yourself. While the security of this blockchain-based currency is a major concern, here is what you can do to secure your crypto fortune. With ever-fluctuating crypto rates, it's always now or never: Bitcoin once climbed to $17,900, the digital currency frenzy is always in trend, and its security is crucial. No crypto geek wants to lose their currency to malicious activity, negligence, or any other cause. Before we delve into securing our cryptocurrencies, let's discuss the structure and strategy of this crypto vault that ensures the security of a blockchain-based digital currency.
Why blockchains are secure, at least in theory
Below are the three core elements that make blockchain a robust digital technology:
Public key cryptography
Hashing
Digital signatures
Public Key Cryptography
This form of cryptography involves two distinct keys: a private key and a public key. The two keys encrypt and decrypt data asymmetrically: data encrypted with the private key can only be decrypted with the public key, and, conversely, data encrypted with the public key can only be decrypted with the private key. Various cryptographic schemes, including TLS (Transport Layer Security) and SSL (Secure Sockets Layer), have this system at their core. The strategy works by publishing your public key to the blockchain world while keeping your private key confidential, never revealing it on any platform or place.
Hashing
Also called a digest, the hash of a message is calculated from the contents of the message. A hashing algorithm takes data of arbitrary length as input and deterministically produces a hash of a fixed, predefined length. Because hashing is deterministic, the same input always produces the same output. Mathematically, it is easy to compute the hash of a message, but it is tediously difficult to recover the original message from its hash.
Digital Signatures
A digital signature is the hash of a message encrypted with a private key. Anyone with access to the corresponding public key can decrypt the signature to recover the original hash. Anyone who can read the message can also calculate its hash independently and compare that calculated hash with the decrypted one. If the two hashes match, it confirms two things: the message remained unaltered from creation to reception (any alteration would produce a different hash), and the message was signed by the entity holding the private key.
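To make these primitives concrete, here is a minimal sketch in Python. It assumes the third-party cryptography package (pip install cryptography) and uses an Ed25519 signature, which exposes a sign/verify API rather than the literal 'encrypt the hash' construction described above; the contract, though, is the same: only the private key can sign, and anyone holding the public key can verify.

```python
# A minimal sketch of hashing and signing, assuming the third-party
# 'cryptography' package is installed.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

message = b"Send 0.5 BTC to wallet X"

# Hashing is deterministic: the same input always yields the same digest.
print("SHA-256:", hashlib.sha256(message).hexdigest())

# Public-key signing: only the private key can sign; anyone with the
# public key can verify.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(message)

public_key.verify(signature, message)   # passes silently: message intact
print("Signature valid")

# Any tampering breaks verification.
try:
    public_key.verify(signature, b"Send 50 BTC to wallet Y")
except InvalidSignature:
    print("Tampered message rejected")
```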
What are crypto wallets and transactions?
Every crypto-wallet is a collection of one or more wallets. A crypto-wallet is, at its core, a private key, from which a public key, and from that a public wallet address, can be created. This makes a cryptocurrency wallet essentially a set of private keys. Wallet addresses are meant to be shared, so they are often converted into QR codes, eliminating any need for secrecy: you can show a wallet address QR code to the world without hesitation, and anyone can send cryptocurrency to that address. Spending cryptocurrency, however, requires the private key, and currency sent to a wallet is owned by whoever holds that wallet's private key. To transact in cryptocurrency, you create a transaction, which is public information. A cryptocurrency transaction is simply the collection of information a blockchain needs; the only required data is the destination wallet's address and the amount to be transferred. While anyone can create a transaction, the blockchain only permits it once multiple members of the network have confirmed it, and a transaction must be digitally signed with a private key to be treated as valid at all. In other words, you sign a transaction with your private key and submit it to the blockchain; once the network confirms the signature against your public key data, the transaction is included in the blockchain and thereby validated.
Why you should guard your private key
An attack on your private key is an attempt to steal your cryptocurrency. Using your private keys, an attacker can digitally sign transactions from your wallet address to their own. Alternatively, an attacker can destroy your private keys, ending your access to your crypto wallet.
What are some risk factors involved in owning a crypto wallet?
Before we build a security wall around our cryptocurrency, it is important to know whom we are protecting it from, and who can prove to be a threat to our crypto wallets. If you lose access to your cryptocurrency, you have lost it all: there is no ledger kept by a centralized authority, so once access is gone, it cannot be regained by any means. Since a crypto wallet is a pair of private and public keys, losing the private key means losing the wallet; in other words, you no longer own any cryptocurrency. This is the first and foremost threat. The next threat is the one we hear about most often: attackers and hackers who want access to our cryptocurrency. These malefactors may be opportunists, or they may have specific targets in mind.
Threats to your cryptocurrency
Opportunist hackers are low-profile attackers who, for example, get access to your laptop and transfer money to their own public wallet address. They do not attack or target a person specifically, but if they gain access to your cryptocurrency, they won't shy away from taking your digital cash. Dedicated attackers, on the other hand, target victims deliberately, working alone or in groups for a sole purpose: stealing cryptocurrency. Their targets include individuals, crypto traders, and even crypto exchanges. They initiate phishing campaigns and research their targets thoroughly before executing an attack.
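Continuing the sketch above (and reusing its private_key and public_key), a toy transaction shows the signing flow end to end. This is an illustration of the idea only, not any real blockchain's transaction format:

```python
# Toy illustration: a transaction is public data plus a signature made
# with the sender's private key. Reuses the keys from the previous sketch.
import json, hashlib

tx = {"to": "wallet-address-of-recipient", "amount": 0.5}
tx_bytes = json.dumps(tx, sort_keys=True).encode()

# Sign the transaction hash; the network verifies it with the public key.
tx_hash = hashlib.sha256(tx_bytes).digest()
signature = private_key.sign(tx_hash)

public_key.verify(signature, tx_hash)  # nodes accept the tx only if this passes
print("Transaction", tx, "accepted")
```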
Level 2 attackers go for a broader approach, writing malicious code that steals private keys from any system it manages to infect. Yet another kind of attacker is backed by nation states: well-coordinated, well-financed groups motivated by financial gain or political objectives. The cryptocurrency attacks by the Lazarus Group, backed by North Korea, are an example.
How to protect your crypto wallet
Regardless of the kind of threat, it is you and your private key that need to be secured. Throw away your access keys and you lose your cryptocurrency forever. Obviously, you would never do that deliberately, so here are some practical ways to secure your cryptocurrency fortune:
Go through the complete password recovery process. This means deliberately going through the process of forgetting the password and creating a multi-factor token. Take these measures while setting up a new hosted wallet, or be prepared to lose it all.
No matter how fast the tech world progresses, the basics remain the same. Keep a printed paper backup of your keys in a secure location such as a bank locker or a personal safe. Don't forget to wipe the printer's memory after printing, as printed files can be recovered and reused to hack your digital money.
Do not carry those keys with you, and do not hide them somewhere that can be lost to fire, theft, or similar damage.
If your wallet has multi-signature enabled and uses two keys to authorize transactions, make it three. With the third key controlled by a trusted party, you remain covered in the absence of the second person.
About Author
Tahha Ashraf is a Digital Content Producer at Cubix, a mobile app development company. He is a certified Hubspot inbound and content marketer. He loves talking about brands, tech, blockchain and content marketing. Along with writing for the online fraternity on a variety of topics, he is fond of creativity and writes poetry in his free time.
Cryptocurrency-based firm, Tron acquires BitTorrent
Can Cryptocurrency establish a new economic world order?
Akon is planning to create a cryptocurrency city in Senegal

Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence

Bhagyashree R
07 Sep 2018
13 min read
There has been a huge shift in the way businesses build technology in recent years, driven by the move towards cloud and microservices. Public cloud services like AWS, Microsoft Azure, and Google Cloud Platform are transforming the way companies of all sizes understand and use software. Not only do public cloud services reduce the costs associated with on-site server resources, they also make it easier to leverage cutting-edge innovations like machine learning and artificial intelligence. Cloud is giving rise to what's known as 'Machine Learning as a Service' - a trend that could prove transformative for organizations of all types and sizes. According to a report published on Research and Markets, Machine Learning as a Service is set to grow at a compound annual growth rate (CAGR) of 49% between 2017 and 2023. The main drivers of this growth include the increased application of advanced analytics in manufacturing, the high volume of structured and unstructured data, and the integration of machine learning with big data. Of course, with machine learning still a relatively new area for many businesses, demand for MLaaS is ultimately self-fulfilling: if it's there and people can see the benefits it brings, demand will only continue. But it's important not to get fazed by the hype. Plenty of money will be spent on cloud-based machine learning products that won't help anyone but the tech giants who run the public clouds. With that in mind, let's dive deeper into Machine Learning as a Service and what the biggest cloud vendors offer.
What does Machine Learning as a Service (MLaaS) mean?
Machine Learning as a Service (MLaaS) is an array of services that provides machine learning tools to users. Businesses and developers can incorporate a machine learning model into their application without having to implement it themselves. These services span data visualization, facial recognition, natural language processing, chatbots, predictive analytics, and deep learning, among others. Typically, for a given machine learning task, a user has to perform various steps: data preprocessing, feature identification, implementing the machine learning model, and training the model. MLaaS services simplify this process by exposing only a subset of the steps to the user while automatically managing the rest. Some services even provide a 1-click mode, where the user does not have to perform any of the steps mentioned earlier.
What type of businesses can benefit from Machine Learning as a Service?
Large companies
Large companies can afford to hire expert machine learning engineers and data scientists, but they still have to build and manage their own custom machine learning models, which is a time-intensive and complicated process. By leveraging MLaaS, these companies can use pre-trained machine learning models via APIs that perform specific tasks, and save time.
Small and mid-sized businesses
Big companies can invest in their own machine learning solutions because they have the resources. For small and mid-sized businesses (SMBs), however, this simply isn't the case. Fortunately, MLaaS changes all that and makes machine learning accessible to organizations with resource limitations. By using MLaaS, businesses can leverage machine learning without a huge investment in infrastructure or talent.
Whether it’s for smarter and more intelligent customer-facing apps, or improved operational intelligence and automation, this could bring huge gains for a reasonable amount of spending. What types of roles will benefit from MLaaS? Machine learning can contribute to any kind of app development provided you have data to train your app. However, adding AI features to your app is not easy. As a developer, you’ve to worry about a lot of other factors besides regular app development checklist, in order to make your app intelligent. Some of them are: Data preprocessing Model training Model evaluation Predictions Expertise in data science The development tools provided by MLaaS can simplify these tasks allowing you to easily embed machine learning in your applications. Developers can build quickly and efficiently with MLaaS offerings, because they have access to pre-built algorithms and models that would take them extensive resources to build otherwise. MLaaS can also support data scientists and analysts. While most data scientists should have the necessary skills to build and train machine learning models from scratch, it can nevertheless still be a time consuming task. MLaaS can, as already mentioned, simplify the machine learning engineering process, which means data scientists can focus on optimizations that require more thought and expertise. Top machine learning as a service (MLaaS) providers Amazon Web Services (AWS), Azure, and Google, all have MLaaS products in their cloud offerings. Let’s take a look at them. Google Cloud AI at a glance Google Cloud AI Google’s Cloud AI provides modern machine learning services. It consists of pre-trained models and a service to generate your own tailored models. The services provided are fast, scalable, and easy to use. The following are the services that Google provides at an unprecedented scale and speed to your applications: Cloud AutoML Beta It is a suite of machine learning products, with the help of which developers with limited machine learning expertise can train high-quality models specific to their business needs. It provides you a simple GUI to train, evaluate, improve, and deploy models based on your own data. Read also: AmoebaNets: Google’s new evolutionary AutoML Google Cloud Machine Learning (ML) Engine Google Cloud Machine Learning Engine is a service that offers training and prediction services to enable developers and data scientists to build superior machine learning models and deploy in production. You don’t have to worry about infrastructure and can instead focus on the model development and deployment. It offers two types of predictions: Online prediction deploys ML models with serverless, fully managed hosting that responds in real time with high availability. Batch predictions is cost-effective and provides unparalleled throughput for asynchronous applications. Read also: Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine) Google BigQuery It is a cloud data warehouse for data analytics. It uses SQL and provides Java Database Connectivity (JDBC) and Open Database Connectivity (ODBC) drivers to make integration fast and easy. It provides benefits like auto scaling and high-performance streaming to load data. You can create amazing reports and dashboards using your favorite BI tool, like Tableau, MicroStrategy, Looker etc. 
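As a hedged illustration of the BigQuery workflow, here is a minimal sketch using the google-cloud-bigquery Python client against one of Google's public datasets. It assumes a GCP project and credentials are already configured in your environment, which is not covered here:

```python
# Query BigQuery from Python; credentials and project come from the
# environment (e.g. GOOGLE_APPLICATION_CREDENTIALS).
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""

# client.query() submits the job; result() waits and returns the rows.
for row in client.query(query).result():
    print(row.name, row.total)
```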
Read also: Getting started with Google Data Studio: An intuitive tool for visualizing BigQuery Data Dialogflow Enterprise Edition Dialogflow is an end-to-end, build-once deploy-everywhere development suite for creating conversational interfaces for websites, mobile applications, popular messaging platforms, and IoT devices. Dialogflow Enterprise Edition users have access to Google Cloud Support and a service level agreement (SLA) for production deployments. Read also: Google launches the Enterprise edition of Dialogflow, its chatbot API Cloud Speech-to-Text Google Cloud Speech-to-Text allows you to convert speech to text by applying neural network models. 120 languages are supported by the API, which will help you extend your user base. It can process both real-time streaming and prerecorded audio. Read also: Google announce the largest overhaul of their Cloud Speech-to-Text Microsoft Azure AI at a glance The Azure platform consists of various AI tools and services that can help you build smart applications. It provides Cognitive Services and Conversational AI with Bot tools, which facilitate building custom models with Azure Machine Learning for any scenario. You can run AI workloads anywhere at scale using its enterprise-grade AI infrastructure The following are services provided by Azure AI to help you achieve maximum productivity and reliability: Pre-built services You need not be an expert in data science to make your systems more intelligent and engaging. The pre-built services come with high-quality RESTful intelligent APIs for the following: Vision: Make your apps identify and analyze content within images and videos. Provides capabilities such as, image classification, optical character recognition in images, face detection, person identification, and emotion identification. Speech: Integrate speech processing capabilities in your app or services such as, text-to-speech, speech-to-text, speaker recognition, and speech translation. Language: Your application or service will understand meaning of the unstructured text or the intent behind a speaker's utterances. It comes with capabilities such as, text sentiment analysis, key phrase extraction, automated and customizable text translation. Knowledge: Create knowledge rich resources that can be integrated into apps and services. It provides features such as, QnA extraction from unstructured text, knowledge base creation from collections of Q&As, and semantic matching for knowledge bases. Search: Using Search API you can find exactly what you are looking for across billions of web pages. It provides features like, ad-free, safe, location-aware web search, Bing visual search, custom search engine creation, and many more. Custom services Azure Machine Learning is a fully managed cloud service which helps you to easily prepare data, build, and train your own models: You can rapidly prototype on your desktop, then scale up on VMs or scale out using Spark clusters. You can manage model performance, identify the best model, and promote it using data-driven insight. Deploy and manage your models everywhere. Using Docker containers, you can deploy the models into production faster in the cloud, on-premises or at the edge. Promote your best performing models into production and retrain them whenever necessary. 
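To give a flavour of the pre-built Azure services above, here is a minimal sketch that calls the Text Analytics sentiment API over plain REST with Python's requests. The endpoint region, API version, and subscription key are placeholders; check the current Azure documentation for the values that apply to your own account:

```python
# Sentiment analysis via Azure Cognitive Services REST API (sketch).
import requests

endpoint = "https://westus.api.cognitive.microsoft.com"  # placeholder region
key = "YOUR_AZURE_COGNITIVE_SERVICES_KEY"                # placeholder key

documents = {"documents": [
    {"id": "1", "language": "en", "text": "The new release is fantastic."}
]}

response = requests.post(
    f"{endpoint}/text/analytics/v2.0/sentiment",
    headers={"Ocp-Apim-Subscription-Key": key},
    json=documents,
)
print(response.json())  # e.g. a sentiment score between 0 and 1 per document
```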
Read also: Microsoft supercharges its Azure AI platform with new features AWS machine learning services at a glance Machine learning services provided by AWS help developers to easily add intelligence to any application with pre-trained services. For training and inferencing, it offers a broad array of compute options with powerful GPU-based instances, compute and memory optimized instances, and even FPGAs. You will get to choose from a set of services for data analysis including data warehousing, business intelligence, batch processing, stream processing, and data workflow orchestration. The following are the services provided by AWS: AWS machine learning applications Amazon Comprehend: This is a natural language processing (NLP) service that identifies relationships and finds insights in text using machine learning. It recognizes the language of the text and understands how positive or negative it is and extracts key phrases, places, people, brands, or events. It then analyzes text using tokenization and parts of speech, and automatically organizes a collection of text files by topic. Amazon Lex: This service provides the same deep learning technologies used by Amazon Alexa to developers in helping them build sophisticated, natural language, conversational bots easily. It comes with advanced deep learning functionalities like, automatic speech recognition (ASR) and natural language understanding (NLU) to facilitate a more life like conversational interaction with the users. Amazon Polly: This text-to-speech service produces speech that sounds like human voice using advanced deep learning technologies. It provides you dozens of life like voices across a variety of languages. You can simply select the ideal voice and build speech-enabled applications that work in many different countries. Amazon Rekognition: This service can identify the objects, people, text, scenes, and activities, and any inappropriate content in an image or a video. It also provides highly accurate facial analysis and facial recognition on images and video. Read also: AWS makes Amazon Rekognition, its image recognition AI, available for Asia-Pacific developers AWS machine learning platforms Amazon SageMaker: It is a platform that solves the complexities in the machine learning process, from building to deploying a model. It is a fully-managed platform that helps developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. AWS DeepLens: It is a fully programmable video camera, which comes with tutorials, code, and pre-trained models designed to expand deep learning skills. It provides you sample projects giving you practical and hands-on experience in deep learning in less than 10 minutes. Models trained in Amazon SageMaker can be sent to AWS DeepLens with just a few clicks from the AWS Management Console. Amazon ML: This is a service that provides visualization tools and wizards that direct you to create a machine learning model without having to learn complex ML algorithms and technology. Using simple APIs it makes it easy for you to obtain predictions for your application. It is highly scalable and can generate billions of predictions daily, and serve those predictions in real-time and at high throughput Read also: Amazon Sagemaker makes machine learning on the cloud easy. Deep Learning on AWS AWS Deep Learning AMIs: This provides the infrastructure and tools to accelerate deep learning in the cloud, at any scale. 
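Before continuing with the Deep Learning AMIs, here is a hedged sketch of calling one of the pre-built services above, Amazon Comprehend, from Python with boto3. It assumes AWS credentials and region are configured in your environment:

```python
# Sentiment and entity detection with Amazon Comprehend via boto3.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

text = "I love how easy it was to set up, but support response was slow."

sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
print(sentiment["Sentiment"])        # e.g. 'MIXED'
print(sentiment["SentimentScore"])   # per-class confidence scores

entities = comprehend.detect_entities(Text=text, LanguageCode="en")
for ent in entities["Entities"]:
    print(ent["Type"], ent["Text"])
```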
Returning to the Deep Learning AMIs: to train sophisticated, custom AI models, or to experiment with new algorithms, you can quickly launch Amazon EC2 instances that come pre-installed with popular deep learning frameworks such as Apache MXNet and Gluon, TensorFlow, Microsoft Cognitive Toolkit, Caffe, Caffe2, Theano, Torch, PyTorch, Chainer, and Keras.
Apache MXNet on AWS: This is a fast and scalable training and inference framework with an easy-to-use, concise API for machine learning. It allows developers of all skill levels to get started with deep learning on the cloud, on edge devices, and in mobile apps using Gluon. You can build linear regressions, convolutional networks, and recurrent LSTMs for object detection, speech recognition, recommendation, and personalization in just a few lines of Gluon code.
TensorFlow on AWS: You can quickly and easily get started with deep learning in the cloud using TensorFlow. AWS provides a fully managed TensorFlow experience with Amazon SageMaker. You can also use the AWS Deep Learning AMIs to build a custom environment and workflow with TensorFlow and other popular frameworks such as Apache MXNet and Gluon, Caffe, Caffe2, Chainer, Torch, Keras, and Microsoft Cognitive Toolkit.
Conclusion
Machine learning and artificial intelligence can be expensive: skills and resources cost a lot. For that reason, MLaaS is going to be a hugely influential development within cloud. Yes, the ranges of services on offer from AWS, Azure, and GCP are impressive, but it's really the ease and convenience that is most remarkable. With these services it's easy to set up and run machine learning algorithms that enhance business processes and operations, customer interactions, and overall business strategy. You don't need a PhD, and you don't need to code algorithms from scratch. The MLaaS market will likely continue to grow as more companies realise the potential machine learning has for their business; whether anyone can deliver a better set of services than the established cloud providers, however, remains to be seen.
Predictive Analytics with AWS: A quick look at Amazon ML
Microsoft supercharges its Azure AI platform with new features
AmoebaNets: Google's new evolutionary AutoML

How Artificial Intelligence and Machine Learning can turbocharge a Game Developer's career

Guest Contributor
06 Sep 2018
7 min read
Gaming - whether board games or games set in the virtual realm - has been a massively popular form of entertainment since time immemorial. In the pursuit of creating more sophisticated, thrilling, and intelligent games, game developers have delved into ML and AI technologies to fuel innovation in the gaming sphere. The gaming domain is the ideal experimentation bed for evolving technologies: not only do games pose complex and challenging problems for ML and AI to solve, they also serve as a ground for creativity, a meeting point between machine learning and the art of interaction.
Machine Learning and Artificial Intelligence in Gaming
The reliance on AI in gaming is not a recent development. In fact, it dates back to 1949, when the famous cryptographer and mathematician Claude Shannon made public his musings about how a computer could be made to master chess. Then again, in 1952, a graduate student in the UK developed an AI that could play tic-tac-toe with ultimate perfection. (Image source: Medium)
However, it isn't just ML and AI that progress through experimentation on games. Game development, too, has benefited a great deal from these pioneering technologies. AI and ML have enhanced the gaming experience on many fronts, such as game design, interactivity, and the inner workings of games. These AI use cases focus on two primary things: imparting greater realism to the virtual gaming environment, and creating a more natural interface between the gaming environment and the players. As of now, the focus of game developers, data scientists, and ML researchers lies on two specific categories of games: games of perfect information and games of imperfect information. In games of perfect information, a player is aware of all aspects of the game throughout the playing session, whereas in games of imperfect information, players are oblivious to specific aspects of the game. When it comes to games of perfect information such as chess and Go, AI has shown various instances of overpowering human intelligence. Back in 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov in a six-game match. In 2016, Google's AlphaGo emerged the victor in a Go match, scoring 4-1 after defeating South Korean Go champion Lee Sedol. One of the most advanced chess AIs developed yet, Stockfish, uses a combination of advanced heuristics and brute force to compute numeric values for each move in a given chess position, and it efficiently eliminates bad moves using the alpha-beta pruning search algorithm. While the progress and contribution of AI and ML in games of perfect information is laudable, researchers are now intrigued by games of imperfect information, which offer much more challenging situations that are essentially difficult for machines to learn and master. The next evolution in gaming will be spontaneous gaming environments created with AI, in which developers build only the environment and its mechanics instead of pre-programmed, scripted plots. In such a scenario, the AI has to confront and solve spontaneous challenges in personalized scenarios generated on the spot. Games like StarCraft and StarCraft II have stirred up massive interest among game researchers and developers.
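Since Stockfish's alpha-beta pruning came up above, here is a toy, self-contained sketch of the algorithm before we turn to imperfect-information games. It searches an explicit list-based game tree rather than real chess positions, so it illustrates the pruning idea only, not engine code:

```python
# Alpha-beta pruning over a toy game tree: leaves are numeric
# evaluations, inner nodes are lists of children.
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent will avoid this branch
                break                       # prune remaining children
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Depth-2 toy tree: the maximizer picks the best of three minimizing nodes.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))  # 3 -- same answer as full minimax, fewer nodes visited
```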
In StarCraft-style games of imperfect information, players are only partially aware of the state of play, and the game is determined not just by the AI's moves and the previous state of the game, but also by the moves of other players. Since you have little knowledge of your rivals' moves, you have to take decisions on the go, and your moves have to be spontaneous. The recent win of OpenAI Five over amateur human players in Dota 2 is a good case in point. OpenAI Five is a team of five neural networks that leverages an advanced version of Proximal Policy Optimization and uses a separate LSTM to learn identifiable strategies. Its progress shows that even without human data, reinforcement learning can facilitate long-term planning, allowing further progress in games of imperfect information.
Career in Game Development with ML and AI
As ML and AI continue to penetrate the gaming industry, they are creating huge demand for talented, skilled game developers who are well versed in these technologies. Game development is now at a place where it is no longer necessary to build games using time-consuming manual techniques. ML and AI have made the task of game developers easier: by leveraging these technologies, they can design and build innovative gaming environments and test them automatically. The integration of AI and ML in the gaming domain is giving birth to new job positions like Gameplay Software Engineer (AI), Gameplay Programmer (AI), and Game Security Data Scientist, to name a few. The salaries of traditional game developers stand in stark contrast with those of developers who have AI/ML skills: while the average salary of game developers is usually around $44,000, it can scale up to and over $120,000 if one possesses AI/ML skills.
Gameplay Engineer
Average salary - $73,000 - $116,000
Gameplay engineers are usually part of the core game dev team and are entrusted with enhancing the existing gameplay systems to enrich the player experience. Companies today demand gameplay engineers who are proficient in C/C++ and well versed in AI/ML technologies.
Gameplay Programmer
Average salary - $98,000 - $149,000
Gameplay programmers work in close collaboration with the production and design teams to develop cutting-edge features in existing and upcoming gameplay systems. Programming skills are a must, and knowledge of AI/ML technologies is an added bonus.
Game Security Data Scientist
Average salary - $73,000 - $106,000
The role of a game security data scientist is to combine security and data science approaches to detect anomalies and fraudulent behavior in games. This calls for a high degree of expertise in AI, ML, and other statistical methods.
With impressive salaries and exciting job opportunities cropping up fast in the game development sphere, the industry is attracting major talent. Game developers and software developers around the world are choosing the field for its promise of rapid career growth. If you wish to bag better and more challenging roles in the domain of game development, you should upskill by mastering the fields of ML and AI. Packt Publishing is the leading UK provider of Technology eBooks, Coding eBooks, Videos and Blogs, helping IT professionals to put software to work. It offers several books and videos on game development with AI and machine learning. It's never too late to learn new disciplines and expand your knowledge base.
There are numerous online platforms that offer great artificial intelligence courses. The perk of learning from a registered online platform is that you can learn and grow at your own pace and according to your convenience. So, enroll yourself in one and spice up your career in game development!
About Author:
Abhinav Rai is the Data Analyst at UpGrad, an online education platform providing industry-oriented programs in collaboration with world-class institutes, some of which are MICA, IIIT Bangalore, BITS, and various industry leaders, including MakeMyTrip, Ola, Flipkart, etc.
Best game engines for AI game development
Implementing Unity game engine and assets for 2D game development [Tutorial]
How to use arrays, lists, and dictionaries in Unity for 3D game development

A non programmer’s guide to learning Machine learning

Natasha Mathur
05 Sep 2018
11 min read
Artificial intelligence might seem intimidating, but it isn't actually as complex as you might think. Many of the tools developed over the last decade or so have helped make artificial intelligence and machine learning more accessible to engineers with varying degrees of experience and knowledge. Today, we have reached a stage where it is accessible even to people who have barely written a line of code in their life! Pretty exciting, right? But if you're completely new to the field, it can be challenging to know how to get started. Fortunately, we're about to help you overcome that first hurdle. If you are an AI denier, be sure to first read 'why learn Machine Learning as a non-techie' before you move forward: a strong purpose and belief is the first step to learning anything new. Alright, here's how you can get started with artificial intelligence and machine learning techniques quickly.
0. Use a free MLaaS or a no-code interactive machine learning tool to experience first hand what is possible with machine learning: Some popular no-code machine-learning-as-a-service options are Microsoft Azure, BigML, Orange, and Amazon ML. Read Q2 under the FAQ section below to know more on this topic.
1. Learn linear algebra: Linear algebra is the elementary unit of ML. It helps you effectively comprehend the theory behind machine learning algorithms and how they work, and it strengthens related skills, such as statistics and programming, that also help in ML.
Learning resources: Linear Algebra for Beginners: Open Doors to Great Careers; Linear algebra Basics
2. Learn just enough Python or any programming language: You can get started with any language that interests you, but we suggest Python, as it's great for people who are new to programming. It's easy to learn due to its simple syntax, you'll be able to implement ML algorithms quickly, and it has a rich development ecosystem offering a ton of machine learning libraries and frameworks, such as scikit-learn, Lasagne, NumPy, SciPy, Theano, and TensorFlow.
Learning resources: Python Machine Learning; Learn Python in 7 Days; Python for Beginners 2017 [Video]; Learn Python with codecademy; Python editor for beginner programmers
3. Learn basic probability theory and statistics: A lot of fundamental statistical and probability theory forms the basis of ML. You've probably already learned probability and statistics at school, which makes it easier to dive into the advanced statistics used in ML. Machine learning, in its currently widely used form, is a way to predict odds and see patterns. Knowing statistics and probability is important because it helps you understand why any machine learning algorithm works. For example, your grounding in this area will help you ask the right questions, choose the right set of algorithms, and know what to expect as answers from your ML model on questions such as:
What are the odds of this person also liking this movie, given their current movie-watching choices? (collaborative filtering and content-based filtering)
How similar is this user to the group of users who bought a bunch of stuff on my site? (clustering, collaborative filtering, and classification)
Could this person be at risk of cancer, given a certain set of traits and health indicator observations? (logistic regression)
Should you buy that stock? (decision tree)
Also, check out our interview with James D. Miller to know more about why learning stats is important in this field.
Learning resources: Statistics for Data Science [Video]
4. Learn machine learning algorithms: Do not get intimidated! You don't have to be an expert to learn ML algorithms. Knowing the basic algorithms most used in real-world applications, such as linear regression, naive Bayes, and decision trees, is enough to get you started. Learn what they do and how they are used in machine learning.
5. Learn NumPy, scikit-learn, Keras, or any other popular machine learning framework: It can be confusing at first to decide which framework to learn, as each has its own advantages and disadvantages. NumPy is a linear algebra library useful for performing mathematical and logical operations, and it lets you work easily with large multidimensional arrays. scikit-learn helps with quick implementation of popular algorithms on datasets, as just one line of code makes different algorithms available to you. Keras is minimalistic and straightforward, with high extensibility, so it is easy to approach.
Learning resources: Hands-on Machine Learning with TensorFlow [Video]; Hands-on Scikit-learn for Machine Learning [Video]
If you have reached this point, it is time to put your learning into practice. Go ahead and create a simple linear regression model using some publicly available dataset in your area of interest. Kaggle, ourworldindata.org, the UC Irvine Machine Learning repository, and elitedatascience all have rich sets of clean datasets in varied fields. Now it is necessary to commit and put in daily effort to practise these skills. Quora, Reddit, Medium, and Stack Overflow will be your best friends when it comes to resolving doubts about any of these skills. Data Helpers is another great resource that helps newcomers with queries about entering the ML field and related topics. Additionally, once you start getting the hang of these skills, identify your strengths and interests to realign your career goals, and research the kind of work you want to put your newly gained machine learning skills to use on. It needn't be professional or serious; it just needs to be something you deeply care about or are passionate about. This will pull you through your learning milestones, should you feel low at some point. Also, don't forget to collaborate with other people and learn from them. You can work with web developers, software programmers, data analysts, data administrators, game developers, and so on. Finally, keep yourself updated with all the latest happenings in the ML world: follow top experts and influencers on social media, top blogs on machine learning, and conferences. Once you have checked these steps off your list, you'll be ready to start your ML project.
Now, we'll look at the questions most frequently asked by beginners in the field of machine learning.
Frequently asked questions by beginners in ML
As a beginner, it's natural to have a lot of questions regarding ML. We'll address the top three questions frequently asked by beginners and non-programmers when it comes to machine learning:
Q.1 I am looking to make a career in machine learning but I have no prior programming experience. Do I need to know programming for machine learning?
In a nutshell, yes. If you want a career in machine learning, then having some form of programming knowledge really helps. As mentioned earlier in this article, learning a programming language can really help you with implementing ML algorithms.
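As a quick, hedged illustration of just how little code a first model takes, here is a sketch that assumes scikit-learn is installed and uses its bundled diabetes dataset, so nothing needs downloading:

```python
# A minimal first model: linear regression on scikit-learn's
# bundled diabetes dataset.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)                  # training is one line
print("R^2 on unseen data:", model.score(X_test, y_test))
```

And programming knowledge is not only about this kind of convenience.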
It also lets you know the internal mechanism behind Machine learning. So, having programming as a prior skill is great. Again, as mentioned before, you can get started with Python which is the easiest and the most common languages for ML. However, programming is just a part of Machine learning. For instance, “machine learning engineers” typically write more code than develop models, while “research scientists” work more on modelling and analyzing different models. Now, ML is based on the principles of statistical inference and for talking statistically to the computer, we need a language, there comes Coding. So, even though the nature of your job in ML might not require you to code as much, there’s still some amount of coding required. Read Also: Why is Python so good for AI and ML? 5 Python Experts Explain Top languages for Artificial Intelligence development Q.2 Are there any tools that can help me with Machine learning without touching a single line of code? Yes. With the rise of MLaaS (Machine learning as a service), there are certain tools that help you get started with machine learning right-away. These are especially useful for business applications of ML, such as predictive modelling and clustering. Read Also: How MLaaS is transforming cloud Some of the most popular ones are: BigML:  This cloud based web-service lets you upload your data, prepare it and run algorithms on it. It’s great for people with not so extensive data science backgrounds. It offers a clean and easy to use interfaces for configuring algorithms (decision trees) and reviewing the results. Being focused “only” on Machine Learning, it comes with a wide set of features, all well integrated within a usable Web UI. Other than that, it also offers an API so that if you like it you can build an application around it. Microsoft Azure: The Microsoft Azure ML studio is a “GUI-based integrated development environment for constructing and operationalizing Machine Learning workflow on Azure”. So, via an integrated development environment called ML Studio, people without data science background or non-programmers can also build data models with the help of drag-and-drop gestures and simple data flow diagrams. This also saves a lot of time through ML Studio's library of sample experiments. Learning resources: Microsoft Azure Machine Learning Machine Learning In The Cloud With Azure ML[Video] Orange: This is an open source machine learning and data visualization studio for novice and experts alike. It provides a toolbox comprising of text mining (topic modelling) and image recognition. It also offers a design tool for visual programming which allows you to connect together data preparation, algorithms, and result evaluation, thereby, creating machine learning “programs”. Apart from that, it provides over 100 widgets for the environment and there’s also a Python API and library available which you can integrate into your application. Amazon ML: Amazon ML is a part of Amazon Web Services ( AWS ) that combines powerful machine learning algorithms with interactive visual tools to guide you towards easily creating, evaluating, and deploying machine learning models. So, whether you are a data scientist or a newbie, it offers ML services and tools tailored to meet your needs and level of expertise. Building ML models using Amazon ML consists of three operations: data analysis, model training, and evaluation. 
Learning Resources: Effective Amazon Machine Learning Q.3  Do I need to know advanced mathematics ( college graduate level ) to learn Machine learning? It depends. As mentioned earlier, understanding of the following mathematical topics: Probability, Statistics and Linear Algebra can really make your machine learning journey easier and also help simplify your code. These help you understand the “why” behind the working of the machine learning algorithms, which is quite fundamental to understanding ML. However, not knowing advanced mathematics is not an excuse to not learning Machine Learning. There a lot of libraries which makes the task of applying an ML algorithm to solve a task easier. One such example is the widely used Python’s scikit-learn library. With scikit-learn, you just need one line of code and you’ll have the most common algorithms there for you, ready to be used. But, if you want to go deeper into machine learning then knowing advanced mathematics is a prerequisite as it will help you understand the algorithms, the formulas, how the learning is done and many other Machine Learning concepts. Also, with so many courses and tutorials online, you can always learn advanced mathematics on the side while exploring Machine learning. So, we looked at the three most asked questions by beginners in the field of Machine Learning. In the past, machine learning has provided us with self-driving cars, effective web search, speech recognition, etc. Machine learning is extremely pervasive, in fact, many researchers believe that ML is the best way to make progress towards human-level AI. Learning ML is not an easy task but its not next to impossible either. In the end, it all depends on the amount of dedication and efforts that you’re willing to put in to get a grasp of it. We just touched the tip of the iceberg in this article, there’s a lot more to know in Machine Learning which you will get a hang of as you get your feet dirty in it. That being said, all the best for the road ahead! Facebook launches a 6-part ML video series 7 of the best ML conferences for the rest of 2018 Google introduces Machine Learning courses for AI beginners

5 ways artificial intelligence is upgrading software engineering

Melisha Dsouza
02 Sep 2018
8 min read
47% of digitally mature organizations, or those that have advanced digital practices, said they have a defined AI strategy (Source: Adobe). It is estimated that  AI-enabled tools alone will generate $2.9 trillion in business value by 2021.  80% of enterprises are smartly investing in AI. The stats speak for themselves. AI clearly follows the motto “go big or go home”. This explosive growth of AI in different sectors of technology is also beginning to show its colors in software development. Shawn Drost, co-founder and lead instructor of coding boot camp ‘Hack Reactor’ says that AI still has a long way to go and is only impacting the workflow of a small portion of software engineers on a minority of projects right now. AI promises to change how organizations will conduct business and to make applications smarter. It is only logical then that software development, i.e., the way we build apps, will be impacted by AI as well. Forrester Research recently surveyed 25 application development and delivery (AD&D) teams, and respondents said AI will improve planning, development and especially testing. We can expect better software created under traditional environments. 5 areas of Software Engineering AI will transform The 5 major spheres of software development-  Software design, Software testing, GUI testing, strategic decision making, and automated code generation- are all areas where AI can help. A majority of interest in applying AI to software development is already seen in automated testing and bug detection tools. Next in line are the software design precepts, decision-making strategies, and finally automating software deployment pipelines. Let's take an in-depth look into the areas of high and medium interest of software engineering impacted by AI according to the Forrester Research report.     Source: Forbes.com #1 Software design In software engineering, planning a project and designing it from scratch need designers to apply their specialized learning and experience to come up with alternative solutions before settling on a definite solution. A designer begins with a vision of the solution, and after that retracts and forwards investigating plan changes until they reach the desired solution. Settling on the correct plan choices for each stage is a tedious and mistake-prone action for designers. Along this line, a few AI developments have demonstrated the advantages of enhancing traditional methods with intelligent specialists. The catch here is that the operator behaves like an individual partner to the client. This associate should have the capacity to offer opportune direction on the most proficient method to do design projects. For instance, take the example of AIDA- The Artificial Intelligence Design Assistant, deployed by Bookmark (a website building platform). Using AI, AIDA understands a users needs and desires and uses this knowledge to create an appropriate website for the user. It makes selections from millions of combinations to create a website style, focus, image and more that are customized for the user. In about 2 minutes, AIDA designs the first version of the website, and from that point it becomes a drag and drop operation. You can get a detailed overview of this tool on designshack. #2 Software testing Applications interact with each other through countless  APIs. They leverage legacy systems and grow in complexity everyday. Increase in complexity also leads to its fair share of challenges that can be overcome by machine-based intelligence. 
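As a small taste of machine-generated testing - not one of the commercial AI tools discussed below, but the freely available Python hypothesis library, which automatically searches for inputs that break a stated property - consider this sketch before the broader discussion:

```python
# Property-based testing: hypothesis generates the test inputs for you.
# normalize_discount is a made-up function standing in for code under test.
from hypothesis import given, strategies as st

def normalize_discount(percent):
    """Clamp a discount percentage into the range [0, 100]."""
    return max(0, min(100, percent))

@given(st.integers())
def test_discount_always_in_range(percent):
    result = normalize_discount(percent)
    assert 0 <= result <= 100

# Run with pytest; hypothesis tries hundreds of inputs, including the
# negative and extreme values a human tester might forget.
```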
AI tools can be used to create test information, explore information authenticity, advancement and examination of the scope and also for test management. Artificial intelligence, trained right, can ensure the testing performed is error free. Testers freed from repetitive manual tests thus have more time to create new automated software tests with sophisticated features. Also, if software tests are repeated every time source code is modified, repeating those tests can be not only time-consuming but extremely costly. AI comes to the rescue once again by automating the testing for you! With AI automated testing, one can increase the overall scope of tests leading to an overall improvement of software quality. Take, for instance, the Functionize tool. It enables users to test fast and release faster with AI enabled cloud testing. The users just have to type a test plan in English and it will be automatically get converted into a functional test case. The tool allows one to elastically scale functional, load, and performance tests across every browser and device in the cloud. It also includes Self-healing tests that update autonomously in real-time. SapFix is another AI Hybrid tool deployed by Facebook which can automatically generate fixes for specific bugs identified by 'Sapienz'. It then proposes these fixes to engineers for approval and deployment to production.   #3 GUI testing Graphical User Interfaces (GUI) have become important in interacting with today's software. They are increasingly being used in critical systems and testing them is necessary to avert failures. With very few tools and techniques available to aid in the testing process, testing GUIs is difficult. Currently used GUI testing methods are ad hoc. They require the test designer to perform humongous tasks like manually developing test cases, identifying the conditions to check during test execution, determining when to check these conditions, and finally evaluate whether the GUI software is adequately tested. Phew! Now that is a lot of work. Also, not forgetting that if the GUI is modified after being tested, the test designer must change the test suite and perform re-testing. As a result, GUI testing today is resource intensive and it is difficult to determine if the testing is adequate. Applitools is a GUI tester tool empowered by AI. The Applitools Eyes SDK automatically tests whether visual code is functioning properly or not. Applitools enables users to test their visual code just as thoroughly as their functional UI code to ensure that the visual look of the application is as you expect it to be. Users can test how their application looks in multiple screen layouts to ensure that they all fit the design. It allows users to keep track of both the web page behaviour, as well as the look of the webpage. Users can test everything they develop from the functional behavior of their application to its visual look. #4 Using Artificial Intelligence in Strategic Decision-Making Normally, developers have to go through a long process to decide what features to include in a product. However, machine learning AI solution trained on business factors and past development projects can analyze the performance of existing applications and help both teams of engineers and business stakeholders like project managers to find solutions to maximize impact and cut risk. Normally, the transformation of business requirements into technology specifications requires a significant timeline for planning. 
Machine learning can help software development companies to speed up the process, deliver the product in lesser time, and increase revenue within a short span. AI canvas is a well known tool for Strategic Decision making.The canvas helps identify the key questions and feasibility challenges associated with building and deploying machine learning models in the enterprise. The AI Canvas is a simple tool that helps enterprises organize what they need to know into seven categories, namely- Prediction, Judgement, Action, Outcome, Input, Training and feedback. Clarifying these seven factors for each critical decision throughout the organization will help in identifying opportunities for AIs to either reduce costs or enhance performance.   #5 Automatic Code generation/Intelligent Programming Assistants Coding a huge project from scratch is often labour intensive and time consuming. An Intelligent AI programming assistant will reduce the workload by a great extent. To combat the issues of time and money constraints, researchers have tried to build systems that can write code before, but the problem is that these methods aren’t that good with ambiguity. Hence, a lot of details are needed about what the target program aims at doing, and writing down these details can be as much work as just writing the code. With AI, the story can be flipped. ”‘Bayou’- an A.I. based application is an Intelligent programming assistant. It began as an initiative aimed at extracting knowledge from online source code repositories like GitHub. Users can try it out at askbayou.com. Bayou follows a method called neural sketch learning. It trains an artificial neural network to recognize high-level patterns in hundreds of thousands of Java programs. It does this by creating a “sketch” for each program it reads and then associates this sketch with the “intent” that lies behind the program. This DARPA initiative aims at making programming easier and less error prone. Sounds intriguing? Now that you know how this tool works, why not try it for yourself on i-programmer.info. Summing it all up Software engineering has seen massive transformation over the past few years. AI and software intelligence tools aim to make software development easier and more reliable. According to a Forrester Research report on AI's impact on software development, automated testing and bug detection tools use AI the most to improve software development. It will be interesting to see the future developments in software engineering empowered with AI. I’m expecting faster, more efficient, more effective, and less costly software development cycles while engineers and other development personnel focus on bettering their skills to make advanced use of AI in their processes. Implementing Software Engineering Best Practices and Techniques with Apache Maven Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018 15 millions jobs in Britain at stake with AI robots set to replace humans at workforce
8 Machine learning best practices [Tutorial]

Melisha Dsouza
02 Sep 2018
9 min read
Machine Learning introduces a huge potential to reduce costs and generate new revenue in an enterprise. Applying machine learning effectively helps in solving practical problems smartly within an organization. Machine learning automates tasks that would otherwise need to be performed by a live agent. It has made drastic improvements in the past few years, but many a time, a machine needs the assistance of a human to complete its task. This is why it is necessary for organizations to learn the machine learning best practices which you will learn in this article today. This article is an excerpt from a book written by Chiheb Chebbi titled Mastering Machine Learning for Penetration Testing.

Feature engineering in machine learning

Feature engineering and feature selection are essential to every modern data science product, especially machine learning based projects. According to research, over 50% of the time spent building a model is occupied by cleaning, processing, and selecting the data required to train it. It is your responsibility to design, represent, and select the features. Most machine learning algorithms cannot work on raw data. They are not smart enough to do so. Thus, feature engineering is needed to transform data from its raw state into data that can be understood and consumed by algorithms. Professor Andrew Ng once said: "Coming up with features is difficult, time-consuming, requires expert knowledge. 'Applied machine learning' is basically feature engineering."

Feature engineering is a process in the data preparation phase, according to the cross-industry standard process for data mining (CRISP-DM). The term feature engineering itself is not formally defined. It groups together all of the tasks for designing features to build intelligent systems, and it plays an important role in any such system. If you check data science competitions, I bet you have noticed that the competitors all use the same algorithms, but the winners perform the best feature engineering. If you want to enhance your data science and machine learning skills, I highly recommend that you visit and compete at www.kaggle.com.

When searching for machine learning resources, you will face many different terminologies. To avoid any confusion, we need to distinguish between feature selection and feature engineering. Feature engineering transforms raw data into suitable features, while feature selection extracts necessary features from the engineered data: it picks a subset of all features, excluding redundant or irrelevant ones. The sketch below illustrates the distinction.
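Here is a minimal scikit-learn sketch of the two steps side by side. The Iris dataset stands in for your own data, and the choice of scaler, scoring function, and k=2 are illustrative assumptions, not recommendations.

    from sklearn.datasets import load_iris
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, f_classif

    X, y = load_iris(return_X_y=True)

    # Feature engineering: transform the raw values into a form
    # that algorithms digest well (here, zero mean and unit variance).
    X_engineered = StandardScaler().fit_transform(X)

    # Feature selection: keep only the k most informative of the
    # engineered features, scored by an ANOVA F-test.
    selector = SelectKBest(score_func=f_classif, k=2)
    X_selected = selector.fit_transform(X_engineered, y)

    print(X.shape, "->", X_selected.shape)  # (150, 4) -> (150, 2)

Note the order: engineering produces candidate features first, and selection then prunes them, which matches the distinction drawn above.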
Machine learning best practices

Feature engineering enhances the performance of our machine learning system. Here, we discuss some tips and best practices to build robust intelligent systems. Let's explore some of the best practices in the different aspects of machine learning projects.

Information security datasets

Data is a vital part of every machine learning model. To train models, we need to feed them datasets. While reading the earlier chapters, you will have noticed that to build an accurate and efficient machine learning model, you need a huge volume of data, even after cleaning it. Big companies with great amounts of available data use their internal datasets to build models, but small organizations, like startups, often struggle to acquire such a volume of data. International rules and regulations make the mission harder, because data privacy is an important aspect of information security. Every modern business must protect its users' data. To solve this problem, many institutions and organizations deliver publicly available datasets, so that others can download them and build their models for educational or commercial use. Some information security datasets are as follows:

The Controller Area Network (CAN) dataset for intrusion detection (OTIDS): http://ocslab.hksecurity.net/Dataset/CAN-intrusion-dataset
The car-hacking dataset for intrusion detection: http://ocslab.hksecurity.net/Datasets/CAN-intrusion-dataset
The web-hacking dataset for cyber criminal profiling: http://ocslab.hksecurity.net/Datasets/web-hacking-profiling
The API-based malware detection system (APIMDS) dataset: http://ocslab.hksecurity.net/apimds-dataset
The intrusion detection evaluation dataset (CICIDS2017): http://www.unb.ca/cic/datasets/ids-2017.html
The Tor-nonTor dataset: http://www.unb.ca/cic/datasets/tor.html
The Android adware and general malware dataset: http://www.unb.ca/cic/datasets/android-adware.html

Use Project Jupyter

The Jupyter Notebook is an open source web application used to create and share coding documents. I highly recommend it, especially for novice data scientists, for many reasons. It gives you the ability to code and visualize output directly. It is great for discovering and playing with data, and exploring data is an important step in building machine learning models. Jupyter's official website is http://jupyter.org/. To install it using pip, simply type the following:

    python -m pip install --upgrade pip
    python -m pip install jupyter

Speed up training with GPUs

As you know, even with good feature engineering, training in machine learning is computationally expensive. The quickest way to train learning algorithms is to use graphics processing units (GPUs). Generally, though not in all cases, using GPUs is a wise decision for training models. In order to overcome CPU performance bottlenecks, the gather/scatter GPU architecture is best, performing parallel operations to speed up computing. TensorFlow supports the use of GPUs to train machine learning models. The devices are represented as strings; for example:

    "/device:GPU:0" : your machine's first GPU
    "/device:GPU:1" : your machine's second GPU

To pin operations to a GPU device in TensorFlow, you can use a with block (the operations inside are illustrative):

    with tf.device('/device:GPU:0'):
        a = tf.constant([1.0, 2.0, 3.0])
        b = a * 2  # ops created in this block run on the first GPU

You can use a single GPU or multiple GPUs. Don't forget to install the CUDA toolkit, by using the following commands:

    wget "http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.44-1_amd64.deb"
    sudo dpkg -i cuda-repo-ubuntu1604_8.0.44-1_amd64.deb
    sudo apt-get update
    sudo apt-get install cuda

Install cuDNN as follows:

    sudo tar -xvf cudnn-8.0-linux-x64-v5.1.tgz -C /usr/local
    export PATH=/usr/local/cuda/bin:$PATH
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
    export CUDA_HOME=/usr/local/cuda

Selecting models and learning curves

To improve the performance of machine learning models, there are many hyperparameters to adjust, and tuning them by hand is slow and error-prone. scikit-learn provides a method called GridSearchCV that searches over predefined parameter values through cross-validated iterations, using the score() function by default. To use it, import it with this line (the import path is sklearn.model_selection in current scikit-learn; older releases used the now-removed sklearn.grid_search), and see the sketch that follows:

    from sklearn.model_selection import GridSearchCV
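Here is a minimal sketch of GridSearchCV tuning a support vector classifier; the dataset and grid values are illustrative assumptions, so substitute your own model and parameter ranges.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # Hypothetical grid: every combination of these values is tried.
    param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}

    # cv=5 means each combination is scored with 5-fold cross-validation.
    search = GridSearchCV(SVC(), param_grid, cv=5)
    search.fit(X, y)

    print(search.best_params_, search.best_score_)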
Learning curves are used to understand the performance of a machine learning model. To use a learning curve in scikit-learn, import it into your Python project as follows (again via sklearn.model_selection; the old sklearn.learning_curve module has been removed):

    from sklearn.model_selection import learning_curve

Machine learning architecture

In the real world, data scientists do not find data as clean as the publicly available datasets. Real world data is stored by different means, and the data itself is shaped in different categories. Thus, machine learning practitioners need to build their own systems and pipelines to achieve their goals and train their models. A typical machine learning project respects a well-defined architecture, from data ingestion and processing through model training, evaluation, and deployment.

Coding

Good coding skills are very important to data science and machine learning. In addition to using effective linear algebra, statistics, and mathematics, data scientists should learn how to code properly. As a data scientist, you can choose from many programming languages, like Python, R, Java, and so on. Respecting coding best practices is very helpful and highly recommended. Writing elegant, clean, and understandable code can be done through these tips:

Comments are very important to understandable code. So, don't forget to comment your code, all of the time.
Choose the right names for variables, functions, methods, packages, and modules.
Use four spaces per indentation level.
Structure your repository properly.
Follow common style guidelines.

If you use Python, you can follow this great aphorism, called The Zen of Python, written by the legend, Tim Peters:

"Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than *right* now. If the implementation is hard to explain, it's a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea -- let's do more of those!"

Data handling

Good data handling leads to successfully building machine learning projects. After loading a dataset, please make sure that all of the data has loaded properly, and that the reading process is performing correctly. After performing any operation on the dataset, check over the resulting dataset.

Business contexts

An intelligent system is highly connected to business aspects because, after all, you are using data science and machine learning to solve a business issue, to build a commercial product, or to get useful insights from the data that is acquired, to make good decisions. Identifying the right problems and asking the right questions are important when building your machine learning model, in order to solve business issues.

In this tutorial, we had a look at some tips and best practices to build intelligent systems using machine learning.
To become a master at penetration testing using machine learning with Python, check out the book Mastering Machine Learning for Penetration Testing.

Why TensorFlow always tops machine learning and artificial intelligence tool surveys
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
Tackle trolls with Machine Learning bots: Filtering out inappropriate content just got easy
12 ubiquitous artificial intelligence powered apps that are changing lives

Bhagyashree R
30 Aug 2018
11 min read
Artificial Intelligence is making it easier for people to do things every day. You can schedule your day, search for photos of loved ones, type emails on the go, or get things done with a virtual assistant. AI also provides innovative ways of tackling existing problems, from healthcare to advancing scientific discovery. According to Gartner's Top 10 Strategic Technology Trends for 2018, the next few years will see every app, application, and service incorporating AI at some level. With major companies like Google, Amazon, and IBM investing in AI and incorporating it into their products, this statement is becoming a fact rather than a prediction. Apple's iPhone X comes with a facial recognition system, Samsung has Bixby, Amazon has Alexa, Google has Google Assistant, and there is the recently launched Android Pie. Android Pie learns your preferences based on your usage patterns and gets better over time. It even provides you a breakdown of the time you spend on your phone. AI comes with endless possibilities; things that we used to dream of are now becoming a part of our day to day life. So, I have listed here, in no particular order, some of those innovative applications:

Microsoft's Seeing AI - Eye for the visually impaired

Source: Microsoft

Seeing AI is a perfect example of how technology is improving our lives. It is an intelligent camera app that uses computer vision to audibly help blind and visually impaired people know about their surroundings. It comes with functionalities like reading out short text and documents for you, describing a person, and identifying currencies, colour, handwriting, light, and even images in other apps, using the device's camera. A data scientist named Anirudh Koul started this project (called Deep Vision earlier) to help his grandfather, who was gradually losing his vision. Two breakthroughs by Microsoft researchers enabled him to take his idea further: vision-to-language and image classification. To make the app this advanced and real-time, they used the idea of making servers communicate with Microsoft Cognitive Services. The app brings four technologies together to provide users with an array of functionalities: OCR, a barcode scanner, facial recognition, and scene recognition. Check out this YouTube tutorial to understand how it works.

Download: App Store

Ada - Healthcare in your hand

Source: Digital Health

Ada, with a very simple and conversational UI, helps you understand what could be wrong if you or someone you care about is not feeling well. Just like any doctor's appointment, it starts with your basic details, then does an assessment, in which it asks several personalized questions related to the symptoms, and then gives a report. The report consists of a summary, possible causes, and less-likely causes. It also allows you to share the report as a PDF. After training over several years on real world cases, Ada has become a handy health advisor. Its platform is powered by a sophisticated artificial intelligence engine combined with a large medical knowledge base covering many thousands of conditions, symptoms, and findings. In every medical assessment, Ada takes all of a patient's information into account, including past medical history, symptoms, risk factors, and more. Using machine learning and multiple closed feedback loops, Ada becomes more intelligent.
Download: App Store | Google Play Store

Plume Air Report - An air pollution monitor

Source: Plume Labs Blog

Industrialization and urbanization definitely come with their side effects, the main one being air pollution. Keeping yourself completely safe from pollution may be impossible, but now at least you can be aware of the air pollution levels in your area. Plume Air Report forecasts how air quality will evolve hour by hour over the next 24 hours, similar to a weather forecast. You can also easily compare the air quality between cities. It gives you insight into all pollutants (PM2.5, PM10, O3, NO2), with absolute concentration levels and your local air quality scale. It uses machine learning and atmospheric sciences to deliver real-time and hourly forecast air quality data. First, the latest pollution levels are collected from over 12,000 monitoring stations and 80 public agencies around the world and then filtered for errors. Local atmospheric data (wind, temperature, atmosphere, etc.) is sourced to track its influence on pollution levels in your city. A team of data scientists analyzes local specifics such as geographical features and human activities. Finally, AI algorithms and atmospheric models are developed that turn this giant amount of data into hourly forecasts.

Download: App Store | Google Play Store

Aura - Mindfulness meets AI

Source: Popular Science

In this fast life, slow down a little and give yourself a time out with Aura. Aura is a new kind of mindfulness app that learns about you and simplifies your learning through guided meditations. It helps in reducing stress and increases positivity through 3-minute meditations, personalized by artificial intelligence. Aura is an intelligent app that leverages machine learning to give you a unique experience. After every exercise, you can rate your experience and Aura will learn how to provide more tailored meditations according to your needs. You can even track your mood and learn your mood patterns.

Download: App Store | Google Play Store

Replika - An emotive chatbot as a friend for life

Source: Medium

Want to be friends with someone who is always there to listen to you, talk to you, and never judge you? Then Replika is for you! It helps you make a real connection with an unreal friend. The idea for building Replika came from a very tragic background. The founder of the software company Luka, Eugenia Kuyda, lost her best friend in an accident in November 2015. She used to go through their messenger texts to bring back their memories. This is how she got the idea to develop a chatbot, making it learn from the sample texts sent by her best friend. In her own words, "Most of the companies try to build an app that talks, but we tried to build an app that could listen well." The chatbot uses a neural network facilitating more natural one-on-one conversation with its user, and over time, it learns how to speak like them. The source code is freely available for developers under the name CakeChat. It comes with a pre-trained model that you can use as is to run a chatbot that maintains a conversation in a certain emotional state. You can also build a variety of other conversational agents by using your own dataset, for example, a persona-based model, an emotional chatting machine, or a topic-centric model. To know more about the background and evolution of Replika, check out this amazing YouTube video.
Download: App Store | Google Play Store

Google Assistant - Your personal Google

Source: Google Assistant

When talking of AI-powered apps, voice assistants probably come to mind first. Google Assistant makes your life easier and helps in organizing your day better. You can manage your little tasks, plan your day, enjoy entertainment, and get answers. It can also sync with your other devices, including Google Home, smart TVs, laptops, and more. To give users smart assistance, Google Assistant relies on artificial intelligence technologies such as natural language processing, natural language understanding, and machine learning to understand what the user is saying, and to make suggestions or act on that language input.

Download: App Store | Google Play Store

Hound - Say it, Get it

Source: Android Apps

In an array of virtual assistants to choose from, Hound understands your voice commands better. You do not need to give "search query" style commands and can have a more natural conversation. Hound can be used for a variety of tasks, some of which are: search, discover, and play music; set alarms, timers, and reminders; call, text, and navigate hands-free; and get the weather forecast. Hound's speed and accuracy come from the powerful Houndify platform. This platform combines speech recognition and natural language understanding into a single step, which is called Speech-to-Meaning.

Download: App Store | Google Play Store

Picai - An app that picks filters for your pics, keeping you looking your best always

Source: Google Play Store

Picai, with the help of artificial intelligence, recommends picture-perfect filters by analyzing the scene. It automatically analyzes the scene and, with the help of object recognition, detects the type of object, for example, a plant or a girl. It then uses a proprietary deep learning model to recommend two optimum filters from more than 100. What makes this app stand out is the split-screen filter selection, which makes choosing a filter easier for users. When using this app, be warned of the picture quality and app size (76 MB), but it is definitely worth trying!

Download: Google Play Store

Microsoft Pix - The pro photographer

Source: MSPoweruser

Named one of the 50 Best Apps of the Year by Time Magazine, Microsoft Pix helps you take better photos without the extra effort! It solves the problem of "not living in the moment". It comes with some amazing features like hyperlapse, live images, Microsoft Pix Comix, artistic styles to transform your photos, and smart settings that automatically check scene and lighting between each shutter tap and update settings between each shot. Microsoft Pix uses artificial intelligence to improve the image, such as cropping edges, enhancing color and tone, and sharpening focus. It includes enhanced deep-learning capabilities around image understanding. It captures a burst of 10 frames with each shutter click and uses AI to select the three best shots. Before the remaining photos are deleted, it uses data from the entire burst to remove noise. These best, enhanced images are ready in about a second. The app also detects whether your eyes are open or not using facial recognition technology.

Download: App Store

ELSA - Your machine learning English teacher

Source: TechCrunch

ELSA (English Language Speech Assistant) helps you learn English and better your pronunciation every day. It provides a curriculum tailored just for you, regular feedback, progress tracking, and common phrases used in daily life.
You can practice in a relaxed environment and improve your speaking skills to prepare for the TOEFL, IELTS, and TOEIC. ELSA coaches you in improving your English pronunciation by using speech recognition, deep learning, and artificial intelligence.

Download: App Store | Google Play Store

Socratic - Homework in a snap

Source: Google Play Store

Socratic is your new helper, apart from your parents, in completing those complex math problems. You just need to take a photo of your homework and you get explanations, videos, and step-by-step help, instantly. Also, these resources are jargon-free, helping you understand the concepts better. It supports all subjects, including Math (Algebra, Calculus, Statistics, Graphing, etc.), Science, Chemistry, History, English, Economics, and more. Socratic uses artificial intelligence to figure out the concepts you need to learn in order to answer a question. For this it combines cutting-edge computer vision technologies, which read questions from images, with machine learning classifiers. These classifiers are built using millions of sample homework questions, to accurately predict which concepts will help you solve your question.

Download: App Store | Google Play Store

Recent News - Stay informed

Source: Recent News

Recent News is an app that provides you customized news. Some of the features it comes with to give you your daily dose of news include a one-minute news summary with very quick load time; hot news, local news, and personalized recommendations; instant sharing of news on Facebook, Twitter, and other social networks; and many more. It uses artificial intelligence to learn about your interests, suggest relevant articles, and propose topics you might like to follow. So, the more you use it, the better it becomes! The app is surely innovative and saves time, but I do wish the developers had applied some innovation to the app's name as well :P

Download: App Store | Google Play Store

And that's the end of my list. People say, "Smartphones and apps are becoming smarter, and we are becoming dumber." But I would like to say that these apps, with the right usage, empower us to become smarter. Agree?

7 Popular Applications of Artificial Intelligence in Healthcare
5 examples of Artificial Intelligence in Web apps
What Should We Watch Tonight? Ask a Robot, says Matt Jones from OVO Mobile [Interview]
NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?

Prasad Ramesh
29 Aug 2018
8 min read
For readers who are new to deep learning and who might be wondering what a GPU is, let's start there. To make it simple, consider deep learning as nothing more than a set of calculations - complex calculations, yes, but calculations nonetheless. To run these calculations, you need hardware. Ordinarily, you might just use a normal processor like the CPU inside your laptop. However, this isn't powerful enough to process at the speed at which deep learning computations need to happen. GPUs, however, can. This is because while a conventional CPU has only a few complex cores, a GPU can have thousands of simple cores. With a GPU, training a deep learning data set can take just hours instead of days. However, although it's clear that GPUs have significant advantages over CPUs, there is nevertheless a range of GPUs available, each having its own individual differences. Selecting one is ultimately a matter of knowing what your needs are. Let's dig deeper and find out how to go about shopping for GPUs.

What to look for before choosing a GPU?

There are a few specifications to consider before picking a GPU.

Memory bandwidth: This determines the capacity of a GPU to handle large amounts of data. It is the most important performance metric, as with faster memory bandwidth more data can be processed at higher speeds.
Number of cores: This indicates how fast a GPU can process data. A large number of CUDA cores can handle large datasets well. CUDA cores are parallel processors similar to the cores in a CPU, but they number in the thousands and are not suited to the complex calculations that a CPU core can perform.
Memory size: For computer vision projects, it is crucial for memory size to be as large as you can afford. But with natural language processing, memory size does not play such an important role.

Our pick of GPU devices to choose from

The go-to choice here is NVIDIA; they have standard libraries that make it simple to set things up. Other graphics cards are not very friendly in terms of the libraries supported for deep learning. The NVIDIA CUDA Deep Neural Network library also has a good development community.

"Is NVIDIA Unstoppable In AI?" -Forbes
"Nvidia beats forecasts as sales of graphics chips for AI keep booming" -SiliconANGLE

AMD GPUs are powerful too but lack the library support to get things running smoothly. It would be really nice to see some AMD libraries being developed to break the monopoly and give more options to consumers.

NVIDIA RTX 2080 Ti: The RTX line of GPUs is to be released in September 2018. The RTX 2080 Ti will be twice as fast as the 1080 Ti. The price listed on NVIDIA's website for the founder's edition is $1,199.
RAM: 11 GB
Memory bandwidth: 616 GB/s
Cores: 4352 cores @ 1545 MHz

NVIDIA RTX 2080: This is more cost efficient than the 2080 Ti, at a listed price of $799 on the NVIDIA website for the founder's edition.
RAM: 8 GB
Memory bandwidth: 448 GB/s
Cores: 2944 cores @ 1710 MHz

NVIDIA RTX 2070: This is also more cost efficient than the 2080 Ti, at a listed price of $599 on the NVIDIA website. Note that the non-founder's editions of the RTX cards will likely be cheaper, with around a $100 difference.
RAM: 8 GB
Memory bandwidth: 448 GB/s
Cores: 2304 cores @ 1620 MHz

NVIDIA GTX 1080 Ti: Priced at $650 on Amazon. This is a higher end option but offers great value for money, and can also do well in Kaggle competitions. If you need more memory but cannot afford the RTX 2080 Ti, go for this.
RAM: 11 GB
Memory bandwidth: 484 GB/s
Cores: 3584 cores @ 1582 MHz

NVIDIA GTX 1080: Priced at $584 on Amazon. This is a mid-to-high end option, only slightly behind the 1080 Ti.
VRAM: 8 GB
Memory bandwidth: 320 GB/s
Processing power: 2560 cores @ 1733 MHz

NVIDIA GTX 1070 Ti: Priced at around $450 on Amazon. This is slightly less performant than the GTX 1080 but $100 cheaper.
VRAM: 8 GB
Memory bandwidth: 256 GB/s
Processing power: 2438 cores @ 1683 MHz

NVIDIA GTX 1070: Priced at $380 on Amazon, it is currently the bestseller because of crypto miners. Somewhat slower than the 1080 GPUs but cheaper.
VRAM: 8 GB
Memory bandwidth: 256 GB/s
Processing power: 1920 cores @ 1683 MHz

NVIDIA GTX 1060 6GB: Priced at around $290 on Amazon. Pretty cheap, but the 6 GB of VRAM limits you. Should be good for NLP, but you'll find the performance lacking in computer vision.
VRAM: 6 GB
Memory bandwidth: 216 GB/s
Processing power: 1280 cores @ 1708 MHz

NVIDIA GTX 1050 Ti: Priced at around $200 on Amazon. This is the cheapest workable option. Good to get started with deep learning and explore if you're new.
VRAM: 4 GB
Memory bandwidth: 112 GB/s
Processing power: 768 cores @ 1392 MHz

NVIDIA Titan XP: The Titan XP is also an option, but it gives only marginally better performance while being almost twice as expensive as the GTX 1080 Ti. It has 12 GB memory, 547.7 GB/s bandwidth, and 3840 cores @ 1582 MHz.

On a side note, NVIDIA Quadro GPUs are pretty expensive and don't really help in deep learning; they are more of use in CAD and heavy graphics production tasks.

The graph below does a pretty good job of visualizing how all the GPUs above compare:

Source: Slav Ivanov Blog; processing power is calculated as CUDA cores times the clock frequency. You can reproduce this metric yourself from the spec sheets above, as in the sketch below.
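As a quick illustration of that metric, here is a tiny Python sketch that computes the cores-times-clock proxy from the numbers quoted above. It is only a rough comparison figure, not a benchmark: real-world throughput also depends on memory bandwidth, architecture, and framework support.

    # Relative "processing power" proxy: CUDA cores x boost clock.
    # Figures taken from the spec list in this article.
    gpus = {
        "RTX 2080 Ti": (4352, 1545),
        "GTX 1080 Ti": (3584, 1582),
        "GTX 1070":    (1920, 1683),
        "GTX 1050 Ti": (768, 1392),
    }

    for name, (cores, clock_mhz) in gpus.items():
        score = cores * clock_mhz / 1e6  # millions of core-MHz
        print(f"{name}: {score:.2f}")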
Does the number of GPUs matter?

Yes, it does. But how many do you really need? What's going to suit the scale of your project without breaking your budget? Two GPUs will always yield better results than just one, but it's only really worth it if you need the extra power. There are two options you can take with multi-GPU deep learning. On the one hand, you can train several different models at once across your GPUs; alternatively, you can distribute one single training model across multiple GPUs, known as "multi-GPU training". The latter approach is compatible with TensorFlow, CNTK, and PyTorch. Both of these approaches have advantages. Ultimately, it depends on how many projects you're working on and, again, what your needs are. Another important point to bear in mind is that if you're using multiple GPUs, the processor and hard disk need to be fast enough to feed data continuously; otherwise the multi-GPU approach is pointless.

Source: NVIDIA website

It boils down to your needs and budget; GPUs aren't exactly cheap.

Other heavy devices

There are also other large machines apart from GPUs. These include the specialized supercomputer from NVIDIA, the DGX-2, and Tensor Processing Units (TPUs) from Google.

The NVIDIA DGX-2

If you thought GPUs were expensive, let me introduce you to the NVIDIA DGX-2, the successor to the NVIDIA DGX-1. It's a highly specialized workstation; consider it a supercomputer that has been specially designed to tackle deep learning. The price of the DGX-2 is (*gasp*) $399,000. Wait, what? I could buy some new hot wheels for that. Or, for the same money: dual Intel Xeon Platinum 8168s (2.7 GHz, 24 cores each), 16 NVIDIA GPUs, 1.5 terabytes of RAM, and nearly 32 terabytes of SSD storage! The performance here is 2 petaFLOPS.

Let's be real: many of us probably won't be able to afford it. However, NVIDIA does have leasing options, should you choose to try it. Practically speaking, this kind of beast finds its use in research work. In fact, the first DGX-1 was gifted to OpenAI by NVIDIA to promote AI research. Visit the NVIDIA website for more on these monster machines. There are also personal solutions available, like the NVIDIA DGX Workstation.

TPUs

Now that you've caught your breath after reading about AI dream machines, let's look at TPUs. Unlike the DGX machines, TPUs run on the cloud. A TPU is what's referred to as an application-specific integrated circuit (ASIC) that has been designed specifically for machine learning and deep learning by Google. Here are the key stats: Cloud TPUs can provide up to 11.5 petaFLOPS of performance in a single pod. If you want to learn more, go to Google's website.

When choosing GPUs you need to weigh up your options

The GTX 1080 Ti is most commonly used by researchers and competitively for Kaggle, as it gives good value for money. Go for this if you are sure about what you want to do with deep learning. The GTX 1080 and GTX 1070 Ti are cheaper, with less computing power, and are more budget-friendly options if you cannot afford the 1080 Ti. The GTX 1070 saves you some more money but is slower. The GTX 1060 6GB and GTX 1050 Ti are good if you're just starting off in the world of deep learning without burning a hole in your pocket. If you must have the absolute best GPU irrespective of the cost, then the RTX 2080 Ti is your choice. It offers twice the performance for almost twice the cost of a 1080 Ti.

Nvidia unveils a new Turing architecture: "The world's first ray tracing GPU"
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
Nvidia's Volta Tensor Core GPU hits performance milestones. But is it the best?
A Machine learning roadmap for Web Developers

Sugandha Lahoti
27 Aug 2018
7 min read
Now that you've opened this article, I'll assume you're a web developer who is excited by the prospect of building a machine learning project. You may be here for one of these reasons. Either you have been in a circle of people who claim web development is dying (is it really dying, or just unwell?), or maybe you are stagnating in your current trajectory and want to learn something different, something trending, something like Artificial Intelligence. Or you, your employer, or your client is aware of the capabilities of machine learning and wants to include it in some part of your web app to make it more powerful. Or, like the majority of folks, you just want to see first hand whether all the fuss about artificial intelligence is really worth the effort of switching gears, by building a toy ML side project. Either way, there are different approaches to fulfill these needs.

Learning Machine Learning for the Web with Javascript

Learning machine learning coming from a web development background comes with its own constraints. You might worry about having to learn entirely different concepts from scratch - from different algorithms to programming languages like Python to mathematical concepts like linear algebra, calculus, and statistics. However, chances are you can skip learning a new language. You probably know some Javascript in some form or the other, thanks to your web development experience. As such, you can learn Machine Learning in JavaScript (you don't have to learn another programming language from scratch) and take it right to your browsers with WebGL. There are some advantages to using JavaScript for ML. Its popularity is one: while JavaScript's ML ecosystem is not yet as popular as Python's, the language itself certainly is. As demand for ML applications rises, and as hardware becomes faster and cheaper, it's only natural for machine learning to become more prevalent in the JavaScript world. The JavaScript ecosystem offers a rich set of libraries suited to most machine learning tasks:

Math: math.js
Data analysis: d3.js
Server: node.js (express, koa, hapi)
Performance: Tensorflow.js (e.g. GPU accelerated via the WebGL API in the browser), Keras.js, etc.

Read also: 5 JavaScript machine learning libraries you need to know

BRIIM is a good collection of materials to get you started as a web developer or JavaScript enthusiast in machine learning. In case you're interested in learning Python instead of Javascript, here is the set of libraries you should pick:

Math: numpy
Data analysis: Pandas
Data mining: PySpark
Server: Flask, Django
Performance: TensorFlow (because it is written with a Python API over a C/C++ engine) or Keras (sits on top of TensorFlow)

Using Machine Learning as a service

If you don't want to spend your time learning frameworks, tools, and languages suited for machine learning, you can adopt Machine Learning as a Service, or MLaaS. These services provide machine learning tools as part of cloud computing services. So basically, you can benefit from machine learning without the allied cost, time, and risk of establishing an in-house machine learning team. All you need is sufficient knowledge of incorporating APIs; a minimal sketch of calling such a service appears below. All machine learning tasks, including data pre-processing, model training, model evaluation, and prediction, can be completed through MLaaS.

Read also: How machine learning as a service is transforming cloud
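For a flavor of what "incorporating APIs" means in practice, here is a hypothetical real-time prediction call against Amazon Machine Learning using boto3. The model ID, endpoint URL, and record fields are placeholders; you would substitute the values from your own trained model.

    import boto3

    client = boto3.client("machinelearning", region_name="us-east-1")

    response = client.predict(
        MLModelId="ml-EXAMPLEMODELID",  # placeholder model ID
        Record={"feature1": "42", "feature2": "red"},  # your input features, as strings
        PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
    )

    # The service returns the predicted label or value for the record.
    print(response["Prediction"])

The appeal is that the heavy lifting (training, hosting, scaling) stays on the provider's side; your web app only ships feature values and reads back a prediction.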
A large number of companies provide Machine Learning as a service. The most prominent ones include:

Amazon Machine Learning

Amazon ML makes it easy for web developers to build smart applications using simple APIs. This includes applications for fraud detection, demand forecasting, targeted marketing, and click prediction. Amazon offers a Developer Guide, which provides a conceptual overview of Amazon ML and includes detailed instructions for using the service. There is also an API reference, which describes all the API operations and provides sample requests and responses for supported web service protocols.

Azure ML web app templates

The web app templates available in the Azure Marketplace can build a custom web app that knows your web service's input data and expected results. All you need to do is give the web app access to your web service and data, and the template does the rest. There are two available templates: the Azure ML Request-Response Service Web App Template and the Azure ML Batch Execution Service Web App Template. Each template creates a sample ASP.NET application by using the API URI and key for your web service. The template then deploys the application as a website to Azure. No coding is required to use these templates. You just supply the API key and URI, and the template builds the application for you.

Google Cloud based APIs

Google also provides machine learning services, with pre-trained models and a service to generate your own tailored models. Google's Cloud AutoML is a suite of machine learning products that enables developers with limited machine learning expertise to train high-quality models specific to their business needs. Cloud AutoML is used by Disney on their website shopDisney to enhance the guest experience through more relevant search results, expedited discovery, and product recommendations.

Building Conversational Interfaces

As a web developer, another thing you might be looking into is developing conversational interfaces, or chatbots, to enhance your web apps. Amazon, Google, and Microsoft provide machine learning powered tools to help developers build their own chatbots.

Amazon Lex

You can embed chatbots in your web apps with Amazon Lex, featuring ASR (Automatic Speech Recognition) and NLP (Natural Language Processing) capabilities. The API can recognize written and spoken text, and the Lex interface allows you to hook the recognized inputs to various back-end solutions. Lex currently supports deploying chatbots for Facebook Messenger, Slack, and Twilio. A minimal sketch of talking to a Lex bot from Python follows.
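As a sketch of what the server side of such an integration might look like, here is a hypothetical text exchange with a Lex (V1) bot through boto3's lex-runtime client. The bot name, alias, and user ID are placeholders for a bot you would have defined in the Lex console.

    import boto3

    lex = boto3.client("lex-runtime", region_name="us-east-1")

    response = lex.post_text(
        botName="OrderFlowers",   # placeholder bot name
        botAlias="prod",          # placeholder alias
        userId="web-user-42",     # any stable ID that identifies the conversation
        inputText="I would like to order some roses",
    )

    # Lex returns the matched intent, the bot's reply, and the dialog state.
    print(response.get("intentName"), "->", response.get("message"))

In a web app you would typically wrap this call in a small endpoint of your own, so the browser never holds AWS credentials.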
Google Dialogflow

Google's Dialogflow can build voice and text-based conversational interfaces, such as voice apps and chatbots, powered by AI. Dialogflow incorporates Google's machine learning expertise and products such as Google Cloud Speech-to-Text. The API can be tweaked and customized for the needed intents using Java, Node.js, and Python. It is also available as an enterprise edition.

Microsoft Azure Cognitive Services

Microsoft Cognitive Services simplify a variety of AI-based tasks, giving you a quick way to add intelligence technologies to your bots with just a few lines of code. They provide tools and APIs for aiding the development of conversational interfaces. These include:

Translator Speech API
Bing Speech API, to convert text into speech and speech into text
Speaker Recognition API, for voice verification tasks
Custom Speech Service, to apply Azure NLP capacities using your own data and models
Language Understanding Intelligent Service (LUIS), an API that analyzes intentions in text to be recognized as commands
Text Analysis API, for sentiment analysis and defining topics
Bing Spell Check
Translator Text API
Web Language Model API, which estimates probabilities of word combinations and supports word autocompletion
Linguistic Analysis API, used for sentence separation, tagging parts of speech, and dividing texts into labeled phrases

Read also: Top 4 chatbot development frameworks for developers

These tools should be enough to get you off the ground quickly and move into a specific area of machine learning. Ultimately, your choice of tool relies on the kind of application you want to build, your level of expertise, and how much time and effort you're willing to put into learning. Obviously, depending on your area of choice, you will have to do more research and develop yourself in those areas.

How should web developers learn machine learning?
5 examples of Artificial Intelligence in Web apps
The most valuable skills for web developers to learn in 2018
Why TensorFlow always tops machine learning and artificial intelligence tool surveys

Sunith Shetty
23 Aug 2018
9 min read
TensorFlow is an open source machine learning framework for carrying out high-performance numerical computations. It provides excellent architecture support, which allows easy deployment of computations across a variety of platforms ranging from desktops to clusters of servers, mobiles, and edge devices. Have you ever wondered why TensorFlow has become so popular in such a short span of time? What made TensorFlow so special that we are seeing a huge surge of developers and researchers opting for it? Interestingly, when it comes to artificial intelligence framework showdowns, you will find TensorFlow emerging as a clear winner most of the time. The major credit goes to its soaring popularity and contributions across various forums such as GitHub, Stack Overflow, and Quora. The fact is, TensorFlow is being used in over 6,000 open source repositories, showing its roots in many real-world research projects and applications.

How TensorFlow came to be

The library was developed by a group of researchers and engineers from the Google Brain team within Google's AI organization. They wanted a library that provides strong support for machine learning, deep learning, and advanced numerical computations across different scientific domains. Since Google open sourced the framework in 2015, TensorFlow has grown in popularity, with more than 1,500 project mentions on GitHub. The constant updates made to the TensorFlow ecosystem are the real cherry on the cake. They have ensured that the new challenges developers and researchers face are addressed, easing complex computations and providing newer features and performance improvements with the support of high-level APIs. By open sourcing the library, the Google research team has received all the benefits of a huge set of contributors outside its existing core team. Their idea was to make TensorFlow popular by open sourcing it, thus making sure all new research ideas are implemented in TensorFlow first, allowing Google to productize those ideas.

Read also: 6 reasons why Google open sourced TensorFlow

What makes TensorFlow different from the rest?

With more and more research and real-life use cases going mainstream, we can see a big trend of programmers and developers flocking towards TensorFlow. Its popularity is quite evident, with big names adopting it for carrying out artificial intelligence tasks. Many popular companies such as NVIDIA, Twitter, Snapchat, and Uber are using TensorFlow for their major operations and research areas. On one hand, someone could make the case that TensorFlow's popularity is based on its origins: developed in house at Google, TensorFlow enjoys the reputation of a household name, and there is no doubt it has been better marketed than some of its competitors.

Source: The Data Incubator

However, that's not the full story. There are many other compelling reasons why small scale to large scale companies prefer TensorFlow over other machine learning tools.

TensorFlow key functionalities

TensorFlow provides an accessible and readable syntax, which is essential for making these programming resources easier to use. Complex syntax is the last thing developers need, given machine learning's advanced nature. TensorFlow provides excellent functionalities and services when compared to other popular deep learning frameworks.
These high-level operations are essential for carrying out complex parallel computations and for building advanced neural network models. At the same time, TensorFlow is a low-level library which provides more flexibility, so you can define your own functionalities or services for your models. This is a very important parameter for researchers, because it allows them to change a model based on changing user requirements. TensorFlow also provides more network control, allowing developers and researchers to understand how operations are implemented across the network and to keep track of new changes over time.

Distributed training

The trend of distributed deep learning began in 2017, when Facebook released a paper showing a set of methods to reduce the training time of a convolutional neural network model. The test was done on the RESNET-50 model on the ImageNet dataset, which took one hour to train instead of two weeks, using 256 GPUs spread over 32 servers. This revolutionary test opened the gates for much research work that has massively reduced experimentation time by running many tasks in parallel on multiple GPUs. Google's distributed TensorFlow has allowed researchers and developers to scale out complex distributed training using in-built methods and operations that optimize distributed deep learning among servers. The distributed TensorFlow engine, which is part of the regular TensorFlow repo, works exceptionally well with TensorFlow's existing operations and functionalities. It has allowed the exploration of two of the most important distributed methods:

Distributing the training of a neural network model over many servers to reduce training time.
Searching for good hyperparameters by running parallel experiments over multiple servers.

Google has given the distributed TensorFlow engine the required power to steal the share of the market acquired by other distributed projects such as Microsoft's CNTK, AMPLab's SparkNet, and CaffeOnSpark. Even though the competition is tough, Google has still managed to become more popular than the other alternatives in the market.

From research to production

Google has, in some ways, democratized deep learning. The key reason is TensorFlow's high-level APIs, which make deep learning accessible to everyone. TensorFlow provides pre-built functions and advanced operations to ease the task of building different neural network models. It provides the required infrastructure and hardware support, which makes it one of the leading libraries used extensively by researchers and students in the deep learning domain. In addition to research tools, TensorFlow extends its services by bringing models to production using TensorFlow Serving. It is specifically designed for production environments, and provides a flexible, high-performance serving system for machine learning models. It provides all the functionalities and operations which make it easy to deploy new algorithms and experiments as requirements and preferences change. It provides out-of-the-box integration with TensorFlow models and can be easily extended to serve other types of models and data. TensorFlow's API is a complete package which is easier to use and read, plus provides helpful operators, debugging and monitoring tools, and deployment features. A few lines are enough to get a first computation running, as the sketch below shows.
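For readers who have not touched TensorFlow yet, here is a minimal sketch in the TensorFlow 1.x style that was current when this article was written: build a small graph, then execute it in a session. The numbers are arbitrary.

    import tensorflow as tf

    # Build the graph: a 2x2 matrix multiplied by a 2x1 vector.
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    x = tf.constant([[1.0], [0.5]])
    y = tf.matmul(a, x)

    # Execute the graph in a session.
    with tf.Session() as sess:
        print(sess.run(y))  # the product is [[2.], [5.]]

The same define-then-run graph is what TensorFlow Serving exports for production, and what the distributed engine partitions across devices and servers.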
These qualities have led to the growing use of the TensorFlow library as a complete package within the ecosystem by an emerging body of students, researchers, developers, and production engineers from various fields who are gravitating towards artificial intelligence.

There is a TensorFlow for web, mobile, edge, embedded, and more

TensorFlow provides a range of services and modules within its existing ecosystem, making it one of the groundbreaking end-to-end tools for state-of-the-art deep learning.

TensorFlow.js for machine learning on the web

A JavaScript library for training and deploying machine learning models in the browser. It provides flexible and intuitive APIs to build and train new and pre-existing models from scratch, right in the browser or under Node.js.

TensorFlow Lite for mobile and embedded ML

A lightweight TensorFlow solution used for mobile and embedded devices. It is fast, since it enables on-device machine learning inference with low latency, and it supports hardware acceleration with the Android Neural Networks API. Future releases of TensorFlow Lite will bring more built-in operators and performance improvements, and will support more models to simplify the developer experience of bringing machine learning to mobile devices.

TensorFlow Hub for reusable machine learning

A library used extensively to reuse machine learning models: you can apply transfer learning by reusing parts of existing models.

TensorBoard for visual debugging

While training a complex neural network model, the computations you use in TensorFlow can be very confusing. TensorBoard makes it very easy to understand and debug your TensorFlow programs in the form of visualizations, allowing you to easily inspect and understand your TensorFlow runs and graphs.

Sonnet

Sonnet is a DeepMind library, built on top of TensorFlow, that is extensively used to build complex neural network models.

All of these factors have made the TensorFlow library immensely appealing for building a wide spectrum of machine learning and deep learning projects. The tool has become a preferred choice for everyone from space research giant NASA and other government agencies to an impressive roster of private sector giants.

Road ahead for TensorFlow

TensorFlow is no doubt better marketed than the other deep learning frameworks, and its community appears to be moving very fast. In any given hour, there are approximately 10 people around the world contributing to or improving the TensorFlow project on GitHub. TensorFlow dominates the field with the largest active community. It will be interesting to see what new advances TensorFlow and other utilities make possible for the future of our digital world. Continuing the recent trend of rapid updates, the TensorFlow team is making sure they address all the current and active challenges faced by contributors and developers building machine learning and deep learning models. TensorFlow 2.0 will be a major update; we can expect a release candidate by early March next year, with a preview version of this major milestone expected to land later this year. The major focus will be on ease of use and additional support for more platforms and languages, and eager execution will be the central feature of TensorFlow 2.0. This breakthrough version will add more functionalities and operations to handle current research areas such as reinforcement learning and GANs, and to build advanced neural network models more efficiently.
Google will continue to invest in and upgrade the existing TensorFlow ecosystem. According to Google's CEO, Sundar Pichai, "artificial intelligence is more important than electricity or fire." TensorFlow is the solution they have come up with to bring artificial intelligence into reality and provide a stepping stone to revolutionize humankind.

Read more:

The 5 biggest announcements from TensorFlow Developer Summit 2018
The Deep Learning Framework Showdown: TensorFlow vs CNTK
Tensor Processing Unit (TPU) 3.0: Google's answer to cloud-ready Artificial Intelligence