
Tech Guides - Artificial Intelligence

170 Articles

13 reasons why Exit Polls get it wrong sometimes

Sugandha Lahoti
13 Nov 2017
7 min read
An exit poll, as the name suggests, is a poll taken immediately after voters exit the polling booth. Private companies working for newspapers or media organizations conduct these polls and are popularly known as pollsters. Once the data is collected, data analysis and estimation are used to predict the winning party and the number of seats captured. Turnout models, built using techniques such as logistic regression or random forests, are used to predict turnout in the exit poll results.

Exit polls depend on sampling, so a margin of error always exists. The margin of error describes how close pollsters expect their estimate to be to the true population value. Normally, a margin of error of plus or minus 3 percentage points is acceptable. In recent times, however, there have been instances where the poll average was off by much more than that. Let us analyze some of the reasons why exit polls can get their predictions wrong.

1. Sampling inaccuracy/quality

Exit polls depend on the sample size, i.e. the number of respondents or the number of precincts chosen. Incorrectly estimating this can widen the error margin. The quality of the sample data also matters: whether the selected precincts are representative of the state, whether the polled audience in each precinct represents the whole, and so on.

2. The model did not consider multiple turnout scenarios

Voter turnout refers to the percentage of eligible voters who cast a vote during an election. Pollsters often misestimate how many people will actually vote out of the total eligible population, and they frequently base their turnout prediction on past trends. However, voter turnout depends on many factors. For example, some voters might not turn up out of indifference or a perception that their vote will not count, which is not true. In such cases, pollsters adjust the weighting to reflect high- or low-turnout conditions while keeping the total turnout count in mind. Observations taken during a low turnout are also considered and the weights adjusted accordingly. In short, pollsters try their best to stay faithful to the original data.

3. The model did not consider past patterns

Pollsters can also err by not delving into the past. They can gauge current turnout rates by taking into account turnout in previous presidential or midterm elections. Although one may assume that the turnout percentage has been stable over the years, a check on past voter turnout is a must.

4. The model was not recalibrated for the year and timing of the election, such as odd-year midterms

Timing is a crucial factor in getting people to vote. At times, certain social issues are far more hyped and talked about than the election itself. For instance, news of the Ebola virus outbreak in Texas was more prominent than news about the candidates standing in the 2014 midterm elections. Another example would be an election day set on a Friday versus any other weekday.

5. Number of contestants

Everyone has a personal favorite. In cases where there are just two contestants, it is straightforward to arrive at a clear winner. For pollsters, it is easier to predict votes when the whole world is talking about the race and they know which candidate is most talked about. As the number of candidates increases, carrying out an accurate survey becomes more challenging: pollsters have to reach out to more respondents to conduct the survey effectively.

6. Swing voters/undecided respondents

Another possible explanation for discrepancies between poll predictions and the outcome is a large proportion of undecided voters in the poll samples. Possible solutions include asking relative questions instead of absolute ones, and allotting undecided voters in proportion to party support levels while making estimates.

7. Number of down-ballot races

Sometimes a popular party leader helps attract votes to a less popular candidate of the same party. This is the down-ballot effect. At times, down-ballot candidates may receive more votes than party-leader candidates, even when third-party candidates are included. Down-ballot outcomes also tend to be influenced by turnout for the races at the top of the ballot. So the number of down-ballot races needs to be taken into account.

8. The cost incurred to commission a quality poll

A sizeable capital investment is required to commission a quality poll. The cost depends on the sample size (the number of people interviewed), the length of the questionnaire (the longer the interview, the more expensive it becomes), and the time within which interviews must be conducted, among other factors. Hiring a polling firm or including cell phones in the survey adds to the expense.

9. Over-relying on historical precedence

Historical precedence is an estimate of the type of people who have shown up previously in a similar type of election. This precedent should be taken into consideration for better estimation of election results, but care should be taken not to over-rely on it.

10. Effect of statewide ballot measures

Poll estimates also depend on state and local governments. Certain issues are pushed by local ballot measures, yet some voters feel that power over specific issues should belong exclusively to state governments, which causes opposition to local ballot measures in some states. These dynamics should be taken into account for better result prediction.

11. Oversampling due to factors such as faulty survey design or respondents' willingness/unwillingness to participate

Exit polls may also oversample certain voters for many reasons. One example relates to US voters with cultural ties to Latin America: although more than one-fourth of Latino voters prefer speaking Spanish to English, exit polls are almost never offered in Spanish. This can oversample English-speaking Latinos.

12. Social desirability bias in respondents

People may not always tell the truth about who they voted for. When asked by pollsters, they are likely to place themselves on the safer side, as voting is a sensitive topic. Voters may tell pollsters that they voted for a minority candidate when they actually voted against that candidate. Social desirability bias is not necessarily tied to race or gender; people simply like to be liked, and like to be seen as doing what everyone else is doing or what the "right" thing to do is. In other words, they play safe. Brexit polling, for instance, showed strong signs of social desirability bias.

13. The spiral of silence theory

People may not reveal their true thoughts to news reporters if they believe the media has an inherent bias. Voters may not declare their stand publicly for fear of reprisal or isolation, choosing to remain silent instead. This too can hinder pollsters' estimates.

The above is only a shortlist of a long list of reasons why exit poll results must be taken with a pinch of salt. Even with all its shortcomings, the striking feature of an exit poll is that rather than predicting a future action, it records an action that has just happened, so you rely on present indicators rather than ambiguous historical data. Exit polls are also a cost-effective way of obtaining very large samples. If they are conducted properly, keeping in mind the points described above, they can predict election results with greater reliability.
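As a footnote on the sampling point above, here is a minimal Python sketch of how sample size drives the margin of error. The vote share and sample sizes are made-up values, and the formula is the standard normal approximation for a simple random sample; real exit polls use cluster designs, so their effective margins are larger.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error (normal approximation) for a vote share p
    estimated from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Made-up example: a candidate polling at 52% across different sample sizes
for n in (400, 1000, 2500, 10000):
    moe = margin_of_error(0.52, n)
    print(f"n={n:>6}: 52% +/- {moe * 100:.1f} points")
```

Even at 2,500 respondents the margin is roughly 2 points, which is why a poll average off by more than 3 points usually signals a problem beyond pure sampling noise.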


Know Your Customer: Envisaging customer sentiments using Behavioral Analytics

Sugandha Lahoti
13 Nov 2017
6 min read
“All the world’s a stage and the men and women are merely players.” Shakespeare may have considered men and women mere players, but as large numbers of users connect through smart devices and the online world, these men and women—your customers—become your most important assets. Knowing your customer and envisaging their sentiments using behavioral analytics has therefore become paramount.

Behavioral analytics: Tracking user events

Say you order a pizza through an app on your phone. After customizing the crust size, type, and ingredients, you land in the payment section. Suppose that, instead of paying, you abandon the order altogether. Immediately you get an SMS and an email alerting you that you are just a step away from buying your choice of pizza. So how does this happen? Behavioral analytics runs in the background here. By tracking user navigation, it prompts the user to complete an order or offers a suggestion.

The rise of smart devices has enabled almost everything to transmit data. Most of this data is captured between sessions of user activity and arrives in raw form. By user activity we mean social media interactions, the amount of time spent on a site, the user's navigation path, click activity, responses to changes in the market, purchasing history, and much more. Some form of understanding is therefore required to make sense of this raw, scrambled data and derive definite patterns. Here is where behavioral analytics steps in. It goes through a user's entire e-commerce journey and focuses on understanding the what and how of their activities. Based on this, it predicts their future moves, which in turn generates opportunities for businesses to become more customer-centric.

Why Behavioral analytics over traditional analytics

Earlier analytical tools lacked a single architecture and a simple workflow. Although they assisted with tracking clicks and page loads, they required a separate data warehouse and separate visualization tools, creating an unstructured workflow. Behavioral analytics goes a step beyond standard analytics by combining rule-based models with deep machine learning: the former tells what users do, the latter reveals the how and why of their actions. These tools keep track of where customers click, which pages are viewed, how many continue down the process, and who abandons the site at which step, among other things.

Unlike traditional analytics, behavioral analytics aggregates data from diverse sources (websites, mobile apps, CRM, email marketing campaigns, etc.) collected across various sessions. Cloud-based behavioral analytics platforms can intelligently integrate and unify all sources of digital communication into a complete picture, offering a seamless and structured view of the entire customer journey. Such platforms typically capture real-time data in raw format, then automatically filter and aggregate it into a structured dataset. They also provide visualization tools to observe this data while predicting trends. The aggregation is done in such a way that the data can be queried in an unlimited number of ways for the business to utilize. They are therefore helpful in analyzing retention and churn trends, tracing abnormalities, performing multidimensional funnel analysis, and much more. Let's look at some specific use cases across industries where behavioral analytics is heavily used.

Analysing customer behavior in E-commerce

E-commerce platforms are at the top of the list of sectors that can benefit from mapping their digital customer journey. Analytic strategies can track whether a customer spends more time on product page X than on product page Y by displaying views and data points of customer activity in a structured format. This enables businesses to resolve issues that may hinder a page's popularity, such as slow-loading pages or expensive products. By tracking a user session, right from when they enter a platform to the point a sale is made, behavioral analytics predicts future customer behavior and business trends. Some of the parameters considered include the number of customers viewing reviews and ratings before adding an item to their cart, what similar products the customer sees, and how often items in the cart are deleted or added. Behavioral analytics can also identify top-performing products and help build powerful recommendation engines by analyzing changes in customer behavior across demographic conditions or regional differences, which helps achieve customer-to-customer personalization. KISSmetrics is a powerful analytics tool that provides detailed customer behavior reports for businesses to slice through and find meaningful insights. RetentionGrid provides color-coded visualizations and multiple strategies tailor-made for customers, based on customer segmentation and demographics.

How can online gaming benefit from behavioral analysis

Online gaming is a surging community with millions of daily active users. Marketers are always looking for ways to acquire customers and retain users. Monetization is another important focal point: the goal is not only to get more users to play, but also to pay. Behavioral analytics keeps track of a user's gaming session, such as skill levels, amount of time spent at different stages, favorite features and activities within gameplay, and drop-off points from the game. At an overall level, it tracks active users, game logs, demographic data, and social interaction between players over various community channels. On the basis of this data, a visualization graph is generated which can be used to drive marketing strategies such as identifying features that work, how to attract new players, or how to keep existing players engaged, helping increase player retention and assisting game developers and marketers in shipping new versions based on players' reactions. Behavioral analytics can also identify common characteristics of users. It helps in understanding what gets a user to play longer and in identifying the group of users most likely to pay, based on common characteristics. All this helps gaming companies implement the right advertising and placement of content. Mr Green's casino launched a Green Gaming tool to predict a person's playing behavior; on the basis of a gamer's risk-taking behavior, it generates personalized insights about their gaming. Nektan PLC has partnered with machine learning customer insights firm Newlette, whose models analyze player behavior based on individual playing styles, helping increase player engagement and reduce bonus costs by providing players with optimum offers and bonuses.

The applications of behavioral analytics are not limited to e-commerce or gaming alone. The security and surveillance domain uses behavioral analytics to conduct risk assessments of organizational resources and to alert against individual entities that pose a potential threat, by sifting through large amounts of company data and identifying patterns that signal irregularity or change. End-to-end monitoring of customers also helps app developers track customer adoption of new features. It can also report the exact point where customers drop off and help avoid expensive technical issues. All these benefits highlight how customer tracking and knowing user behavior is an essential tool for driving a business forward. As Leo Burnett, the founder of a prominent advertising agency, said: “What helps people, helps business.”
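To make the funnel analysis mentioned above concrete, here is a minimal pandas sketch that turns a raw click-event stream into a conversion funnel; the event names, users, and data are made up for illustration.

```python
import pandas as pd

# Made-up raw event stream: one row per user action
events = pd.DataFrame([
    {"user": "u1", "event": "view_product"},
    {"user": "u1", "event": "add_to_cart"},
    {"user": "u1", "event": "payment"},
    {"user": "u2", "event": "view_product"},
    {"user": "u2", "event": "add_to_cart"},   # u2 abandons at the payment step
    {"user": "u3", "event": "view_product"},
])

funnel_steps = ["view_product", "add_to_cart", "payment"]

# Count distinct users who reached each step, in funnel order
reached = [events.loc[events.event == step, "user"].nunique() for step in funnel_steps]
funnel = pd.DataFrame({"step": funnel_steps, "users": reached})
funnel["conversion_from_top"] = funnel["users"] / funnel["users"].iloc[0]
print(funnel)
```

A real platform aggregates millions of such events across sessions and sources, but the drop-off logic it reports back to the business is essentially this.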


Facelifting NLP with Deep Learning

Savia Lobo
10 Nov 2017
7 min read
Over recent years, the world has witnessed a global move towards digitization. Massive improvements in computational capabilities have been made, thanks to the boom in the AI chip market as well as computation farms. These have resulted in data abundance and fast data-processing ecosystems accessible to everyone - important pillars for the growth of AI and allied fields. Terms such as 'machine learning' and 'deep learning' in particular have gained a lot of traction in the data science community, mainly because of the multitude of domains they lend themselves to. Along with image processing, computer vision, and games, one key area transformed by machine learning, and more recently by deep learning, is Natural Language Processing, simply known as NLP.

Human language is a heady concoction of otherwise incoherent words and phrases with more exceptions than rules, full of jargon and words with different meanings. Making machines comprehend a human language in all its glory, not to mention its users' idiosyncrasies, can be quite a challenge. Then there is the matter of there being thousands of languages, dialects, accents, and slangs. Yet it is a challenge worth taking up, mainly because language finds its application in almost everything humans do, from web search to e-mail to content curation, and more. According to Tractica, a market intelligence firm, “Natural Language Processing market will reach $22.3 Billion by 2025.”

NLP Evolution - From Machine Learning to Deep Learning

Before deep learning turned NLP into a smarter version of a conversational machine, machine learning based NLP systems were used to process natural language. These systems were trained on models which were shallow in nature, as they were often based on incomplete and time-consuming hand-crafted features. They included algorithms such as support vector machines (SVM) and logistic regression. These models found their applications in tasks such as spam detection in emails, grouping together similar words in a document, spinning articles, and much more.

ML-based NLP systems relied heavily on the quality of the training data. Because of the limited nature of the capabilities offered by machine learning, the classical NLP model fell short when it came to understanding high-level text and speech from humans. This led to the conclusion that machine learning algorithms can handle only narrow features and as such cannot perform the high-level reasoning that human conversations often involve. Also, as the scale of the data grew, machine learning could not effectively tackle the NLP problems of efficiently training the models and optimizing them. Here is where deep learning proves to be a stepping stone.

Deep learning includes artificial neural networks (ANNs) that function similarly to neurons in a human brain, which is why they are considered to emulate human thinking remarkably well. Deep learning models perform significantly better as the quantity of data fed to them increases. For instance, Google's Smart Reply can generate relevant responses to the emails received by the user. This system uses a pair of RNNs, one to encode the incoming mail and the other to predict relevant responses. With the incorporation of deep learning in NLP, the need for feature engineering is greatly reduced, saving time - a major asset. This means machines can be trained to understand languages other than English without complex, custom feature engineering, by applying deep neural network models. In spite of the constant changes happening to language, the quest to make machines more and more friendly to humans is made possible using deep learning.

Key Deep Learning techniques used for NLP

NLP-based deep learning models make use of word embeddings, pre-trained on a large corpus or collection of unlabeled data. With advancements in word embedding techniques, the ability of machines to derive deeper insights from language has increased. To do so, NLP uses a technique called word2vec that converts a given word into a vector so that machines can understand it better. The continuous bag-of-words and skip-gram models, which are used for learning word vectors, help in capturing the sequential patterns within sentences. The latter predicts the surrounding words using the center word as input and is used on large datasets, whereas the former does the reverse. Similarly, GloVe also computes vector representations, but using a technique based on matrix factorization (a short word2vec sketch follows at the end of this piece).

A disadvantage of the word embedding approach is that it cannot understand phrases and sentences. As mentioned earlier, the bag-of-words model converts each word into a corresponding vector. This can simplify many problems, but it can also change the context of the text. For instance, it may not collectively understand idioms or sub-phrases such as "break a leg". Recognizing indicative or negative words such as 'not' or 'but', which attach semantic meaning to a word, is also difficult for the model. One partial remedy is 'negative sampling', i.e., a frequency-based sampling of negative terms while training the word2vec model.

This is where neural networks come into play. CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks) are the two widely used neural network models in NLP. CNNs are good performers for text classification; however, the downside is that they are poor at learning sequential information from text. Expresso, built on Caffe, is one of the many tools used to develop CNNs. RNNs are preferred over CNNs for NLP as they allow sequential processing. For example, an RNN can differentiate between the words 'fan' and 'fan-following'. This means RNNs are better equipped to handle complex dependencies and unbounded texts. Also, unlike CNNs, RNNs can handle input contexts of arbitrary length because of their flexible computational steps. All of the above highlight why RNNs have better modeling potential than CNNs as far as NLP is concerned.

Although RNNs are the preferred choice, they have a limitation: the vanishing gradient problem. This problem can be addressed using LSTMs (Long Short-Term Memory networks), which help in understanding the association of words within a text and back-propagate an error through many steps. An LSTM includes a forget gate, which discards learned weights when carrying them forward adds negligible value, keeping long-term dependencies manageable. Other than LSTMs, GRUs (Gated Recurrent Units) are also widely used to tackle the vanishing gradient problem.

Current Implementations

Deep learning is good at identifying patterns within unstructured data, and social media is a major dump of unstructured content - a goldmine for human sentiment analysis. Facebook uses DeepText, a deep learning based text understanding engine, which can understand the textual content of thousands of posts with near-human accuracy. CRM systems strive to maximize customer lifetime value by understanding what customers want and then taking appropriate measures. TalkIQ uses neural-network-based text analysis and deep learning models to extract meaning from the conversations that organizations have with their customers in order to gain deeper insights in real time. Google's Cloud Speech API helps convert audio to text; it can recognize audio in 110 languages. Other implementations include automated text summarization for summarizing the concepts within a huge document, and speech processing for converting voice requests into search recommendations, among others. Many other areas such as fraud detection tools, UI/UX, and IoT devices that make use of speech and text analytics can perform markedly better by adopting deep learning neural network models.

The future of NLP with Deep Learning

With the advancements in deep learning, machines will be able to understand human communication in a much more comprehensive way. They will be able to extract complex patterns and relationships and decipher the variations and ambiguities in various languages. This will enable some interesting use cases, smarter chatbots being a very important one. Understanding complex and longer customer queries and giving out accurate answers is what we can expect from these chatbots in the near future. The advancements in NLP and deep learning could also lead to expert systems which perform smarter searches, allowing applications to search for content using informal, conversational language. Understanding and interpreting unindexed, unstructured information, which is currently a challenge for NLP, should become possible as well. The possibilities are definitely there - how NLP evolves by blending itself with the innovations in Artificial Intelligence is all that remains to be seen.
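Returning to the word embedding techniques described above, here is a minimal gensim sketch contrasting the skip-gram and CBOW choices and enabling negative sampling. Parameter names follow gensim 4.x, and the toy corpus is made up purely for illustration.

```python
from gensim.models import Word2Vec

# Tiny made-up corpus: each sentence is a list of tokens
corpus = [
    ["exit", "polls", "predict", "election", "results"],
    ["deep", "learning", "transforms", "natural", "language", "processing"],
    ["word", "embeddings", "map", "words", "to", "vectors"],
]

# sg=1 selects the skip-gram model (sg=0 would be CBOW);
# negative=5 enables negative sampling with 5 noise words per positive pair.
model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the word vectors
    window=3,         # context window size
    min_count=1,      # keep every token in this tiny corpus
    sg=1,
    negative=5,
    epochs=50,
)

print(model.wv["language"][:5])           # first few components of one vector
print(model.wv.most_similar("language"))  # nearest neighbours in embedding space
```

On a corpus this small the neighbours are meaningless; the point is only to show where the skip-gram/CBOW and negative sampling choices appear in practice.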


6 use cases of Machine Learning in Healthcare

Sugandha Lahoti
10 Nov 2017
7 min read
While hospitals have sophisticated processes and highly skilled administrators, management can still be an administrative nightmare for already time-starved healthcare professionals. A sprinkle of automation can do wonders here. It could free up a practitioner's invaluable time and thereby allow them to focus on tending to critically ill patients and complex medical procedures. At the most basic level, machine learning can mechanize routine tasks such as documentation, billing, and regulatory processes. It can also provide ways and tools to diagnose and treat patients more efficiently. However, these tasks only scratch the surface. Machine learning is here to revolutionize healthcare and allied industries such as pharma and medicine. Below are some ways it is being put to use in these domains.

Helping with disease identification and drug discovery

Healthcare systems generate copious amounts of data and use them for disease prediction. However, the software needed to generate meaningful insights from this unstructured data is often not in place, so drug and disease discovery end up taking time. Machine learning algorithms can discover signatures of diseases at rapid rates by allowing systems to learn and make predictions based on previously processed data. They can also be used to determine which chemical compounds could work together to aid drug discovery, eliminating the time-consuming process of experimenting with and testing millions of compounds. With faster discovery of diseases, the chances of detecting symptoms earlier and the probability of survival increase, as does the range of available treatment options. IBM has collaborated with Teva Pharmaceutical to discover new treatment options for respiratory and central nervous system diseases using machine learning techniques, such as predictive and visual analytics, that run on the IBM Watson Health Cloud. To gain more insights on how IBM Watson is changing the face of healthcare, check this article.

Enabling precision medicine

Precision medicine revolves around healthcare practices specific to a particular patient. This includes analyzing a person's genetic information, health history, environmental exposure, and needs and preferences to guide diagnosis and subsequent treatment. Here, machine learning algorithms are used to sift through vast databases of patient data to identify factors, such as genetic history and predisposition to diseases, that could strongly determine treatment success or failure. ML techniques in precision medicine exploit molecular and genomic data to assist doctors in directing therapies to patients and shed light on disease mechanisms and heterogeneity. They also predict what diseases are likely to occur in the future and suggest methods to avoid them. Cellworks, a life sciences technology company, offers a SaaS-based platform for generating precision medicine products. The platform analyzes the genomic profile of the patient and then provides patient-specific reports for improved diagnosis and treatment.

Assisting radiology and radiotherapy

CT and MRI scans for radiological diagnosis and interpretation are burdensome, laborious, and time-consuming. They involve segmentation—differentiating between healthy and diseased tissue—which, when done manually, carries a real risk of errors and misdiagnosis. Machine learning algorithms can speed up the segmentation process while also increasing accuracy in radiotherapy planning. ML can give physicians information for better diagnostics, which helps in locating tumors accurately. It can also predict radiotherapy response to help create a personalized treatment plan. Apart from these, ML algorithms find use in medical image analysis because they learn from examples: classification techniques analyze images together with available clinical information to generate the most likely diagnosis. Deep learning can also be used for detecting lung cancer nodules in early screening CT scans and displaying the results in clinically useful ways. Google's machine learning division, DeepMind, is automating radiotherapy treatment for head and neck cancers using scans from almost 700 diagnosed patients: an ML algorithm checks the reports of symptomatic patients against these previous scans to help physicians develop a suitable treatment plan. Arterys, a cloud-based platform, automates cardiac analysis using deep learning.

Providing Neurocritical Care

A large number of neurological diseases develop gradually or in stages, so the decay of the brain happens over time. Traditional approaches to neurological care, such as peak activation, EEG epileptic spikes, and the pronator drift test, are not accurate enough to diagnose and classify neurological and psychiatric disorders, because they are typically used for assessing end results rather than for progressive analysis of how a brain disease develops. Moreover, timely, personalized neurological treatment and diagnosis rely heavily on the constant availability of an expert. Machine learning algorithms can advance detection and prediction by learning how the brain progressively develops into these conditions. Deep learning techniques are applied in neuroimaging to detect abstract and complex patterns from single-subject data in order to detect and diagnose brain disorders. Machine learning techniques such as SVM, RBFN, and RF are combined with pronator drift tests (PDT) to detect stroke symptoms, based on quantification of proximal arm weakness using inertial sensors and signal processing. Machine learning algorithms can also be used to detect signs of dementia before its onset. The Douglas Mental Health University Institute uses PET scans to train ML algorithms to spot signs of dementia by analyzing them against scans of patients who have mild cognitive impairment; the scans of symptomatic patients are then run through the trained algorithm to predict the likelihood of dementia.

Predicting epidemic outbreaks

Epidemic prediction traditionally relies on manual accounting. This includes self-reports or aggregated information from healthcare services, such as reports by health protection agencies like the CDC, NHIS, or the National Immunization Survey. These are time-consuming and error-prone, which makes predicting and prioritizing outbreaks challenging. ML algorithms can automatically perform analysis, improve calculations, and verify information with minimal human intervention. Machine learning techniques like support vector machines and artificial neural networks can predict the epidemic potential of a disease and provide alerts for disease outbreaks. They do this using data collected from satellites, real-time social media updates, historical information on the web, and other sources. They also use geospatial data such as temperature, weather conditions, wind speed, and other data points to predict the magnitude of the impact an epidemic could have in a particular area and to recommend measures for preventing and containing it early on. AIME, a medical startup, has come up with an algorithm to predict the outcome and even the epicenter of epidemics such as dengue fever before they occur.

Better hospital management

Machine learning can transform traditional hospital management systems by envisioning hospitals as digital, patient-centric care centers. This includes automating routine tasks such as billing, admission and clearance, and monitoring patients' vitals. With administrative tasks out of the way, hospital authorities can fully focus on the care and treatment of patients. ML techniques such as computer vision can feed all of a patient's vital signs directly into the EHR from the monitoring devices. Smart tracking devices are also used on patients to provide their real-time whereabouts. Predictive analysis techniques provide a continuous stream of real-time images and data; this analysis can sense risk and prioritize activities for the benefit of all patients. ML can also automate non-clinical functions, including pharmacy, laundry, and food delivery. The Johns Hopkins Hospital has its own command center that uses predictive analytics for efficient operational flow.

Conclusion

The digital health era focuses on health and wellness rather than diseases. The incorporation of machine learning in healthcare provides an improved patient experience and better public health management, and it reduces costs by automating manual labour. The next step in this amalgamation is a successful collaboration of clinicians and doctors with machines. This would bring about a futuristic health revolution with improved, precise, and more efficient care and treatment.
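As a toy illustration of the classification idea behind diagnosis from clinical measurements (mentioned in the disease identification and radiology sections above), here is a minimal scikit-learn sketch using the library's bundled breast cancer dataset. It is an illustrative sketch only, not a clinical model.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Bundled dataset: 30 numeric features per tumour sample, labelled benign/malignant
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A random forest stands in for the "learn signatures from past cases" step
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Real clinical systems add far more: imaging pipelines, genomic features, calibration, and regulatory validation, but the supervised learning core is the same.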


Generative Adversarial Networks (GANs): The next milestone In Deep Learning

Savia Lobo
09 Nov 2017
7 min read
With the rise in popularity of deep learning as a concept and a paradigm, neural networks are captivating the interest of machine learning enthusiasts and developers alike by being able to replicate the human brain for efficient predictions, image recognition, text recognition, and much more. However, can these neural networks do something more, or are they limited to predictions? Can they generate new data by learning from a training dataset? Generative adversarial networks (GANs) are here to answer these questions.

So, what are GANs all about?

Generative adversarial networks follow unsupervised machine learning, unlike traditional neural networks. When a neural network is taught to identify a bird, it is fed a huge number of images including birds as training data, and each picture is labeled before it is used to train the models. This labeling of data is both costly and time-consuming. So how can you train your neural networks with less data? GANs are a great help here. They offer a way to train deep learning algorithms by slashing the amount of data required to train the neural network models, with no labeling of data required. The architecture of a GAN includes a generative network model (G), which produces fake images or text, and an adversarial network model--also known as the discriminator model (D)--that distinguishes between real and fake productions by comparing the content sent by the generator with the training data it has. Both are trained separately by feeding each of them training data and a competitive goal.

(Image source: Learning Generative Adversarial Networks)

GANs in action

GANs were introduced by Ian Goodfellow, an AI researcher at Google Brain. He compares the generator and the discriminator models to a counterfeiter and a police officer. “You can think of this being like a competition between counterfeiters and the police,” Goodfellow said. “Counterfeiters want to make fake money and have it look real, and the police want to look at any particular bill and determine if it’s fake.” Both the discriminator and the generator are trained simultaneously to create a powerful GAN architecture. Let's peek into how a GAN model is trained (a minimal training-loop sketch follows at the end of this piece):

- Specify the problem statement and the type of manipulation the GAN model is expected to carry out.
- Collect data based on the problem statement. For instance, for image manipulation, a large number of images need to be collected.
- The discriminator is fed images: some from the training set and some produced by the generator.
- The discriminator can be termed 'successfully trained' if it returns 1 for a real image and 0 for a fake one.
- The goal of the generator is to successfully fool the discriminator into outputting 1 for each of its generated images.
- At the beginning of training, the discriminator's loss--its ability to differentiate real from fake images or data--is minimal. As training advances, the generator's loss decreases and the discriminator's loss increases, which means the generator is now able to generate realistic images.

Real world applications of GANs

The basic application of GANs is generating photo-realistic images, but there is more to what GANs can do. Some of the areas where GANs are heavily used include:

Image Synthesis: Image synthesis is one of the primary use cases of GANs. Here, multilayer perceptron models are used in both the generator and the discriminator to generate photo-realistic images based on the training dataset of images.

Text-to-image synthesis: GANs can also be used for text-to-image synthesis, for example generating a photo-realistic image from a caption. To do this, a dataset of images with their associated captions is given as training data. The dataset is first encoded using a hybrid neural network called a character-level convolutional recurrent neural network, which creates a joint representation of both in a multimodal space for the generator and the discriminator. Both are then trained on this encoded data.

Image Inpainting: Images that have missing parts or too much noise are given as input to the generator, which produces a near-real image. For instance, using the TensorFlow framework, DCGANs (Deep Convolutional GANs) can generate a complete image from a broken one. DCGANs are a class of CNN-based GANs that stabilize training for efficient use.

Video generation: Static images can be transformed into short scenes with plausible motion using GANs. These GANs use scene dynamics to add motion to static images. The videos generated by these models are not real, but illusions.

Drug discovery: Unlike text and image manipulation, Insilico Medicine uses GANs to build an artificially intelligent drug discovery mechanism. Here, the generator is trained to predict a drug for a disease that was previously incurable, and the discriminator's task is to determine whether the drug would actually cure the disease.

Challenges in training a GAN

Whenever a competition is laid out, there has to be a distinct winner. In GANs, two models compete against each other, so training them can be difficult. Here are some challenges faced while training GANs:

- Fair training: While training both models, care has to be taken that the discriminator does not overpower the generator. If it does, the generator will fail to train effectively. On the other hand, if the discriminator is too lenient, it will allow any illegitimate content to be generated.
- Failure to understand how many objects are present in an image and their dimensions. This usually occurs during the initial learning phase. For instance, GANs at times output an image with more than two eyes, which is not normal in the real world. Sometimes they present a 3D scene as if it were 2D, because they cannot differentiate between the two.
- Failure to understand the holistic structure: GANs can fail to produce globally consistent images. They may generate an image which is completely at odds with how things look in reality, for instance a cat with an elongated body, or a cow standing on its hind legs.
- Mode collapse: this occurs when a dataset with low variation is processed by a GAN. The real world includes complex and multimodal distributions, where data may have different concentrated sub-groups. The problem is that the generator may learn to yield images from only one such sub-group, resulting in inaccurate output, i.e. a mode collapse.

To tackle these and other challenges that arise while training GANs, researchers have come up with DCGANs (Deep Convolutional GANs), Wasserstein GANs, and CycleGANs to ensure fair training, enhance accuracy, and reduce training time. AdaGANs have been proposed to eliminate the mode collapse problem.

Conclusion

Although the adoption of GANs is not as widespread as one might imagine, there is no doubt that they could change the way unsupervised machine learning is used today. It is not too far-fetched to think that their implementation in the future could find practical applications not just in image or text processing, but also in domains such as cryptography and cybersecurity. Innovation in developing newer GAN models with improved accuracy and shorter training times is the key here - but it is something surely worth keeping an eye on.
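Here is the minimal training-loop sketch referred to above, written in PyTorch. It assumes a toy task (generating samples from a 1-D Gaussian) rather than images, so it is an illustration of the generator/discriminator game only, not the setup used in any of the systems mentioned.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "real" data: samples from a 1-D Gaussian with mean 4 and std 1.25
def real_batch(n):
    return torch.randn(n, 1) * 1.25 + 4.0

latent_dim = 8
G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))        # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

batch = 64
ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

for step in range(3000):
    # Train the discriminator: real samples -> 1, generated samples -> 0
    fake = G(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(D(real_batch(batch)), ones) + loss_fn(D(fake), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator output 1 for fakes
    fake = G(torch.randn(batch, latent_dim))
    g_loss = loss_fn(D(fake), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, latent_dim))
# The generated distribution should drift towards mean ~4 and std ~1.25
print("generated mean/std:", samples.mean().item(), samples.std().item())
```

Swapping the toy data for images and the small MLPs for convolutional networks is essentially what turns this loop into a DCGAN.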


8 Myths about RPA (Robotic Process Automation)

Savia Lobo
08 Nov 2017
9 min read
Many say we are on the cusp of the fourth industrial revolution, one that promises to blur the lines between the real, the virtual, and the biological worlds. Among its many trends, Robotic Process Automation (RPA) is one of the buzzwords surrounding the hype. Although poised to be a $6.7 trillion industry by 2025, RPA is shrouded in just as much fear as it is brimming with potential. We have heard time and again how automation can improve productivity, efficiency, and effectiveness while conducting business in transformative ways. We have also heard how automation, and machine-driven automation in particular, can displace humans and thereby lead to a dystopian world. As humans, we make assumptions based on what we see and understand. But sometimes those assumptions become so ingrained that they evolve into myths which many start accepting as facts. Here is a closer look at some of the myths surrounding RPA.

1. RPA means robots will automate processes

The term robot evokes a picture of a metal humanoid with stiff joints that speaks in a monotone. RPA does mean robotic process automation, but the robot doing the automation is nothing like the ones we are used to seeing in the movies. These are software robots that perform routine processes within organizations. They are often referred to as virtual workers or a digital workforce, complete with their own identity and credentials. They essentially consist of algorithms programmed by RPA developers to automate mundane business processes: processes that are repetitive, highly structured, fall within a well-defined workflow, consist of a finite set of tasks or steps, and may often be monotonous and labor-intensive.

Let us consider a real-world example: automating the invoice generation process (a small sketch of such a flow follows at the end of this piece). The RPA system would run through all the emails in the system and download the PDF files containing details of the relevant transactions. Then it would fill a spreadsheet with the details and maintain all the records therein. Later, it would log on to the enterprise system and generate the appropriate invoice report for each entry in the spreadsheet. Once the invoices are created, the system would send a confirmation mail to the relevant stakeholders. Here, the RPA user only specifies the individual tasks to be automated, and the system takes care of the rest of the process. So, yes, while it is true that RPA involves robots automating processes, it is a myth that these robots are physical entities or that they can automate all processes.

2. RPA is useful only in industries that rely heavily on software

“Almost anything that a human can do on a PC, the robot can take over without the need for IT department support.” - Richard Bell, former Procurement Director at Averda

RPA is software which can be injected into a business process. Traditional industries such as banking and finance, healthcare, and manufacturing, which have significant routine tasks that depend on software, can benefit from RPA; loan processing and patient data processing are some examples. RPA, however, cannot help with automating the assembly line in a manufacturing unit or with performing regular tests on patients. Even industries that maintain essential daily utilities such as cooking gas, electricity, and telephone services can put RPA to use for generating automated bills, invoices, meter readings, and so on. By adopting RPA, businesses can achieve significant cost savings, operational efficiency, and higher productivity irrespective of the industry they belong to. To leverage the benefits of RPA, rather than understanding the SDLC process, it is important that users have a clear understanding of business workflow processes and domain knowledge. Industry professionals can be easily trained on how to put RPA into practice. The bottom line: RPA is not limited to industries that rely heavily on software to exist, but it can only be used in situations where some form of software is used to perform tasks manually.

3. RPA will replace humans in most frontline jobs

Many organizations employ a large workforce in frontline roles to do routine tasks such as data entry, managing processes, customer support, and IT support. But frontline jobs are just as diverse as the people performing them. Take sales reps, for example. They bring in new business through their expert understanding of the company's products and their potential customer base, coupled with the associated soft skills. Currently, they spend significant time on administrative tasks such as developing and finalizing business contracts, updating the CRM database, and producing daily status reports. Imagine the spike in productivity if these tasks could be taken off the plates of sales reps so they could focus on cultivating relationships and converting leads. By replacing human effort in mundane tasks within frontline roles, RPA can help employees focus on higher-value tasks. In conclusion, RPA will not replace humans in most frontline jobs. It will, however, replace humans in a few roles that are very rule-based and narrow in scope, such as simple data entry operators or basic invoice processing executives. In most frontline roles like sales or customer support, RPA is quite likely to change, at least in some ways, how one sees one's job responsibilities. The adoption of RPA will also generate new job opportunities around the development, maintenance, and sale of RPA-based software.

4. Only large enterprises can afford to deploy RPA

The cost of implementing and maintaining RPA software and training employees to use it can be quite high. This can make it an unfavorable business proposition for SMBs with fairly simple organizational processes and cross-departmental considerations. On the other hand, large organizations with higher revenue-generation capacity, complex business processes, and a large army of workers can deploy an RPA system to automate high-volume tasks quite easily and recover that cost within a few months. It is obvious that large enterprises will benefit from RPA systems due to economies of scale and faster recovery of the investment. SMBs (small and medium-sized businesses) can also benefit from RPA to automate their business processes, but only if they look at RPA as a strategic investment whose cost will be recovered over a longer period of, say, two to four years.

5. RPA adoption should be owned and driven by the organization's IT department

The RPA team handling the automation process need not be from the IT department. The main role of the IT department is to provide the necessary resources for the software to function smoothly. An RPA reliability team, which is trained in using RPA tools, consists of business operations professionals rather than IT professionals. In simple terms, RPA is not owned by the IT department but by the whole business, and it is driven by the RPA team.

6. RPA is an AI virtual assistant specialized to do a narrow set of tasks

An RPA bot performs a narrow set of tasks based on the given data and instructions. It is a system of rule-based algorithms which can be used to capture, process, and interpret streams of data, trigger appropriate responses, and communicate with other processes. However, it cannot learn on its own, which is a key trait of an AI system. Advanced AI concepts such as reinforcement learning and deep learning are yet to be incorporated into robotic process automation systems. Thus, an RPA bot is not an AI virtual assistant like Apple's Siri, for example. That said, it is not impractical to think that in the future these systems will be able to decide the best possible way to execute a business process and learn from their own actions to improve the system.

7. To use RPA software, one needs basic programming skills

Surprisingly, this is not true. Associates who use the RPA system do not need any programming knowledge. They only need to understand how the software works on the front end and how they can assign tasks to the RPA worker for automation. RPA system developers, on the other hand, do require some programming skills, such as knowledge of scripting languages. Today there are various platforms for developing RPA tools, such as UiPath and Blue Prism, which let RPA developers build these systems without much hassle, reducing their coding responsibilities even further.

8. RPA software is fully automated and does not require human supervision

This is a big myth. RPA is often misunderstood as a completely automated system. Humans are indeed required to program the RPA bots, to feed them tasks for automation, and to manage them. The automation factor lies in aggregating and performing various tasks which would otherwise require more than one human to complete. There is also the efficiency factor: RPA systems are fast and almost completely avoid the faults in a process that are otherwise caused by human error. Having a digital workforce in place can be far more profitable than recruiting an equivalent human workforce.

Conclusion

One of the most talked-about areas of technological innovation, RPA is clearly still in its early days and is surrounded by a lot of myths. However, there is little doubt that its adoption will take off rapidly as RPA systems become more scalable, more accurate, and faster to deploy. AI-, cognitive-, and analytics-driven RPA will take it up a notch or two, and help businesses improve their processes even more by taking dull, repetitive tasks away from people. Hype can get ahead of reality, as we have seen quite a few times, but RPA is definitely an area worth keeping an eye on despite all the hype.
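To make the invoice example above concrete, here is a heavily simplified Python sketch of such a flow. The fetch_transaction_emails helper and the data it returns are hypothetical stand-ins for real email reading, PDF parsing, and ERP integration, which commercial RPA platforms handle through their own connectors.

```python
import csv

def fetch_transaction_emails():
    """Hypothetical stand-in: a real bot would read a mailbox and parse PDF attachments."""
    return [
        {"customer": "Acme Ltd", "amount": 1200.00, "reference": "TX-1001"},
        {"customer": "Globex",   "amount":  540.50, "reference": "TX-1002"},
    ]

def write_spreadsheet(rows, path="transactions.csv"):
    """Mirror of the 'fill a spreadsheet and keep the records' step."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["customer", "amount", "reference"])
        writer.writeheader()
        writer.writerows(rows)

def generate_invoice(row):
    """Stand-in for logging into the enterprise system and creating an invoice."""
    return f"INVOICE {row['reference']}: {row['customer']} owes ${row['amount']:.2f}"

def notify(invoice_text):
    """Stand-in for the confirmation mail sent to stakeholders."""
    print("sending confirmation:", invoice_text)

rows = fetch_transaction_emails()
write_spreadsheet(rows)
for row in rows:
    notify(generate_invoice(row))
```

The point of the sketch is the shape of the workflow: each step is deterministic and rule-based, which is exactly the kind of process RPA bots are built to chain together.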

The Deep Learning Framework Showdown: TensorFlow vs CNTK

Aaron Lazar
30 Oct 2017
6 min read
The question several deep learning engineers may ask themselves is: which is better, TensorFlow or CNTK? Well, we're going to answer that question for you, taking you through a closely fought match between the two most exciting frameworks.

So, here we are, ladies and gentlemen, it's fight night and it's a full house. In the Red corner, weighing in at two hundred and seventy pounds of Python and topping out at over ten thousand frames per second; managed by the American tech giant, Google; we have the mighty, the beefy, TensorFlow! In the Blue corner, weighing in at two hundred and thirty pounds of C++ muscle, we have one of the top toolkits that can comfortably scale beyond a single machine. Managed by none other than Microsoft, it's fast, it's furious, it's CNTK, aka the Microsoft Cognitive Toolkit!

And we're into Round One… TensorFlow and CNTK are looking quite menacingly at each other and are raring to take down their opponents. TensorFlow seems pleased that its compile times are considerably faster than its predecessor, Theano. Although it looks like happiness came a tad bit soon: CNTK, light and bouncy on its feet, comes straight out of nowhere with a whopping seventy-thousand-frames-per-second uppercut, knocking TensorFlow to the floor. TensorFlow looks like it's in no mood to give up anytime soon. It makes itself so simple to use and understand that even students can pick it up and start training their own models. This isn't the case with CNTK, which has yet to shed its complexity. On the other hand, CNTK seems to be thrashing TensorFlow in terms of 3D convolution, where CNTK can clearly recognize images from streaming content. TensorFlow also tries its best to run LSTM RNNs, but in vain.

The crowd keeps cheering on… Wait a minute... are they calling out for TensorFlow? Yes they are! There's hardly any cheering for CNTK. This is embarrassing! Looks like its community support can't match up to TensorFlow's. And ladies and gentlemen, that does make a difference - we can see TensorFlow improving on several fronts and gradually getting back in the game! TensorFlow huffs and puffs as it tries to prove that it's not just about deep learning and that it has tools in its pocket that can support other algorithms such as reinforcement learning. It conveniently whips out TensorBoard, and drops CNTK to the floor with its beautiful visualizations. TensorFlow now has the upper hand, tries hard to pin CNTK to the floor, and tries to use its R support to finish it off. But CNTK tactfully breaks loose and leaves TensorFlow on the floor - still not ready to be used in production. And there goes the bell for Round One!

Both fighters look exhausted, but you can see a faint twinkle in TensorFlow's eye, primarily because it survived Round One. Google seems to be working hard to prep it for Round Two, making several improvements in terms of speed and flexibility and, most of all, getting it ready for production. Meanwhile, Microsoft boosts CNTK's spirits with a shot of Python APIs in its blood. As it moves towards version 2.0, there are a lot of improvements to CNTK; Microsoft has ensured that it's not left behind, for example by providing a backend for Keras, which puts it on a par with TensorFlow. Moreover, there seem to be quite a few experimental features it looks ready to enter the ring with, like the Java API for example.

It's the final round and boy, are these two into a serious stare-down! The referee waves them in and off they are. CNTK needs to get back at TensorFlow. Comfortably supporting multiple GPUs and CPUs out of the box, across both the Windows and Linux platforms, it has an advantage over TensorFlow. Is it going to use that trump card? Yes it is! A thousand GPUs and a hundred machines in, and CNTK is raining blows on TensorFlow. TensorFlow clearly drops the ball when it comes to multiple machines, and it rather complicates things. It's high time that TensorFlow turned the tables. Lo and behold! It shows off its mobile deep learning capabilities with TensorFlow Lite, clearly flipping CNTK flat on its back. This is revolutionary and a tremendous breakthrough for TensorFlow! CNTK, however, is clearly the people's choice when it comes to language compatibility. With support for C++, Python, C#/.NET and now Java, it's clearly winning in this area. Round Two is coming to an end, ladies and gentlemen, and it's a neck-and-neck battle out there. We're not sure the judges are going to be able to choose a clear winner, from the looks of it. And… there goes the bell!

While the scores are being tallied, we go over to the teams and some spectators for some gossip on the what's what of deep learning. Did you know that having multiple-machine support is a huge advantage? It can increase speed and efficiency by almost 10 times! That's something! We also got to know that TensorFlow is training hard and is picking up positives from its rival, CNTK. There are also rumors about a new kid called MXNet (read about it here) that has APIs in R, Python and even in Julia! This makes it one helluva framework in terms of flexibility and speed. In fact, AWS is already implementing it, while Apple is also rumored to be using it. Clearly something to watch out for.

And finally, the judges have made their decision. Ladies and gentlemen, after two rounds of sheer entertainment, we have the results...

                                   TensorFlow   CNTK
Processing speed                   0            1
Learning curve                     1            0
Production readiness               0            1
Community support                  1            0
CPU, GPU computation support       0            1
Mobile deep learning               1            0
Multiple language compatibility    0            1

It's a unanimous decision and, just as we thought, CNTK is the heavyweight champion! CNTK clearly beat TensorFlow in terms of performance, because of its flexibility, speed and ability to be used in production! As a deep learning engineer, should you want to use one of these frameworks in your tasks, you should check out their features thoroughly, test them out with a test dataset and then apply them to your actual data. After all, it's the choices we make that define a win or a loss: simplicity over resource utilisation, or speed over platform, we must choose our tools wisely. For more information on the kind of tests that both tools have been put through, read the research paper presented by Shaohuai Shi, Qiang Wang, Pengfei Xu and Xiaowen Chu from the Department of Computer Science, Hong Kong Baptist University, and these benchmarks.
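As a small aside on the Keras point from Round Two, here is a sketch written against the multi-backend Keras API of that era (the standalone Keras 2.x package). Assuming both backends are installed, the same script could run on TensorFlow or CNTK simply by setting the KERAS_BACKEND environment variable to "tensorflow" or "cntk"; the data here is random placeholder values.

```python
# The backend is chosen outside the script, e.g.:  KERAS_BACKEND=cntk python model.py
# so this model definition itself is framework-agnostic.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(32, activation="relu", input_shape=(20,)),
    Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Random placeholder data just to exercise the training and evaluation calls
X = np.random.rand(256, 20)
y = np.random.randint(0, 2, size=(256,))
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```

This backend-swapping style is specific to the older multi-backend Keras; later Keras releases standardized on TensorFlow, which is worth keeping in mind when reading comparisons from this period.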

Top 4 chatbot development frameworks for developers

Sugandha Lahoti
20 Oct 2017
8 min read
The rise of the bots is nigh! If you can imagine a situation involving a dialog, there is probably a chatbot for that. Just look at the chatbot market - text-based email/SMS bots, voice-based bots, bots for customer support, transaction-based bots, entertainment bots and many others. A large number of enterprises, from startups to established organizations, are seeking to invest in this sector. This has also led to an increase in the number of platforms used for chatbot building. These frameworks incorporate AI techniques along with natural language processing capabilities to assist developers in building and deploying chatbots. Let’s start with how a chatbot typically works before diving into some of the frameworks. Understand: The first step for any chatbot is to understand the user input. This is made possible using pattern matching and intent classification techniques. ‘Intents’ are the tasks that users might want to perform with a chatbot. Machine learning, NLP and speech recognition techniques are typically used to identify the intent of the message and extract named entities. Entities are the specific pieces of information extracted from the user’s response i.e. the content associated with an intent. Respond: After understanding, the next goal is to generate a response. This is based on the current input message and the context of the conversation. After specifying the intents and entities, a dialog flow is constructed. This is basically the replies/feedback expected from a chatbot. Learn: Chatbots use AI techniques such as natural language understanding and pattern recognition to store and distinguish between the context of the information provided, and elicit a suitable response for future replies. This is important because different requests might have different meanings depending on previous requests. Top chatbot development frameworks A bot development framework is a set of predefined classes, functions, and utilities that a developer can use to build chatbots easier and faster. They vary in the level of complexity, integration capabilities, and functionalities. Let us look at some of the development platforms utilized for chatbot building. API.AI API.AI, a code based framework with a simple web-based interface, allows users to build engaging voice and text-based conversational apps using a large number of libraries and SDKs including Android, iOS, Webkit HTML5, Node.js, and Python API. It also supports nearly 32 one-click platform integrations such as Google, Facebook Messenger, Twitter and Skype to name a few. API.AI makes use of an agent - a container that transforms natural language based user requests into actionable data. The software tries to find the intent behind a user’s reply and matches it to the default or the closest match. After intent matching, it executes the actions and responses the developer has defined for that intent. API.AI also makes use of entities. Once the intents and entities are specified, the bot is trained. API.AI’s training module efficiently tracks each user’s request and lets developers see how they are parsed and matched to an intent. It also allows for correction of any errors and change requests thus retraining the bot. API.AI streamlines the entire bot-creating process by helping developers provide domain-specific knowledge that is unique to a bot’s needs while working on speech recognition, intent and context management in the backend. Google has recently partnered with API.AI to help them build conversational tools like Apple’s Siri. 
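To make the "understand" step above concrete, here is a toy, framework-free sketch of intent classification and entity extraction. It is plain pattern matching in Python, not what API.AI or any production NLU engine actually does internally (those rely on trained ML models), and the intents, trigger phrases and the order-number regex are invented for illustration.

```python
import re

# Toy intent patterns; a real NLU engine learns these from training phrases
INTENTS = {
    "order_status": ["where is my order", "track my package", "order status"],
    "greeting": ["hello", "hi there", "good morning"],
}

def classify_intent(message):
    # naive substring matching standing in for an ML intent classifier
    message = message.lower()
    for intent, patterns in INTENTS.items():
        if any(p in message for p in patterns):
            return intent
    return "fallback"

def extract_entities(message):
    # pull out an order number (6+ digits) if one is present
    match = re.search(r"\b(\d{6,})\b", message)
    return {"order_id": match.group(1)} if match else {}

message = "Hi there, where is my order 1234567?"
print(classify_intent(message), extract_entities(message))
# -> order_status {'order_id': '1234567'}
```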
Microsoft Bot Framework Microsoft Bot Framework allows building and deployment of chatbots across multiple platforms and services such as web, SMS, non-Microsoft platforms, Office 365, Skype etc. The Bot Framework includes two components - The Bot Builder and the Microsoft Cognitive Services. The Bot Builder comprises of two full-featured SDKs - for the.NET and the Node.js platforms along with an emulator for testing and debugging. There’s also a set of RESTful APIs for building code in other languages. The SDKs support features for simple and easy interactions between bots. They also have a large collection of prebuilt sample bots for the developer to choose from. The Microsoft Cognitive Services is a collection of intelligent APIs that simplify a variety of AI tasks such as allowing the system to understand and interpret the user's needs using natural language in just a few lines of code. These APIs allow integration to most modern languages and platforms and constantly improve, learn, and get smarter. Microsoft created the AI Inner Circle Partner Program to work hand in hand with industry to create AI solutions. Their only partner in the UK is ICS.AI who build conversational AI solutions for the UK's public sector. ICS are the first choice for many organisations due to their smart solutions that scale and serve to improve services for the general public. Developers can build bots in the Bot Builder SDK using C# or Node.js. They can then add AI capabilities with Cognitive Services. Finally, they can register the bots on the developer portal, connecting it to users across platforms such as Facebook and Microsoft Teams and also deploy it on the cloud like Microsoft Azure. For a step-by-step guide for chatbot building using Microsoft Bot Framework, you can refer to one of our books on the topic. Sabre Corporation, a customer service provider for travel agencies, have recently announced the development of an AI-powered chatbot that leverages Microsoft Bot Framework and Microsoft Cognitive Services. Watson Conversation IBM’s Watson Conversation helps build chatbot solutions that understand natural-language input and use machine learning to respond to customers in a way that simulates conversations between humans. It is built on a neural network of one million Wikipedia words. It offers deployment across a variety of platforms including mobile devices, messaging platforms, and robots. The platform is robust and secure as IBM allows users to opt out of data sharing. The IBM Watson Tone Analyzer service can help bots understand the tone of the user’s input for better management of the experience. The basic steps to create a chatbot using Watson Conversation are as follows. We first create a workspace - a place for configuring information to maintain separate intents, user examples, entities, and dialogues for each application. One workspace corresponds to one bot. Next, we create Intents. Watson Conversation makes use of multiple conditioned responses to distinguish between similar intents. For example, instead of building specific intents for locations of different places, it creates a general intent “location” and adds an entity to capture the response, like the “location- bedroom” - to the right, near the stairs, “location-kitchen”- to the left. The third step is entity establishment. This involves grouping entities that might trigger a similar response in the dialog. 
The dialog flow, thus generated after specifying the intents and entities, goes through testing followed by embedding this into an application. It is then connected with other services by using the conversation API. Staples, an office supply retailing firm, uses Watson Conversation in their “Easy Systems” to simplify the customer’s shopping experience. CXP Designer and Aspect NLU Aspect Customer Experience Platform is an application lifecycle management tool to build text and voice-based applications such as chatbots. It provides deployment options across multiple communication channels like text, voice, mobile web and social media networks. The Aspect CXP typically includes a CXP designer to build chatbots and the inbuilt Aspect NLU to provide advanced natural language capabilities. CXP designer works by creating dialog objects to provide a menu of options for frontend as well as backend. Menu items for the frontend are used to create intents and modules within those intents. The developer can then modify labels (of those intents and modules) manually or use the Aspect NLU to disambiguate similar questions for successful extraction of meaning and intent. The Aspect NLU includes tools for spelling correction, linguistic lexicons such as nouns, verbs etc. and options for detecting and extracting common data types such as date, time, numbers, etc. It also allows a developer to modify the meaning extraction based on how they want it if they want it! CXP designer also allows skipping of certain steps in chatbots. For instance, if the user has already provided their tracking id for a particular package, the chatbot will skip the prompt of asking them the tracking id again. With Aspect CXP, developers can create and deploy complex chatbots. Radisson Blu Edwardian, a hotel in London, has collaborated with Aspect software to build an SMS based, AI virtual host. Conclusion Another popular chatbot development platform worth mentioning is the Facebook messenger with over 100,000 monthly active bots, but without cross-platform deployment features. The above bot frameworks are typically used by developers to build chatbots from scratch and require some programming skills. However, there has been a rise in automated bot development tools of late. Some of these include Chatfuel and Motion AI and typically involve drag and drop functionalities. With such tools, beginners and non-programmers can create and deploy chatbots within few minutes. But, they lack the extended functionalities supported by typical code based frameworks such as the flexibility to store data, produce analytics or incorporate customized AI tasks. Every chatbot development system, whether framework or tool, serves a different purpose. Choosing the right one depends on the type of application to build, organizational needs, and the developer’s expertise.
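Before moving on, here is a rough illustration of how intents, entities and dialog responses fit together in a "workspace" of the kind described above. This is emphatically not the Watson Conversation API: it is a toy Python dict with invented intents, entities and replies, loosely following the location example from the previous section.

```python
# Hypothetical, simplified "workspace": intents, entities and dialog replies
# in one dict. Illustration only, not the Watson Conversation API.
WORKSPACE = {
    "intents": {"location": ["where is", "how do i get to"]},
    "entities": {"location-kitchen": ["kitchen"], "location-bedroom": ["bedroom"]},
    "dialog": {
        ("location", "location-kitchen"): "To the left.",
        ("location", "location-bedroom"): "To the right, near the stairs.",
    },
}

def respond(message):
    message = message.lower()
    # match the first intent whose trigger phrase appears in the message
    intent = next((name for name, phrases in WORKSPACE["intents"].items()
                   if any(p in message for p in phrases)), None)
    # capture the entity that narrows the general intent down
    entity = next((name for name, words in WORKSPACE["entities"].items()
                   if any(w in message for w in words)), None)
    return WORKSPACE["dialog"].get((intent, entity), "Sorry, I didn't get that.")

print(respond("Where is the kitchen?"))   # -> "To the left."
```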

Introducing Intelligent Apps

Amarabha Banerjee
19 Oct 2017
6 min read
We are a species obsessed with ‘intelligence’ since gaining consciousness. We have always been inventing ways to make our lives better through sheer imagination and application of our intelligence. Now, it comes as no surprise that we want our modern day creations to be smart as well - be it a web app or a mobile app. The first question that comes to mind then is what makes an application ‘intelligent’? A simple answer for budding developers is that intelligent apps are apps that can take intuitive decisions or provide customized recommendations/experience to their users based on insights drawn from data collected from their interaction with humans. This brings up a whole set of new questions: How can intelligent apps be implemented, what are the challenges, what are the primary application areas of these so-called Intelligent apps, and so on. Let’s start with the first question. How can intelligence be infused into an app? The answer has many layers just like an app does. The monumental growth in data science and its underlying data infrastructure has allowed machines to process, segregate and analyze huge volumes of data in limited time. Now, it looks set to enable machines to glean meaningful patterns and insights from the very same data. One such interesting example is predicting user behavior patterns. Like predicting what movies or food or brand of clothing the user might be interested in, what songs they might like to listen to at different times of their day and so on. These are, of course, on the simpler side of the spectrum of intelligent tasks that we would like our apps to perform. Many apps currently by Amazon, Google, Apple, and others are implementing and perfecting these tasks on a day-to-day basis. Complex tasks are a series of simple tasks performed in an intelligent manner. One such complex task would be the ability to perform facial recognition, speech recognition and then use it to perform relevant daily tasks, be it at home or in the workplace. This is where we enter the realm of science fiction where your mobile app would recognise your voice command while you are driving back home and sends automated instructions to different home appliances, like your microwave, AC, and your PC so that your food is served hot when you reach home, your room is set at just the right temperature and your PC has automatically opened the next project you would like to work on. All that happens while you enter your home keys-free thanks to a facial recognition software that can map your face and ID you with more than 90% accuracy, even in low lighting conditions. APIs like IBM Watson, AT&T Speech, Google Speech API, the Microsoft Face API and some others provide developers with tools to incorporate features such as those listed above, in their apps to create smarter apps. It sounds almost magical! But is it that simple? This brings us to the next question. What are some developmental challenges for an intelligent app? The challenges are different for both web and mobile apps. Challenges for intelligent web apps For web apps, choosing the right mix of algorithms and APIs that can implement your machine learning code into a working web app, is the primary challenge. plenty of Web APIs like IBM Watson, AT&T speech etc. are available to do this. But not all APIs can perform all the complex tasks we discussed earlier. Suppose you want an app that successfully performs both voice and speech recognition and then also performs reinforcement learning by learning from your interaction with it. 
You will have to use multiple APIs to achieve this. Their integration into a single app then becomes a key challenge. Here is why. Every API has its own data transfer protocols and backend integration requirements and challenges. Thus, our backend requirement increases significantly, both in terms of data persistence and dynamic data availability and security. Also, the fact that each of these smart apps would need customized user interface designs, poses a challenge to the front end developer. The challenge is to make a user interface so fluid and adaptive that it supports the different preferences of different smart apps. Clearly, putting together a smart web app is no child’s play. That’s why, perhaps, smart voice-controlled apps like Alexa are still merely working as assistants and providing only predefined solutions to you. Their ability to execute complex voice-based tasks and commands is fairly low, let alone perform any non-voice command based task. Challenges for intelligent mobile apps For intelligent mobile apps, the challenges are manifold. A key reason is network dependency for data transfer. Although the advent of 4G and 5G mobile networks has greatly improved mobile network speed, the availability of network and the data transfer speeds still pose a major challenge. This is due to the high volumes of data that intelligent mobile apps require to perform efficiently. To circumvent this limitation, vendors like Google are trying to implement smarter APIs in the mobile’s local storage. But this approach requires a huge increase in the mobile chip’s computation capabilities - something that’s not currently available. Maybe that’s why Google has also hinted at jumping into the chip manufacturing business if their computation needs are not met. Apart from these issues, running multiple intelligent apps at the same time would also require a significant increase in the battery life of mobile devices. Finally, comes the last question. What are some key applications of intelligent apps? We have explored some areas of application in the previous sections keeping our focus on just web and mobile apps. Broadly speaking, whatever makes our daily life easier, is ideally a potential application area for intelligent apps. From controlling the AC temperature automatically to controlling the oven and microwave remotely using the vacuum cleaner (of course the vacuum cleaner has to have robotic AI capabilities) to driving the car, everything falls in the domain of intelligent apps. The real questions for us are What can we achieve with our modern computation resources and our data handling capabilities? How can mobile computation capabilities and chip architecture be improved drastically so that we can have smart apps perform complex tasks faster and ease our daily workflow? Only the future holds the answer. We are rooting for the day when we will rise to become a smarter race by delegating lesser important yet intelligent tasks to our smarter systems by creating intelligent web and mobile apps efficiently and effectively. The culmination of these apps along with hardware driven AI systems could eventually lead to independent smart systems - a topic we will explore in the coming days.
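Before leaving the topic, the multi-API orchestration problem described above can be caricatured in a few lines. Every function name below is a hypothetical placeholder: the speech-to-text call is stubbed out rather than wired to any real vendor SDK, and the point is only to show how one app might chain a recognition step to several device actions.

```python
# Hypothetical names throughout; nothing here is a real vendor SDK call.
def speech_to_text(audio_bytes):
    # a real app would send audio_bytes to a cloud speech API here
    return "set the living room to 22 degrees and preheat the oven"

def dispatch(command):
    # map the recognised text to device actions
    actions = []
    if "degrees" in command:
        actions.append(("thermostat", "set_temperature"))
    if "oven" in command:
        actions.append(("oven", "preheat"))
    return actions

text = speech_to_text(b"...")
for device, action in dispatch(text):
    print(f"sending '{action}' to {device}")
```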

AI chip wars: Is Brainwave Microsoft's Answer to Google's TPU?

Amarabha Banerjee
18 Oct 2017
5 min read
When Google decided to design their own chip with TPU, it generated a lot of buzz for faster and smarter computations with its ASIC-based architecture. Google claimed its move would significantly enable intelligent apps to take over, and industry experts somehow believed a reply from Microsoft was always coming (remember Bing?). Well, Microsoft has announced its arrival into the game – with its own real-time AI-enabled chip called Brainwave. Interestingly, as the two tech giants compete in chip manufacturing, developers are certainly going to have more options now, while facing the complex computational processes of modern day systems. What is Brainwave? Until recently, Nvidia was the dominant market player in the microchip segment, creating GPUs (Graphics Processing Unit) for faster processing and computation. But after Google disrupted the trend with its TPU (tensor processing unit) processor, the surprise package in the market has come from Microsoft. More so because its ‘real-time data processing’ Brainwave chip claims to be faster than the Google chip (the TPU 2.0 or the Cloud TPU chip). The one thing that is common between both Google and Microsoft chips is that they can both train and simulate deep neural networks much faster than any of the existing chips. The fact that Microsoft has claimed that Brainwave supports Real-Time AI systems with minimal lag, by itself raises an interesting question - are we looking at a new revolution in the microchip industry? The answer perhaps lies in the inherent methodology and architecture of both these chips (TPU and Brainwave) and the way they function. What are the practical challenges of implementing them in real-world applications? The Brainwave Architecture: Move over GPU, DPU is here In case you are wondering what the hype with Microsoft’s Brainwave chip is about, the answer lies directly in its architecture and design. The present-day complex computational standards are defined by high-end games for which GPUs (Graphical Processing Units) were originally designed. Brainwave differs completely from the GPU architecture: the core components of a Brainwave chip are Field Programmable Gate Arrays or FPGAs. Microsoft has developed a huge number of FPGA modules on top of which DNN (Deep Neural Network) layers are synthesized. Together, this setup can be compared with something similar to Hardware Microservices where each task is assigned by a software to different FPGA and DNN modules. These software controlled Modules are called DNN Processing Units or DPUs. This eliminates the latency of the CPU and the need for data transfer to and fro from the backend. The two methodologies involved here are seemingly different in their architecture and application: one is the hard DPU and the other is the Soft DPU. While Microsoft has used the soft DPU approach where the allocation of memory modules are determined by software and the volume of data at the time of processing, the hard DPU has a predefined memory allocation which doesn’t allow for flexibility so vital in real-time processing. The software controlled feature is exclusive to Microsoft, and unlike other AI processing chips, Microsoft have developed their own easy to process data types that are faster to process. This enables the Brainwave chip to perform near real-time AI computations easily.  Thus, in a way Microsoft brainwave holds an edge over the Google TPU when it comes to real-time decision making and computation capabilities. Brainwave’s edge over TPU 2 - Is it real time? 
The reason Google ventured into designing its own chip was the prospect of having to keep adding data centers as user queries grew. Google realized that instead of routing every query through distant data centers, it would be far more practical to perform the computation closer to where it is needed. That is where it needed more computational capability than what modern-day market leaders like Intel x86 Xeon processors and Nvidia Tesla K80 GPUs offered. But Google opted for an Application-Specific Integrated Circuit (ASIC) instead of FPGAs, the reason being that it was fully customizable for Google's needs: the chip is not specific to one particular neural network but applicable across multiple networks. The trade-off for this ability to run multiple neural networks was, of course, real-time computation, which Brainwave can achieve because of its DPU architecture. The initial data released by Microsoft shows that Brainwave has a data transfer bandwidth of 20 TB/sec, 20 times faster than the latest Nvidia GPU chip. Also, the energy efficiency of Brainwave is claimed to be 4.5 times better than that of current chips. Whether Google will up the ante and improve on the existing TPU architecture to make it suitable for real-time computation is something only time can tell.

[Image: Brainwave architecture. Source: Brainwave_HOTCHIPS2017 presentation, Microsoft Research Blog]

Future outlook and challenges

Microsoft is yet to declare benchmarking results for the Brainwave chip, but Microsoft Azure customers most definitely look forward to its availability for faster and better computation. What is even more promising is that Brainwave works seamlessly with Google's TensorFlow as well as Microsoft's own CNTK framework. Tech startups like Rigetti, Mythic and Waves are trying to create mainstream applications that employ AI and quantum computation techniques. This will bring AI to the masses by creating practical AI-driven applications for everyday consumers, and these companies have shown a keen interest in both the Microsoft and the Google AI chips. In fact, Brainwave could be well suited to companies like these, which want to bring AI capabilities to everyday tasks but are held back by the limited computational capabilities of current chips. The challenges with all AI chips, including Brainwave, will still revolve around their data handling capabilities, the reliability of their performance, and improving the memory capabilities of our current hardware systems.
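One concrete angle on the "custom data types" point: Microsoft has not published Brainwave's exact internal formats (so nothing below comes from the article itself), but the general idea of narrower numeric types is easy to demonstrate. The sketch only shows the memory side of the trade-off in plain NumPy; whether a narrower type also speeds up compute depends entirely on hardware support, which is precisely what FPGAs and other custom silicon are meant to provide.

```python
# Minimal illustration of narrower numeric types (not Brainwave's actual formats).
import numpy as np

weights32 = np.random.rand(4096, 4096).astype(np.float32)
weights16 = weights32.astype(np.float16)   # half-precision copy of the same weights

print("float32 weights:", weights32.nbytes / 1e6, "MB")
print("float16 weights:", weights16.nbytes / 1e6, "MB")  # half the memory traffic
```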

What is Automated Machine Learning (AutoML)?

Wilson D'souza
17 Oct 2017
6 min read
Are you a proud machine learning engineer who hates that the job tests your limits as a human being? Do you dread the long hours of data experimentation and data modeling that leave you high and dry? Automated Machine Learning, or AutoML, can put that smile back on your face. A self-replicating AI algorithm, AutoML is the latest tool being applied in the real world today, and AI market leaders such as Google have made significant investments in researching this field further. AutoML has seen a steep rise in research and new tools over the last couple of years, but its recent mention during Google I/O 2017 has piqued the interest of the entire developer community. What is AutoML all about, and what makes it so interesting?

Evolution of automated machine learning

Before we try to understand AutoML, let's look at what triggered the need for automated machine learning. Until now, building machine learning models that work in the real world has been a domain ruled by researchers, scientists, and machine learning experts. The process of manually designing a machine learning model involves several complex and time-consuming steps, such as:

- Pre-processing data
- Selecting an appropriate ML architecture
- Optimizing hyperparameters
- Constructing models
- Evaluating the suitability of models

Add to this the several layers of neural networks required for an efficient ML architecture: an n-layer neural network could result in n^n potential networks. This level of complexity can be overwhelming for the millions of developers who are keen on embracing machine learning. AutoML tries to solve this problem of complexity and makes machine learning accessible to a large group of developers by automating routine but complex tasks such as the design of neural networks. Since this cuts development time significantly and takes care of several complex tasks involved in building machine learning models, AutoML is expected to play a crucial role in bringing machine learning to the mainstream.

Approaches to automating model generation

With a growing body of research, AutoML aims to automate the following tasks in the field of machine learning:

- Model selection
- Parameter tuning
- Meta-learning
- Ensemble construction

It does this by using a wide range of algorithms and approaches, such as the ones below (a minimal code sketch of the simplest form of this idea follows the list):

Bayesian optimization: One of the fundamental approaches for automating model generation is to use Bayesian methods for hyperparameter tuning. By modeling the uncertainty of parameter performance, different variations of the model can be explored, which leads towards an optimal solution.

Meta-learning and ensemble construction: To further increase AutoML efficiency, meta-learning techniques are used to find and pick optimal hyperparameter settings. These techniques can be coupled with automatic ensemble construction to create an effective ensemble model from the collection of models that undergo optimization. Using these techniques, a high level of accuracy can be achieved throughout the process of automated model generation.

Genetic programming: Certain tools like TPOT also make use of a variation of genetic programming (tree-based pipeline optimization) to automatically design and optimize ML models that offer highly accurate results for a given set of data. This approach makes use of operators at various stages of the data pipeline, which are assembled together in the form of a tree-based pipeline. These are then further optimized, and newer pipelines are auto-generated using genetic programming.
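Hyperparameter search is the easiest of these ideas to show in a few lines. The sketch below uses scikit-learn's randomized search rather than true Bayesian optimization or genetic programming, so treat it as the simplest possible stand-in for the "automate the tuning" step; the parameter grid and the digits dataset are arbitrary choices made only for illustration.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)

# an arbitrary search space, purely for illustration
param_distributions = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10, 20],
    "max_features": ["sqrt", "log2", None],
}

search = RandomizedSearchCV(RandomForestClassifier(), param_distributions,
                            n_iter=10, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_)
print(search.best_score_)
```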
If these weren’t enough, Google in its recent posts disclosed that they are using reinforcement learning approach to give a further push to develop efficient AutoML techniques. What are some tools in this area? Although it’s still early days, we can already see some frameworks emerging to automate the generation of your machine learning models.   Auto-sklearn: Auto-sklearn, the tool which won the ChaLearn AutoML Challenge, provides a wrapper around the popular Python library scikit-learn to automate machine learning. This is a great addition to the ever-growing ecosystem of Python data science tools. Built on top of Bayesian optimization, it takes away the hassle of algorithm selection, parameter tuning, and ensemble construction while building machine learning pipelines. With auto-sklearn, developers can create rapid iterations and refinements to their machine learning models, thereby saving a significant amount of development time. The tool is still in its early stages of development, so expect a few hiccups while using it. DataRobot: DataRobot offers a machine learning automation platform to all levels of data scientists aimed at significantly reducing the time to build and deploy predictive models. Since it’s a cloud platform it offers great power and speed throughout the process of automating the model generation process. In addition to automating the development of predictive models, it offers other useful features such as a web-based interface, compatibility with several leading tools such as Hadoop and Spark, scalability, and rapid deployment. It’s one of those few machine learning automation platforms which are ready for industry use. TPOT: TPOT is yet another Python tool meant for automated machine learning. It uses a genetic programming approach to iterate and optimize machine learning models. As in the case of auto-sklearn, TPOT is also built on top of scikit-learn. It has a growing interest level on GitHub with 2400 stars and has observed a 100% rise in the past one year alone. Its goals, however, are quite similar to those of Auto-sklearn: feature construction, feature selection, model selection, and parameter optimization. With these goals in mind, TPOT aims at building efficient machine learning systems in lesser time and with better accuracy. Will automated machine learning replace developers? AutoML as a concept is still in its infancy. But as market leaders like Google, Facebook, and others research more in this field, AutoML will keep evolving at a brisk pace. Assuming that AutoML would replace humans in the field of data science, however, is a far-fetched thought and nowhere near reality. Here is why. AutoML as a technique is meant to make the neural network design process efficient rather than replace humans and researchers in the field of building neural networks. The primary goal of AutoML is to help experienced data scientists be more efficient at their work i.e., enhance productivity by a huge margin and to reduce the steep learning curve for the many developers who are keen on designing ML models - i.e., make ML more accessible. With the advancements in this field, it’s exciting times for developers to embrace machine learning and start building intelligent applications. We see automated machine learning as a game changer with the power to truly democratize the building of AI apps. With automated machine learning, you don’t have to be a data scientist to develop an elegant AI app!
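To give a feel for how little code these tools ask of you, here is a short TPOT-style sketch. It assumes the tpot package is installed; the generation and population sizes are deliberately small, arbitrary values so the demo finishes quickly rather than anything TPOT recommends.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# small generations/population so the demo finishes quickly
tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export("best_pipeline.py")   # writes the winning sklearn pipeline as code
```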

Neuroevolution: A step towards the Thinking Machine

Amarabha Banerjee
16 Oct 2017
9 min read
“I propose to consider the question - Can machines think?” - Alan Turing The goal for AI research has always remained the same - create a machine that has human-like decision-making capabilities based on available information. This includes the machine’s ability to analyze and process huge amounts of data and then make a meaningful inference from it. Machine learning, deep learning and other old and new paradigms in AI research are all attempts at imparting complex decision-making capabilities to machines or systems. Alan Turing’s famous test for AI has set the standards over the years for what qualifies as a smart AI i.e. a thinking machine. The imitation game is about an AI/ bot interacting with a human anonymously, in a way that the human can’t decipher the fact that it’s a machine. This not-so-trivial test has seen many adaptations over the years like the modern day Tokyo test. These tests set challenging boundaries that machines must cross to be considered capable of possessing intelligence. Neuroevolution, a few decades old theory, remodeled in a modern day format with the help of Neural and Deep Neural Networks, promises to challenge these boundaries and even break them. With neuroevolution, machines aim to solve complex problems on their own with satisfactory levels of accuracy even though they do not know how to achieve those results.   Neuroevolution: The Essence “If a wild animal habitually performs some useless activity, natural selection will favor rival individuals who instead devote time to surviving and reproducing...Ruthless utilitarianism trumps, even if it doesn’t always seem that way.” - Richard Dawkins This is the essence of Neuroevolution. But the process itself is not as simple. Just like the human evolution process, in the beginning, a set of algorithms work on a problem. The algorithms that show an inclination to solve the problem in the right way are selected for the next stage. They then undergo random minor mutations - i.e., small logical changes in the inherent algorithm structure. Next, we check whether these changes enable the algorithms to achieve the same result with better accuracy or efficiency. The successful ones then move to the next stage with further mutations introduced. This is similar to how nature did the sorting for us and humans evolved from a natural need to survive in unfamiliar situations. Since the concept uses Neural Networks, it has come to be known as Neuroevolution. Neuroevolution, in the simplest terms, is the process of “descent with modification” by which machines/systems evolve and get better at solving the problems they were built for. Backpropagation to DNN: The Evolution Neural networks are made up of nodes. These nodes function like neurons in the human brain that receive a set of inputs and generate a response based on the type, intensity, frequency etc of stimuli. A single node looks like the below illustration: An algorithm can be viewed as a node. With backpropagation, the algorithm is modified in an iterative manner - where the error generated after each pass, is fed back to the system. The algorithms (nodes) responsible for higher error contribution are identified and assigned less weight in the next pass. Thus, backpropagation is a way to assign appropriate weights to nodes by calculating error contributions of individual nodes. These nodes, when combined in different layers, form the structure of Deep Neural Networks. 
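A single node of the kind just described fits in a couple of lines. This NumPy sketch, with made-up input and weight values, only shows the weighted-sum-plus-activation behaviour; training and the weighting of nodes come later.

```python
import numpy as np

def node(inputs, weights, bias):
    # a node "responds" to its stimuli: weighted sum squashed by a sigmoid
    return 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))

stimuli = np.array([0.5, -1.2, 3.0])   # example inputs
weights = np.array([0.4, 0.1, -0.7])   # example connection weights
print(node(stimuli, weights, bias=0.2))
```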
Deep Neural networks have separate input and output layers and a middle layer of hidden nodes which form the core of DNN. This hidden layer consists of multiple nodes like the following. In case of DNNs, as before in each iteration, the weight of the nodes are adjusted based on their accuracy. The number of iterations is a factor that varies for each DNN. As explained earlier, the system without any external stimuli continues to improve on its own. Now, where have we seen this before? Of course, this looks a lot like a simplified, miniature version of evolution! Unfit nodes are culled by reducing the weight they have in the overall output, and the ones with favorable results are encouraged, just like the natural selection. However, the only thing that is missing from this is the mutation and the ability to process mutation. This is where we introduce the mutations in the successful algorithms and let them evolve on their own. Backpropagation in DNNs doesn’t change the algorithm or it’s approach, it merely increases or decreases the algorithm’s overall contribution to the desired result. Forcing random mutations of neural and deep neural networks and then letting these mutations take shape as these neural networks together try to solve a given problem seem pretty straightforward. The point where everything starts getting messy is when different layers or neural networks start solving the given problem in their own pre-defined way. One of two things may then happen: The neural networks behave in self-contradiction and stall the overall problem-solving process. The system as such cannot take any decision and becomes dormant.     The neural networks are in some sort of agreement regarding a decision. The decision itself might be correct or incorrect. Both scenarios present us with dilemmas - how to restart a stalled process and how to achieve better decision making capability. The solution to both of situations lies in enabling the DNNs to rectify themselves first by choosing the correct algorithms. And then by mutating them with an intention to allow them to evolve and reach a decision toward achieving greater accuracy.   Here’s a look at some popular implementations of this idea. Neuroevolution in flesh and blood Cutting edge AI research giants like OpenAI backed by Elon Musk and Google DeepMind have taken the concept of neuroevolution and applied them to a bunch of deep neural networks. Both aim to evolve these algorithms in a way that the smarter ones survive and eventually create better and faster models & systems. Their approaches are however starkly different. The Google implementation Google’s way is simple - It takes a number of algorithms, divides them into groups and assigns one particular task to all. The algorithms that fare better at solving these problems are then chosen for the next stage, much like the reward and punishment system in reinforcement learning. However, the difference here is that the faster algorithms are not just chosen for the next step, but their models and parameters are tweaked slightly -  this is our way of introducing a mutation into the successful algorithms. These minor mutations then play out as these modified algorithms try to solve the given problem. Again, the better ones remain and the rest are culled out. This way, the algorithms themselves find a way to perform better and better until they are reasonably close to the desired result. 
The most important advantage of this process is that the algorithms keep track of their evolution process as they get smarter. A major limitation of Google’s approach is that the time taken for performing these complex computations is too high, hence the result takes time to show. Also, once the mutation kicks in, their behavior is not controlled externally - i.e., quite literally they can go berserk because of the resulting mutation - which means the process can fail even at an advanced stage. The OpenAI implementation Let’s contrast this with OpenAI’s master-worker approach to neuroevolution. OpenAI used a set of nearly 1440 algorithms to play the game of Atari and submit their scores to the master algorithm. Then, the algorithms with better scores were chosen and given a mutation and put back into the same process. In more abstract terms, the OpenAI method looks like this. A set of worker algorithms are given a certain complex problem to solve. The best scores are passed on to the master algorithm. The better algorithms are then mutated and set to perform the same tasks. The scores are again recorded and passed on to the master algorithm. This happens through multiple iterations. The master algorithm progressively eliminates the chance of failure since the master algorithm knows which algorithms to employ when given a certain problem. However, it does not know the road to success as it has access only to the final scores and not how those scores were achieved. The advantage of this approach is that better results are guaranteed, there are no cases of decision conflict and the system stalling. The flip side is that this system only knows its way through the given problem. All this effort to evolve the system to a better one will have to be repeated for a similar but different problem. The process is therefore cumbersome and lengthy. The Future with Neuroevolution Human evolution has taken millions of years to reach where we are today. Evolving AI and enabling them to pass the Turing test, or to further make them smart enough to pass a university entrance exam will require significant improvement from the current crop of AI. Amazon’s Alexa and Apple’s Siri are mere digital assistants. If we want AI driven smart systems with seamless integration of AI into our everyday life, algorithms with evolutionary characteristics are a must. Neuroevolution might hold the secret to inventing smart AIs that can ultimately propel human civilization to greater heights of development and advancement. “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers...They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control." - Alan Turing
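For readers who want to see the shared select-mutate-repeat loop in code before moving on: neither OpenAI's nor Google's actual setup is reproduced here, but the core idea both descriptions share fits in a short NumPy sketch. The "fitness" function below is a toy stand-in (recovering a hidden three-number target) for the Atari score or task accuracy a real system would use.

```python
import numpy as np

rng = np.random.RandomState(0)

def fitness(weights):
    # toy stand-in for a game score: how close are we to a hidden target?
    target = np.array([0.5, -0.3, 0.8])
    return -np.sum((weights - target) ** 2)

# a population of 50 candidate "weight vectors" (the workers)
population = [rng.randn(3) for _ in range(50)]

for generation in range(100):
    scores = [fitness(w) for w in population]
    # selection: keep the 10 best performers
    elite = [population[i] for i in np.argsort(scores)[-10:]]
    # mutation: refill the population with noisy copies of the survivors
    population = [e + 0.05 * rng.randn(3) for e in elite for _ in range(5)]

best = max(population, key=fitness)
print(best, fitness(best))
```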

What we learned from Oracle OpenWorld 2017

Amey Varangaonkar
06 Oct 2017
5 min read
“Amazon’s lead is over.” These famous words by the Oracle CTO Larry Ellison in the Oracle OpenWorld 2016 garnered a lot of attention, as Oracle promised their customers an extensive suite of cloud offerings, and offered a closer look at their second generation IaaS data centers. In the recently concluded OpenWorld 2017, Oracle continued on their quest to take on AWS and other major cloud vendors by unveiling a  host of cloud-based products and services. Not just that, they have  juiced these offerings up with Artificial Intelligence-based features, in line with all the buzz surrounding AI. Key highlights from the Oracle OpenWorld 2017 Autonomous Database Oracle announced a totally automated, self-driving database that would require no human intervention for managing or fine-tuning the database. Using machine learning and AI to eliminate human error, the new database guarantees 99.995% availability. While taking another shot at AWS, Ellison promised in his keynote that customers moving from Amazon’s Redshift to Oracle’s database can expect a 50% cost reduction. Likely to be named as Oracle 18c, this new database is expected to be shipped across the world by December 2017. Oracle Blockchain Cloud Service Oracle joined IBM in the race to dominate the Blockchain space by unveiling its new cloud-based Blockchain service. Built on top of the Hyperledger Fabric project, the service promises to transform the way business is done by offering secure, transparent and efficient transactions. Other enterprise-critical features such as provisioning, monitoring, backup and recovery are also some of the standard features which this service will offer to its customers. “There are not a lot of production-ready capabilities around Blockchain for the enterprise. There [hasn’t been] a fully end-to-end, distributed and secure blockchain as a service,” Amit Zavery, Senior VP at Oracle Cloud. It is also worth remembering that Oracle joined the Hyperledger consortium just two months ago, and the signs of them releasing their own service were there already. Improvements to Business Management Services The new features and enhancements introduced for the business management services were one of the key highlights of the OpenWorld 2017. These features now empower businesses to manage their customers better, and plan for the future with better organization of resources. Some important announcements in this area were: Adding AI capabilities to its cloud services - The Oracle Adaptive Intelligent Apps will now make use of the AI capabilities to improve services for any kind of business Developers can now create their own AI-powered Oracle applications, making use of deep learning Oracle introduced AI-powered chatbots for better customer and employee engagement New features such as enhanced user experience in the Oracle ERP cloud and improved recruiting in the HR cloud services were introduced Key Takeaways from Oracle OpenWorld 2017 With the announcements, Oracle have given a clear signal that they’re to be taken seriously. They’re already buoyed by a strong Q1 result which saw their revenue from cloud platforms hit $1.5 billion, indicating a growth of 51% as compared to Q1 2016, Here are some key takeaways from the OpenWorld 2017, which are underlined by the aforementioned announcements: Oracle undoubtedly see cloud as the future, and have placed a lot of focus on the performance of their cloud platform. 
They’re betting on the fact that their familiarity with the traditional enterprise workload will help them win a lot more customers - something Amazon cannot claim. Oracle are riding on the AI wave and are trying to make their products as autonomous as possible - to reduce human intervention and human error, to some extent. With enterprises looking to cut costs wherever possible, this could be a smart move to attract more customers. The autonomous database will require Oracle to automatically fine-tune, patch, and upgrade its database, without causing any downtime. It will be interesting to see if the database can live up to its promise of ‘99.995% availability’. Is the role of Oracle DBAs going to be at risk, due to the automation? While it is doubtful that they will be out of jobs, there is bound to be a significant shift in their day to day operations. It is speculated that the DBAs would require to spend less time on the traditional administration tasks such as fine-tuning, patching, upgrading, etc. and instead focus on efficient database design, setting data policies and securing the data. Cybersecurity has been a key theme in Ellison’s keynote and the OpenWorld 2017 in general. As enterprise Blockchain adoption grows, so does the need for a secure, efficient digital transaction system. Oracle seem to have identified this opportunity, and it will be interesting to see how they compete with the likes of IBM and SAP to gain major market share. Oracle’s CEO Mark Hurd has predicted that Oracle can win the cloud wars, overcoming the likes of Amazon, Microsoft and Google. Judging by the announcements in the OpenWorld 2017, it seems like they may have a plan in place to actually pull it off. You can watch highlights from the Oracle OpenWorld 2017 on demand here. Don’t forget to check out our highly popular book Oracle Business Intelligence Enterprise Edition 12c, your one-stop guide to building an effective Oracle BI 12c system.  

Say hello to Streaming Analytics

Amey Varangaonkar
05 Oct 2017
5 min read
In this data-driven age, businesses want fast, accurate insights from their huge data repositories in the shortest time span — and in real time when possible. These insights are essential — they help businesses understand relevant trends, improve their existing processes, enhance customer satisfaction, improve their bottom line, and most importantly, build, and sustain their competitive advantage in the market.   Doing all of this is quite an ask - one that is becoming increasingly difficult to achieve using just the traditional data processing systems where analytics is limited to the back-end. There is now a burning need for a newer kind of system where larger, more complex data can be processed and analyzed on the go. Enter: Streaming Analytics Streaming Analytics, also referred to as real-time event processing, is the processing and analysis of large streams of data in real-time. These streams are basically events that occur as a result of some action. Actions like a transaction or a system failure, or a trigger that changes the state of a system at any point in time. Even something as minor or granular as a click would then constitute as an event, depending upon the context. Consider this scenario - You are the CTO of an organization that deals with sensor data from wearables. Your organization would have to deal with terabytes of data coming in on a daily basis, from thousands of sensors. One of your biggest challenges as a CTO would be to implement a system that processes and analyzes the data from these sensors as it enters the system. Here’s where streaming analytics can help you by giving you the ability to derive insights from your data on the go. According to IBM, a streaming system demonstrates the following qualities: It can handle large volumes of data It can handle a variety of data and analyze it efficiently — be it structured or unstructured, and identifies relevant patterns accordingly It can process every event as it occurs unlike traditional analytics systems that rely on batch processing Why is Streaming Analytics important? The humongous volume of data that companies have to deal with today is almost unimaginable. Add to that the varied nature of data that these companies must handle, and the urgency with which value needs to be extracted from this data - it all makes for a pretty tricky proposition. In such scenarios, choosing a solution that integrates seamlessly with different data sources, is fine-tuned for performance, is fast, reliable, and most importantly one that is flexible to changes in technology, is critical. Streaming analytics offers all these features - thereby empowering organizations to gain that significant edge over their competition. Another significant argument in favour of streaming analytics is the speed at which one can derive insights from the data. Data in a real-time streaming system is processed and analyzed before it registers in a database. This is in stark contrast to analytics on traditional systems where information is gathered, stored, and then the analytics is performed. Thus, streaming analytics supports much faster decision-making than the traditional data analytics systems. Is Streaming Analytics right for my business? Not all organizations need streaming analytics, especially those that deal with static data or data that hardly change over longer intervals of time, or those that do not require real-time insights for decision-making.   For instance, consider the HR unit of a call centre. 
It is sufficient and efficient to use a traditional analytics solution to analyze thousands of past employee records rather than run it through a streaming analytics system. On the other hand, the same call centre can find real value in implementing streaming analytics to something like a real-time customer log monitoring system. A system where customer interactions and context-sensitive information are processed on the go. This can help the organization find opportunities to provide unique customer experiences, improve their customer satisfaction score, alongside a whole host of other benefits. Streaming Analytics is slowly finding adoption in a variety of domains, where companies are looking to get that crucial competitive advantage - sensor data analytics, mobile analytics, business activity monitoring being some of them. With the rise of Internet of Things, data from the IoT devices is also increasing exponentially. Streaming analytics is the way to go here as well. In short, streaming analytics is ideal for businesses dealing with time-critical missions and those working with continuous streams of incoming data, where decision-making has to be instantaneous. Companies that obsess about real-time monitoring of their businesses will also find streaming analytics useful - just integrate your dashboards with your streaming analytics platform! What next? It is safe to say that with time, the amount of information businesses will manage is going to rise exponentially, and so will the nature of this information. As a result, it will get increasingly difficult to process volumes of unstructured data and gain insights from them using just the traditional analytics systems. Adopting streaming analytics into the business workflow will therefore become a necessity for many businesses. Apache Flink, Spark Streaming, Microsoft's Azure Stream Analytics, SQLstream Blaze, Oracle Stream Analytics and SAS Event Processing are all good places to begin your journey through the fleeting world of streaming analytics. You can browse through this list of learning resources from Packt to know more. Learning Apache Flink Learning Real Time processing with Spark Streaming Real Time Streaming using Apache Spark Streaming (video) Real Time Analytics with SAP Hana Real-Time Big Data Analytics
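The contrast with batch processing is easiest to see in code. Below is a toy Python sketch of the streaming idea: events are consumed as they arrive, only a bounded window is kept in memory, and the "insight" (an alert on a rolling average) is produced before anything is ever written to a database. The sensor values and the alert threshold are invented for illustration; a real system would use an engine such as Flink or Spark Streaming rather than a hand-rolled loop.

```python
import random
import time
from collections import deque

def sensor_stream():
    # stand-in for an unbounded stream of incoming sensor events
    while True:
        yield {"ts": time.time(), "value": random.gauss(25.0, 2.0)}

window = deque(maxlen=100)            # keep only the last 100 events in memory
for i, event in enumerate(sensor_stream()):
    window.append(event["value"])
    rolling_avg = sum(window) / len(window)
    if rolling_avg > 27.0:            # act on the insight as the data arrives
        print("alert: rolling average is high:", round(rolling_avg, 2))
    if i >= 1000:                     # stop the demo after ~1000 events
        break
```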

What is coding as a service?

Antonio Cucciniello
02 Oct 2017
4 min read
What is coding as a service?

If you want to know what coding as a service is, you have to start with artificial intelligence. Put simply, coding as a service is using AI to build websites, using your machine to write code so you don't have to.

The challenges facing engineers and programmers today

To give you a solid understanding of what coding as a service is, you must understand where we are today. Typically, we have programs that are made by software developers or engineers. These programs are usually created to automate a task or make tasks easier; think of things that speed up processing or automate a repetitive task. This is, and has been, extremely beneficial. The productivity gained from automated applications and tasks allows us, as humans and workers, to spend more time creating important things and coming up with more groundbreaking ideas. This is where artificial intelligence and machine learning come into the picture.

Artificial intelligence and coding as a service

Recently, with the gains in computing power that have come with time and breakthroughs, computers have become more and more powerful, allowing AI applications to move into common practice. Today, there are applications that let users detect objects in images and videos in real time, translate speech to text, and even determine the emotions in text sent by someone else. For an example of AI applications in use today, you may have used an Amazon Alexa or Echo device. You talk to it, it understands your speech, and it then completes a task based on what you said. Previously, this was a task only humans could do (the ability to understand speech). Now, with advances, Alexa is capable of understanding everything you say, given that it is "trained" to understand it. This capability, previously only expected of humans, is now being filtered through to technology.

How coding as a service will automate boring tasks

Today, we have programmers who write applications for many uses and make things such as websites for businesses. As things become more and more automated, programmers' efficiency will increase and the need for additional manpower will shrink. Coding as a service, otherwise known as CaaS, will result in even fewer programmers being needed. It mixes the efficiencies we already have with artificial intelligence to do programming tasks for a user. Using natural language processing to understand exactly what the user or customer is saying and means, it will be able to make edits to websites and applications on the fly. Not only will it be able to make edits, but combined with machine learning, CaaS can also come up with recommendations from past data and make edits on its own. Efficiency-wise, it is cheaper to own a computer than to pay a human, especially when a computer will work around the clock for you and never get tired. Imagine paying an extremely low price (lower than what you might already pay to get a website made) for getting your website built, or maybe your small application created.

Conclusion

Every new technology comes with pros and cons. Overall, the number of software developers may decrease; or, as a developer, this may free up your time from more menial tasks and enable you to further specialize and broaden your horizons. Artificial intelligence programs such as coding as a service could take on plenty of the underlying work and leave the heavier lifting to human programmers.
With every new technology comes its positives and negatives. You just need to use the positives to your advantage!
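To end on something concrete: the "NLP request in, site edit out" loop described above can be caricatured in a few lines. This is a keyword toy, not how a real coding-as-a-service system would work (that would involve trained NLP models and a far richer edit vocabulary), and every phrase and edit in it is invented.

```python
# Keyword toy only: every phrase and edit below is invented for illustration.
EDITS = {
    "darker": {"css": {"background-color": "#222"}},
    "bigger title": {"css": {"h1-font-size": "3rem"}},
    "add a contact form": {"html": "<form>...</form>"},
}

def request_edit(user_request):
    user_request = user_request.lower()
    for phrase, change in EDITS.items():
        if phrase in user_request:
            return change
    return {"clarify": "Could you describe the change differently?"}

print(request_edit("Can you make the homepage darker?"))
# -> {'css': {'background-color': '#222'}}
```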