
Tech Guides - Data

281 Articles

Machine learning APIs for Google Cloud Platform

Amey Varangaonkar
28 Jun 2018
7 min read
Google Cloud Platform (GCP) is considered one of the big three cloud platforms, alongside Microsoft Azure and AWS. GCP is a widely used cloud solution with AI capabilities that let you design and develop smart models to turn your data into insights at an affordable cost. The following excerpt is taken from the book 'Cloud Analytics with Google Cloud Platform' authored by Sanket Thodge. GCP offers many machine learning APIs, of which we take a look at the three most popular:

Cloud Speech API

A powerful API from GCP, this enables the user to convert speech to text using a neural network model. The API recognizes over 100 languages from around the world and can also filter out unwanted noise and content under various types of environments. It supports context-aware recognition and works on any device, any platform, anywhere, including IoT. Its features include Automatic Speech Recognition (ASR), global vocabulary, streaming recognition, word hints, real-time audio support, noise robustness, inappropriate content filtering and integration with other GCP APIs. At a high level, the architecture of the Cloud Speech API enables speech-to-text conversion through machine learning.

The components used by the Speech API are:
- REST API or Google Remote Procedure Call (gRPC) API
- Google Cloud Client Library
- JSON API
- Python
- Cloud Datalab
- Cloud Data Storage
- Cloud Endpoints

The applications of the model include:
- Voice user interfaces
- Domotic appliance control
- Preparation of structured documents
- Aircraft / direct voice outputs
- Speech-to-text processing
- Telecommunications

Usage is billed in 15-second increments: the first 60 minutes per month are free of charge, and usage beyond that is charged at $0.006 per 15 seconds.

Now that we have learned about the concepts and applications of the model, let's look at some use cases where it can be implemented:
- Solving crimes with voice recognition: AGNITIO, a voice biometrics specialist, partnered with Morpho (Safran) to bring Voice ID technology into its multimodal suite of criminal identification products.
- Buying products and services with the sound of your voice: Another popular and mainstream application of biometrics in general is mobile payments. Voice recognition has also made its way into this highly competitive arena.
- A hands-free AI assistant that knows who you are: Almost any mobile phone nowadays has voice recognition software in the form of AI machine learning algorithms.

Cloud Translation API

Natural language processing (NLP) is a part of artificial intelligence, and machine translation (MT) has long been one of the main focuses of the NLP community. MT deals with translating text from a source language into text in a target language. The Cloud Translation API provides a simple interface to translate an input string from one language into a target language; it is highly responsive, scalable and dynamic in nature. The API can translate between 100+ languages and can detect the source language automatically with good accuracy. It can also read the contents of a web page and translate them into another language; the text need not be extracted from a document first. The Translation API supports features such as programmatic access, text translation, language detection, continuous updates, an adjustable quota and affordable pricing. At a high level, the Cloud Translation API is an adaptive machine translation model.
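As a quick, hedged illustration (not taken from the book) of how little code a call to the Translation API takes, here is a minimal Python sketch. It assumes the google-cloud-translate client library is installed and that credentials are configured via the GOOGLE_APPLICATION_CREDENTIALS environment variable; the exact import path can differ slightly between client-library versions.

```python
# Minimal sketch: detect a language and translate text with the Cloud Translation API.
# Assumes: pip install google-cloud-translate, and GOOGLE_APPLICATION_CREDENTIALS pointing
# at a service-account key for a project with the Translation API enabled.
from google.cloud import translate_v2 as translate

client = translate.Client()

# Automatic language detection.
detection = client.detect_language("Bonjour tout le monde")
print(detection["language"], detection["confidence"])

# Translate into a target language (ISO 639-1 code).
result = client.translate("Bonjour tout le monde", target_language="en")
print(result["translatedText"])
```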
The components used by this model are:
- REST API
- Cloud Datalab
- Cloud Data Storage
- Python and Ruby client libraries
- Cloud Endpoints

The most important application of the model is the conversion of a regional language into a foreign language. Text translation and language detection cost $20 per 1 million characters.

Use cases

Now that we have learned about the concepts and applications of the API, let's look at two use cases where it has been successfully implemented: rule-based machine translation, and local tissue response to injury and trauma. We will discuss each of these in the following sections.

Rule-based machine translation

The steps to implement rule-based machine translation are as follows:
- Take the input text
- Parse it
- Tokenize it
- Compare the rules to extract the meaning of prepositional phrases
- Map words of the input language to words of the target language
- Frame the sentence in the target language

Local tissue response to injury and trauma

We can learn about the machine translation process from the way local tissue responds to injury and trauma: the human body follows a process similar to machine translation when dealing with injuries. We can roughly describe the process as follows:
- Hemorrhaging from lesioned vessels and blood clotting
- Blood-borne physiological components, leaking from the usually closed sanguineous compartment, are recognized as foreign material by the surrounding tissue since they are not tissue-specific
- Inflammatory response mediated by macrophages (and, more rarely, by foreign-body giant cells)
- Resorption of the blood clot
- Ingrowth of blood vessels and fibroblasts, and the formation of granulation tissue
- Deposition of an unspecific but biocompatible type of repair (scar) tissue by fibroblasts

Cloud Vision API

The Cloud Vision API is a powerful image analytics tool. It enables users to understand the content of an image, finding various attributes or categories of the image, such as labels, web, text, document, properties and safe search, and returning them as JSON. The labels field has many sub-categories, such as text, line, font, area, graphics, screenshots and points. The web field covers things like how much of the image is graphics, what percentage is text, how much of the area is empty or covered by text, and whether the image is partially or fully matched on the web. The document field consists of blocks of the image with detailed descriptions, the properties field shows the colors used in the image, and safe search flags any unwanted or inappropriate content in the image. The main features of the API are label detection, explicit content detection, logo and landmark detection, face detection, web detection and text extraction using Optical Character Recognition (OCR), with support for many languages. It does not support face recognition (identifying whose face it is). We can summarize the functionality of the API as extracting quantitative information from images: it takes an image as input and produces numbers and text as output.

The components used in the API are:
- Client Library
- REST API
- RPC API
- OCR language support
- Cloud Storage
- Cloud Endpoints

Applications of the API include:
- Industrial robotics
- Cartography
- Geology
- Forensics and the military
- Medicine and healthcare

Cost: free of charge for the first 1,000 units per month; after that, pay as you go.
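As a small, hedged illustration (not taken from the book), the following Python sketch runs label detection and OCR on a local image. It assumes the google-cloud-vision client library is installed and credentials are configured; the exact class names (for example vision.Image vs vision.types.Image) vary slightly between client versions, and photo.jpg is a hypothetical placeholder file.

```python
# Minimal sketch: label detection and text detection with the Cloud Vision API.
# Assumes: pip install google-cloud-vision, credentials via GOOGLE_APPLICATION_CREDENTIALS,
# and a local image file photo.jpg (placeholder).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Label detection: a ranked list of labels with confidence scores.
for label in client.label_detection(image=image).label_annotations:
    print(label.description, round(label.score, 2))

# Text detection (OCR): the first annotation holds the full detected text.
texts = client.text_detection(image=image).text_annotations
if texts:
    print(texts[0].description)
```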
Use cases

This technique can be successfully implemented in:
- Image detection using an Android or iOS mobile device
- Retinal image analysis (ophthalmology)

We will discuss each of these use cases below.

Image detection using an Android or iOS mobile device

The Cloud Vision API can be used to detect images from your smartphone. The steps are simple:
- Input the image
- Call the Cloud Vision API
- Run the detection methods for face, label, text, web and document properties
- Generate the response in the form of a phrase or string
- Populate the image details as a text view

Retinal image analysis (ophthalmology)

Similarly, the API can be used to analyze retinal images. The steps to implement this are as follows:
- Input the images of an eye
- Estimate the retinal biomarkers
- Process the image to remove the affected portions without losing necessary information
- Identify the location of specific structures
- Identify the boundaries of the object
- Find similar regions in two or more images
- Quantify the damage to the retinal portion of the image

You can learn a lot more about the machine learning capabilities of GCP on the official documentation page. If you found the above excerpt useful, make sure you check out the book 'Cloud Analytics with Google Cloud Platform' for more information on why GCP is a top cloud solution for machine learning and AI.

Read more: Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine) | How machine learning as a service is transforming cloud | Google announces the largest overhaul of their Cloud Speech-to-Text


The New AI Cold War Between China and the USA

Neil Aitken
28 Jun 2018
6 min read
The Cold War between the United States and Russia ended in 1991. However, considering the 'behind the scenes' behavior of the world's two current superpowers, China and the USA, another might just be beginning. This time around, many believe that the real battle doesn't relate to the trade deficit between the two countries, despite news stories detailing the escalation of trade tariffs. In the next decade and a half, the real battle will take place between China and the USA in the technology arena, and specifically in the area of Artificial Intelligence, or AI.

China's not shy about its AI ambitions

China has made its goals clear when it comes to AI. It has publicly announced its plan to be the world leader in Artificial Intelligence by 2030. The country has learned a hard lesson, having missed out on previous tech booms, notably the race for internet supremacy early this century. Now it is taking a far more proactive stance. The AI market is estimated to be worth $150 billion per year by 2030, slightly over a decade from now, and China has made very clear public statements that it wants it all. The US, in contrast, has a number of private companies striving to carve out a leadership position in AI, but no holistic policy. Quite the contrary, in fact: Trump's government says, "There is no need for an AI moonshot, and minimizing government interference is the best way to make sure the technology flourishes."

What makes China so dangerous as an AI threat?

China's background and current circumstances give it a set of valuable strategic advantages when it comes to AI. AI solutions are based primarily on two things. First, and of critical importance, is the amount of data available to 'train' an AI algorithm and the relative ease or difficulty of obtaining access to it. Second is the algorithm which sorts the data, looking for patterns and insights derived from research, which are used to optimize the AI tools that interpret it. China leads the world on both fronts.

China has more data: China's population is four times larger than the US's, giving it a massive data advantage. China has a total of 730 million daily internet users and 704 million smartphone mobile internet users. Each of those connected individuals uses their phone, laptop or tablet online each day, and those digital interactions leave logs of location, time, action performed and many other variables. In sum, China's huge population is constantly generating valuable data which can be mined for value.

Chinese regulations give public and private agencies easier access to this data: Few countries have exemplary records when it comes to human rights. Both Australia and the US, for example, have been rebuked by the UN for their treatment of immigration in recent years. Questions have been asked of China too. Some suggest that China's centralized government, and its alleged somewhat shady history when it comes to human rights, mean it can provide internet companies with more data, more easily, than their private equivalents in the US could dream of. Chinese cybersecurity laws require companies doing business in the country to store their data locally, and the government has placed one state representative on the board of each of the major tech companies, giving it direct, unfettered central government influence over the strategic direction and intent of those companies, especially when it comes to coordinating the distribution of the data they obtain. In the US, by contrast, data leakage is one of the most prominent news stories of 2018. Given Facebook's testimony to Congress over the Facebook/Cambridge Analytica data-sharing scandal, it would be hard to claim that US companies have access to data beyond what each individual company gathers as it competes to evolve AI solutions fastest.

It's more secretive: China protects its advantage by limiting other countries' access to its findings and information related to AI. At the same time, China takes advantage of the open publication of cutting-edge ideas generated by scientists in other parts of the world.

How China is doubling down on its natural advantage in AI solution development

A number of metrics show China's growing advantage in the area. China is investing more money in AI than the USA and is leading the world in the number of university-led research papers on AI being published. It overtook the US in AI funds allocation in 2015 and has been increasing investment in the area since (chart source: Wall Street Journal). China now also performs more research into AI than the US, as measured by the number of published, peer-reviewed scientific journal articles (chart source: HBR).

Why 'network effects' will decide the ultimate winner in the AI arms race

You won't see evidence of a Cold War in the behavior of world leaders. The handshakes are firm and the visits are cordial; everybody smiles when they meet at the G8. However, a look behind the curtain clearly shows a 21st-century arms race underway, led by AI-related investments in both countries. Network effects ensure that there is often only one winner in a fight for technological supremacy. Whoever has the 'best product' for a given application wins the most users, and the data obtained from those users' interactions with the tool is used to hone its performance, creating a virtuous circle. The result is evident in almost every sphere of tech: network effects explain why most people use only Google, why there's only one Facebook, and how Netflix has overtaken cable TV in the US as the primary source of video entertainment. Ultimately, there is likely to be only one winner in the war surrounding AI, too.

From a military perspective, the advantage China has in its starting point for AI solution development could be the deciding factor. As we've seen, China has more people, with more devices, generating more data. That is likely to help the country develop workable AI solutions faster. It ingests the hard-won advances that US data scientists develop and share, but does not share its own. Finally, it simply outspends and out-researches the US, investing more in AI than any other country. China's coordinated approach outpaces the US's market-based solution with every step. The country with the best AI solutions for each application will gain a 'winner takes all' advantage and the winning hand in the $300 billion game of AI market ownership.

Read more: We must change how we think about AI, urge AI founding fathers | Does AI deserve to be so Overhyped? | Alarming ways governments are using surveillance tech to watch you


Uber's kepler.gl, an open source toolbox for GeoSpatial Analysis

Pravin Dhandre
28 Jun 2018
4 min read
Geographic visualization, also called geovisualization, plays a pivotal role in areas like cartography, geographic information systems, remote sensing and global positioning systems. Uber, a peer-to-peer transportation network company headquartered in California, believes in data-driven decision making and keeps developing smart frameworks like deck.gl for exploring and visualizing advanced geospatial data at scale. Uber strives to make this data web-based and shareable in real time across its teams and customers. Earlier this month, Uber surprised the geospatial market with its newly open-sourced toolbox, kepler.gl, a geoanalytics tool for gaining quick insights from geospatial data through intuitive visualizations.

What exactly is kepler.gl?

kepler.gl is a visualization-rich web platform built on top of deck.gl, a WebGL-powered data visualization library providing real-time visual analytics of millions of geolocation points. The platform provides visual exploration of geographical data sets along with spatial aggregation of all the data points collected. It is said to be data-agnostic, with a single interface to convert your data into insightful visualizations.

https://www.youtube.com/watch?v=i2fRN4e2s0A

The platform is very user-friendly: you can simply drag CSV or GeoJSON files and drop them into the browser to visualize the dataset more intuitively. It supports different map layers, filtering options and aggregation features, through which you can render the final visualization as an animation or a video. The usability is high enough that you can apply all the available metrics to your data points without much hassle. The platform also performs well: you can get insights from your spatial data in less than 10 minutes, all in a single window. Another advantage of the framework is that it does not involve any coding, so non-technical users can also reap the benefits by churning out valuable insights from their data points.

The platform is also equipped with some advanced, complex features, such as a 2D cartographic plane, a separate dimension for altitude, and visible heights for hexagons and grids. Users seem happy with the new height feature, which helps them detect abnormalities and illicit traits in an aggregated map. With the filtering menu, analysts and engineers can compare their data and take a granular look at their data points. This option also helps in reading the histogram well, so one can easily detect outliers and make the dataset more reliable. It also has a feature to add playback to time-series data points, which makes it easy to get useful information out of real-time location systems.

The team at Uber looks at this toolbox with a long-term vision: they plan to keep adding new features and enhancements to make it highly functional, a single-click visualization dashboard. The team has already announced two major enhancements to the current functionality in the next couple of months:

- More robust exploration: there will be interlinkage between charts and maps, and support for custom charts, maps and widgets like the renowned BI tool Tableau, which will help analytics teams unveil deeper insights.
- New geo-analytical capabilities: to support massive datasets, there will be new features for data operations such as polygon aggregation, union of data points, and operations like joining and buffering.

Companies across different verticals, such as Airbnb, Atkins Global, Cityswifter and Mapbox, have found great value in kepler.gl and are looking to engineer their products to leverage the framework. The visualization specialists at these companies have already praised Uber for building such a simple yet fast platform with remarkable capabilities. To get started with kepler.gl, read the documentation available on GitHub and start creating visualizations to enhance your geospatial data analysis.

Read more: Top 7 libraries for geospatial analysis | Using R to implement Kriging – A Spatial Interpolation technique for Geostatistics data | Data Visualization with ggplot2


7 Popular Applications of Artificial Intelligence in Healthcare

Guest Contributor
26 Jun 2018
5 min read
With the advent of automation, artificial intelligence (AI) and machine learning, we hear about their applications regularly in news across industries. This has been especially true for healthcare, where hospitals, health insurance companies, healthcare units and others have been impacted by AI in more substantial and concrete ways than in other industries. In recent years, healthcare startups and life science organizations have ventured into artificial intelligence, and the field has become one of the areas most heavily invested in by VCs. Various organizations with ties to healthcare are leveraging advances in artificial intelligence algorithms for remote patient monitoring, medical imaging and diagnostics, and implementing newly developed, sophisticated methods and applications into their systems. Let's explore some of the most popular AI applications which have revamped the healthcare industry.

1. Proper maintenance and management of medical records: Assembling, analyzing and maintaining medical information and records is one of the most common applications of AI. With the coming of digital automation, robots are being used for collecting and tracing data for proper data management and analysis, bringing down manual labor to a considerable extent.

2. Computerized medical consultation and treatment paths: Medical consultation apps like DocsApp allow a user to talk to experienced specialist doctors on chat or call directly from their phone, in a private and secure manner. Users can report their symptoms in the app, and AI systems ensure they are connected to the right specialist physicians as per their medical history. AI also aids in treatment design, for example by analyzing data and making notes and reports from a patient's file, thereby helping choose the right customized treatment for the patient's medical history.

3. Eliminating monotonous manual labor: Various medical tasks, like analyzing X-ray reports, test reports and CT scans, can be executed by robots and other mechanical devices more accurately. Radiology is one discipline where human supervision and control have dropped to a substantial level due to the extensive use of AI.

4. Aiding drug manufacture and creation: Generally, billions of dollars are spent on developing pharmaceuticals through clinical trials, and it takes almost a decade or two to manufacture a life-saving drug. Now, with the arrival of AI, the drug creation procedure has been simplified and has become considerably more affordable as well. Even in the recent outbreak of the Ebola virus, AI was used for drug discovery, to redesign solutions, and to scan existing medicines for candidates to fight the outbreak.

5. Regular health monitoring: In the current era of digitization, there are wearable health trackers, like Garmin and Fitbit, which can monitor your heart rate and activity levels. These devices help users keep a close check on their health by setting up an exercise plan or reminding them to stay hydrated. All of this information can also be shared with a physician, through AI systems, to track the user's current health status.

6. Early and accurate detection of medical disorders: AI helps in spotting carcinogenic and cardiovascular disorders at an early stage and also aids in predicting health issues that people are likely to contract for hereditary or genetic reasons.

7. Enhanced medical diagnosis and medication management: Medical diagnosis and medication management are the ultimate data-based problems in the healthcare industry. IBM's Watson, a deep learning system, has simplified medical investigation and is being applied to oncology, specifically for cancer diagnosis. Previously, human doctors had to collect patient data, research it and conduct clinical trials; with AI, the manual effort has reduced considerably. For medication management, certain apps have been developed to monitor the medicines taken by a patient: the cellphone camera, in conjunction with AI technology, checks whether the patient is taking the medication as prescribed. This also helps in detecting serious medical problems, tracking patients' adherence to medication and monitoring participants' behavior in scientific trials.

To conclude, we are gradually embarking on a new era of cognitive technology powered by AI-based systems. In the coming years, we can expect AI to transform every area of the healthcare industry that it touches. Experts are constantly looking for ways to organize the existing structure and power up healthcare on the basis of new AI technology, the ultimate goals being to improve the patient experience, build better public health management and reduce costs by automating manual labor.

Author Bio: Maria Thomas is the Content Marketing Manager and Product Specialist at GreyCampus, with eight years of rich experience in professional certification courses like PMI Project Management Professional, PMI-ACP, Prince2, ITIL (Information Technology Infrastructure Library), Big Data, Cloud, Digital Marketing and Six Sigma.

Read more: Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions | How IBM Watson is paving the road for Healthcare 3.0


Top 14 Cryptocurrency Trading Bots - and one to forget

Guest Contributor
21 Jun 2018
9 min read
Men in rags became millionaires and rich people bit the dust within minutes, thanks to cryptocurrencies. According to research, over 1,500 cryptocurrencies are being traded globally across more than 6 million wallets, proving that digital currency is here not just to stay but to rule. The rise and fall of the crypto market isn't hidden from anyone, but the catch is that cryptocurrency still sells like hot cakes. According to Bill Gates, "The future of money is digital currency."

With thousands of digital currencies in circulation globally, crypto traders are immensely busy, and this is where cryptocurrency trading bots come into play. They ease the currency trading and research process, which means spending less effort and earning more money, not to mention the hours saved. According to Eric Schmidt, former CEO of Google, "Bitcoin is a remarkable cryptographic achievement and the ability to create something that is not duplicable in the digital world has enormous value."

The crucial part is whether the crypto trading bot is dependable and efficient enough to deliver optimum results within crunch time. To make sure you don't miss an opportunity to chip in cash to your digital wallet, here are the top 14 crypto trading bots, ranked according to performance:

1. Gunbot: Gunbot is a crypto trading bot that boasts detailed settings and is fit for beginners as well as professionals. Along with supporting custom strategies, it comes with a "Reversal Trading" feature. It enables continuous trading and works with almost all the major exchanges (Binance, Bittrex, GDAX, Poloniex, etc.). Gunbot is backed by thousands of users, which has created an engaging and helpful community. Gunbot offers different packages with price tags of 0.02 to 0.15 BTC, and you can always upgrade them. The bot comes with a lifetime license and is constantly updated.

2. Haasbot: Hassonline created this cryptocurrency trading bot in January 2014. Its algorithm is very popular among cryptocurrency geeks. It can trade over 500 altcoins and bitcoin on famous exchanges such as BTCC, Kraken, Bitfinex, Huobi, Poloniex, etc. You need to provide a little input about the currency and the bot will do all the trading work for you. Haasbot is customizable, has various technical indicator tools, and also recognizes candlestick patterns. This immensely popular trading bot is priced between 0.12 BTC and 0.32 BTC for three months.

3. Gekko: Gekko is a cryptocurrency trading bot that supports over 18 Bitcoin exchanges, including Bitstamp, Poloniex and Bitfinex. It is also a backtesting platform and is free to use: a fully fledged open source bot available on GitHub. Using the bot is easy, as it comes with basic trading strategies. The web interface of Gekko was written from scratch; it can run backtests and visualize the test results while you monitor your local data. Gekko keeps you updated on the go using plugins for Telegram, IRC, email and several other platforms. The trading bot works on all major operating systems, such as Windows, Linux and macOS, and you can even run it on a Raspberry Pi or on cloud platforms.

4. CryptoTrader: CryptoTrader is a cloud-based platform which allows users to create automated algorithmic trading programs in minutes. It is one of the most attractive crypto trading bots, and you won't need to install any unknown software to use it. A highly appreciated feature of CryptoTrader is its Strategy Marketplace, where users can trade strategies. It supports major currency exchanges such as Coinbase, Bitstamp and BTC-e, and supports live trading and backtesting. The company claims its cloud-based trading bots are unique compared with the bots currently available in the market.

5. BTC Robot: One of the earliest automated crypto trading bots, BTC Robot offers multiple packages for different memberships and software, and provides users with a downloadable version for Windows. The minimum robot plan is $149. BTC Robot sets up quite easily, but it is noted that its algorithms aren't great at predicting the markets. User mileage with BTC Robot varies heavily, leaving many with mediocre profits. With the trading bot's fluctuating evaluation, profits may go up or down drastically depending on the accuracy of the algorithm. On the bright side, the bot comes with a sixty-day refund policy that makes it a safe buy.

6. Zenbot: Another open source bot for bitcoin trading, Zenbot can be downloaded and its code modified. This trading bot hasn't had an update in the past few months, but it is still one of the few bots that can perform high-frequency trading while backing multiple assets at a time. Zenbot is a lightweight, artificially intelligent crypto trading bot and supports popular exchanges such as Kraken, GDAX, Poloniex, Gemini, Bittrex and Quadriga. Notably, according to its GitHub page, Zenbot version 3.5.15 bagged an ROI of 195% in a mere three months.

7. 3Commas: 3Commas is a well-known cryptocurrency trading bot that works with various exchanges, including Bitfinex, Binance, KuCoin, Bittrex, Bitstamp, GDAX, Huobi, Poloniex and YoBit. As it is a web-based service, you can monitor your trading dashboard on desktop, mobile and laptop computers. The bot works 24/7 and allows you to set take-profit targets and stop-losses, along with a social trading aspect that lets you copy the strategies used by successful traders. An ETF-like feature allows users to analyze, create and back-test a crypto portfolio and pick from the top-performing portfolios created by other people.

8. Tradewave: Tradewave is a platform that enables users to develop their own cryptocurrency trading bots along with automated trading on crypto exchanges. The bot trades in the cloud and uses Python to write the code directly in the browser. With Tradewave, you don't have to worry about downtime: the bot doesn't force you to keep your computer on 24x7, nor does it glitch if it loses its internet connection. Trading strategies are often shared by community members and can be used by others. However, it currently supports very few cryptocurrency exchanges, such as Bitstamp and BTC-E, though more exchanges will be added in the coming months.

9. Leonardo: Leonardo is a cryptocurrency trading bot that supports a number of exchanges such as Bittrex, Bitfinex, Poloniex, Bitstamp, OKCoin and Huobi. The team behind Leonardo is extremely active, and new upgrades, including plugins, are in the pipeline. Previously it cost 0.5 BTC, but currently it is available for $89 with a single-exchange license. Leonardo offers two trading strategy bots: a Ping Pong strategy and a Margin Maker strategy. The first lets users set the buy and sell prices, leaving all other decisions to the bot, while the Margin Maker strategy can buy and sell at prices adjusted according to the direction of the market. This trading bot stands out in terms of its GUI.

10. USI Tech: USI Tech is a trading bot that is mainly used for forex trading but also offers BTC packages. While the majority of trading bots require an initial setup and installation, USI uses a different approach and isn't controlled by the users. Users buy in through the firm's expert mining and bitcoin trade connections, and the USI Tech bot then promises a daily profit from the transactions and trades. To earn one percent of the capital daily, customers are advised to choose the feature-rich plans.

11. Cryptohopper: Cryptohopper is a 24/7 cloud-based trading bot, which means it doesn't matter whether you are at your computer or not. Its system enables users to trade on technical indicators, with a subscription to a signaler who sends buy signals. According to Cryptohopper's website, it is the first crypto trading bot integrated with professional external signals. The bot helps in leveraging bull markets and has a new dashboard area where users can monitor and configure everything. The dashboard also includes a configuration wizard for the major exchanges, including Bittrex, GDAX and Kraken.

12. My Bitcoin Bot: MBB is a team effort from Brad Sheridon and his proficient teammates, who are experts in cryptocurrency investment. My Bitcoin Bot is automated trading software that can be accessed by anyone who is ready to pay for it. While the monthly plan is $39 a month, the yearly subscription for this auto-trader bot is available for $297. My Bitcoin Bot comes with heaps of advantages, such as unlimited technical support, free software updates and access to a trusted brokers list.

13. Crypto Arbitrager: A standalone application that operates on a dedicated server, Crypto Arbitrager can run its robots even when your PC is off. The developers behind this cryptocurrency trading bot claim that the software uses code integration of financial time series. Users can make money from the difference in rates between Litecoin and Bitcoin. By implementing the advanced strategies of hedge funds, the trading bot manages users' savings regardless of the state of the cryptocurrency market.

14. Crypto Robot 365: Crypto Robot 365 automatically trades your digital currency. It buys and sells popular cryptocurrencies such as Ripple, Bitcoin, Ethereum, Litecoin and Monero. Rather than a signup fee, this platform charges its commission on a per-trade basis. The platform is FCA-regulated and offers a realistic, achievable win ratio. Users can tweak the system according to their trading needs. Moreover, it has an established trading history and even offers risk management options.

Down the line

While cryptocurrency trading is not a piece of cake, trading with currency bots can be confusing for many. The aforementioned trading bots are used by many, and each is backed by years of extensive hard work. With reliability, trustworthiness, smart work and proactiveness being the top reasons for choosing any cryptocurrency trading bot, picking a trading bot is a hefty task. I recommend you experiment with a small amount of money first and, if your fate gets off to a shining start, pick the trading bot that perfectly suits your way of making money via cryptocurrency.

About the Author: Rameez Ramzan is a Senior Digital Marketing Executive at Cubix, a mobile app development company. He specializes in link building, content marketing, and site audits to help sites perform better. He is a tech geek and loves to dwell on tech news.
Read more: Crypto-ML, a machine learning powered cryptocurrency platform | Beyond the Bitcoin: How cryptocurrency can make a difference in hurricane disaster relief | Apple changes app store guidelines on cryptocurrency mining


Computer vision is growing quickly. Here's why.

Aaron Lazar
12 Jun 2018
6 min read
Computer Vision is one of those technologies that has grown in leaps and bounds over the past few years. If you look back 10 years, that wasn't the case, as CV was more a topic of academic interest. Now, however, computer vision is clearly both a driver and a beneficiary of the much-discussed rise of Artificial Intelligence. Through this article, we'll understand the factors that have sparked the rise of Computer Vision.

A billion-dollar market

You heard it right: Computer Vision is a billion-dollar market, thanks to the likes of Intel, Amazon and Netflix investing heavily in the technology's development. And from the way events are unfolding, the market is expected to hit a record $17 billion by 2023, a cumulative growth rate of over 7% per year from 2018 to 2023. This is a joint figure for both the hardware and software components related to Computer Vision.

Under the spotlight

Let's talk a bit about a few companies that are already taking advantage of Computer Vision and are benefiting from it.

Intel: There are several large organisations investing heavily in Computer Vision. Last year, we saw Intel invest $15 billion towards acquiring Mobileye, an Israeli auto startup. Intel published findings stating that the autonomous vehicle market itself would rise to $7 trillion by 2050. The autonomous vehicle industry will be one of the largest implementers of computer vision technology: these vehicles will use Computer Vision to "see" their surroundings and communicate with other vehicles.

Netflix: Netflix, on the other hand, is using Computer Vision for more creative purposes. With the rise of Netflix's original content, the company is investing in Computer Vision to harvest static image frames directly from the source videos to provide a flexible source of raw artwork, which is used for digital merchandising. For example, within a single episode of Stranger Things there are nearly 86k static video frames that would have had to be analysed by human teams to identify the most appropriate stills to feature. This meant first going through each of those 86k images, then understanding what worked for viewers of the previous episode, and then applying that learning to the selection of future images. Need I estimate how long that would have taken? Now, Computer Vision performs this task seamlessly, with much higher accuracy than humans.

Pinterest: Pinterest, the popular social networking application, sees millions of images, GIFs and other visuals shared every day. In 2017, it released an application feature called Lens that allows users to use their phone's camera to search for similar-looking decor, food and clothing in the real world. Users can simply point their cameras at an image and Pinterest will show them similar styles and ideas. Recent reports reveal that Pinterest's revenue has grown by a staggering 58%.

National surveillance via CCTV: The world's biggest AI startup, SenseTime, provides China with the world's largest and most sophisticated CCTV network. With over 170 million CCTV cameras, government authorities and police departments are able to seamlessly identify people; officers do this by wearing smart glasses that have facial recognition capabilities. Bring this technology to Dubai and you've got a supercop in a supercar! The nationwide surveillance project, named Skynet, began as early as 2005, although recent advances in AI have given it a boost. Reading through discussions like these is real fun. People used to quip that such "fancy" machines were only for the screen; if only they knew that such machines would be a reality just a few years later. Clearly, computer vision is one of the most highly valued commercial applications of machine learning, and when integrated with AI, it's an offer only a few can resist.

Star acquisitions that matter

Several acquisitions have taken place in the field of Computer Vision in the past two years alone, the most notable of them being Intel's acquisition of Movidius to the tune of $400 million. Here are some of the others that have happened since 2016:

- Twitter acquires Magic Pony Technology for $150 million
- Snap Inc acquires Obvious Engineering for $47 million
- Salesforce acquires Metamind for $32.8 million
- Google acquires Eyefluence for $21.6 million

This shows the potential of the computer vision market and how big players are racing to dive deep into the technology.

Three little things driving computer vision

I would say there are three clear growth factors contributing to the rise of Computer Vision: deep learning, advancements in hardware, and the growth of datasets.

Deep Learning: Advancements in the field of Deep Learning are bound to boost Computer Vision. Deep Learning algorithms are capable of processing tonnes of images, much more accurately than humans. Take feature extraction, for example. The primary pain point with classical feature extraction is that you have to choose which features to look for in a given image; this becomes cumbersome, and almost impossible, as the number of classes you are trying to define grows, because there are so many features and a plethora of parameters that have to be fine-tuned. Deep Learning simplifies this process for you (see the short sketch at the end of this piece for an illustration).

Advancements in hardware: With new hardware like GPUs capable of processing petabytes of data, algorithms are capable of running faster and more efficiently. This has led to advancements in real-time processing and vision capabilities. Pioneering hardware manufacturers like NVIDIA and Intel are in a race to create more powerful and capable hardware to support deep learning capabilities for Computer Vision.

Growth of datasets: Training Deep Learning algorithms isn't a daunting task anymore. There are plenty of open source datasets that you can choose from to train your algorithms, and the more data, the better the training and accuracy. Here are some of the most notable datasets for computer vision:

- ImageNet, a massive dataset with 15 million images
- Open Images, with 9 million images
- Microsoft Common Objects in Context (COCO), with around 330K images
- CALTECH-101, with approximately 9,000 images

Where's the money at?

The job market for Computer Vision is on the rise too, with Computer Vision featuring at #3 on the list of top jobs in 2018, according to Indeed (source: Indeed.com). Organisations are looking for Computer Vision Engineers who are well versed in writing efficient algorithms for handling large amounts of data.

So is it the right time to invest in, or perhaps learn, Computer Vision? You bet it is! It's clear that Computer Vision is a rapidly growing market and will see sustained growth for the next few years. If you're just planning to start out, or even if you're already competent in using tools for Computer Vision, here are some resources to help you skill up with popular CV tools and techniques:

- Introducing Intel's OpenVINO computer vision toolkit for edge computing
- Top 10 Tools for Computer Vision
- Computer Vision with Keras, Part 1
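As promised in the deep learning section above, here is a minimal, hedged sketch of the "deep learning replaces hand-crafted feature extraction" idea: a pretrained convolutional network (ResNet50 via Keras, assuming TensorFlow is installed) used as a generic feature extractor, with no manual feature selection. The file name is a hypothetical placeholder, not from the article.

```python
# Minimal sketch: use a pretrained CNN as a generic image feature extractor.
# Assumes: pip install tensorflow, and a local image file photo.jpg (placeholder).
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.preprocessing import image

# Pretrained ImageNet weights, classification head removed, global average pooling:
# the network maps any image to a fixed-length feature vector.
model = ResNet50(weights="imagenet", include_top=False, pooling="avg")

img = image.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

features = model.predict(x)   # shape (1, 2048): features learned, not hand-picked
print(features.shape)
```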

7 of the best machine learning conferences for the rest of 2018

Richard Gall
12 Jun 2018
8 min read
We're just about half way through the year - scary, huh? But there's still time to attend a huge range of incredible machine learning conferences in 2018. Given that in this year's Skill Up survey developers working in every field told us that they're interested in learning machine learning, it will certainly be worth your while (and money). We fully expect this year's machine learning conference circuit to capture the attention of those beyond the analytics world.

The best machine learning conferences in 2018

But which machine learning conferences should you attend for the rest of the year? There's a lot out there, and they're not always cheap. Let's take a look at seven of the best machine learning conferences for the rest of this year.

AI Summit London
When and where? June 12-14, 2018, Kensington Palace and ExCeL Centre, London, UK.
What is it? AI Summit is all about AI and business - it's as much for business leaders and entrepreneurs as it is for academics and data scientists. The summit covers a lot of ground, from pharmaceuticals to finance to marketing, but the main idea is to explore the incredible ways Artificial Intelligence is being applied to a huge range of problems.
Who is speaking? According to the event's website, there are more than 400 speakers at the summit. The keynote speakers include a number of impressive CEOs, including Patrick Hunger, CEO of Saxo Bank, and Helen Vaid, Global Chief Customer Officer of Pizza Hut.
Who's it for? This machine learning conference is primarily for anyone who would like to consider themselves a thought leader. Don't let that put you off, though; with a huge number of speakers from across the business world, it is a great opportunity to see what the future of AI might look like.

ML Conference, Munich
When and where? June 18-20, 2018, Sheraton Munich Arabella Park Hotel, Munich, Germany.
What is it? Munich's ML Conference is also about the applications of machine learning in the business world, but it's a little more practically minded than AI Summit - it's more about how to actually start using machine learning from a technological standpoint.
Who is speaking? Speakers at ML Conference are researchers and machine learning practitioners. Alison Lowndes from NVIDIA will be speaking, likely offering some useful insight on how NVIDIA is helping make deep learning accessible to businesses; Christian Petters, solutions architect at AWS, will also be speaking on the important area of machine learning in the cloud.
Who's it for? This is a good conference for anyone starting to become acquainted with machine learning. Obviously data practitioners will be the core audience here, but sysadmins and app developers starting to explore machine learning would also benefit from this sort of machine learning conference.

O'Reilly AI Conference, San Francisco
When and where? September 5-7, 2018, Hilton Union Square, San Francisco, CA.
What is it? According to O'Reilly's page for the event, this conference is being run to counter those conferences built around academic AI research. It's geared (surprise, surprise) towards the needs of businesses. Of course, there's a little bit of aggrandizing marketing spin there, but the idea is fundamentally a good one: it's all about exploring how cutting-edge AI research can be used by businesses. It sits somewhere between the two above - practical enough to be of interest to engineers, but with enough blue-sky scope to satisfy the thought leaders.
Who is speaking? O'Reilly have some great speakers here. There's someone else making an appearance for NVIDIA - Gaurav Agarwal, who's heading up the company's automated vehicles project. There's also Sarah Bird from Facebook, who will likely have some interesting things to say about how her organization is planning to evolve its approach to AI over the years to come.
Who is it for? This is for those working at the intersection of business and technology. Data scientists and analysts grappling with strategic business questions, and CTOs and CMOs beginning to think seriously about how AI can change their organization, will all find something here.

O'Reilly Strata Data Conference, New York
When and where? September 12-13, 2018, Javits Center, New York, NY.
What is it? O'Reilly's Strata Data Conference is slightly more big-data focused than its AI Conference. Yes, it will look at AI and deep learning, but it's going to tackle those areas from a big data perspective first and foremost. It's more established than the AI Summit (it actually started back in 2012 as Strata + Hadoop World), so there's a chance it will have a slightly more conservative vibe. That could be a good or a bad thing, of course.
Who is speaking? This is one of the biggest big data conferences on the planet. As you'd expect, the speakers are from some of the biggest organizations in the world, from Cloudera to Google and AWS. There's a load of names we could pick out, but the one we're most excited about is Varant Zanoyan from Airbnb, who will be talking about Zipline, Airbnb's new data management platform for machine learning.
Who's it for? This is a conference for anyone serious about big data. There's going to be a considerable amount of technical detail here, so you'll probably want to be well acquainted with what's happening in the big data world.

ODSC Europe 2018, London
When and where? September 19-22, 2018, Novotel West, London, UK.
What is it? The Open Data Science Conference is very much about the open source communities that are helping push data science, machine learning and AI forward. There's certainly a business focus, but the event is as much about collaboration and ideas. The organizers are keen to stress how mixed the crowd is: from data scientists to web developers, academics and business leaders, ODSC is all about inclusivity. It's also got a clear practical bent. Everyone will want different things from the conference, but learning is key here.
Who is speaking? ODSC haven't yet listed speakers, simply stating on their website that "our speakers include some of the core contributors to many open source tools, libraries, and languages". This indicates the direction of the event - community driven, and all about the software behind it.
Who's it for? More than any of the other machine learning conferences listed here, this is probably the one that really is for everyone. Yes, it might be more technical than theoretical, but it's designed to bring people into projects. Speakers want to get people excited, whether they're an academic, an app developer or a CTO.

MLConf SF, San Francisco
When and where? November 14, 2018, Hotel Nikko, San Francisco, CA.
What is it? MLConf has a lot in common with ODSC. The focus is on community and inclusivity rather than being overtly corporate. However, it is very much geared towards cutting-edge research from people working in industry and academia, which gives it a slightly more specialist angle than ODSC.
Who is speaking? At the time of writing, MLConf are on the lookout for speakers - if you're interested, submit an abstract (guidelines can be found on the event's website). The event does, however, have Uber's Senior Data Science Manager Franziska Bell scheduled to speak, which is sure to be an interesting discussion of the organization's current thinking and the challenges that come with the huge amounts of data at its disposal.
Who's it for? This is an event for machine learning practitioners and students. Level of expertise isn't strictly an issue - an inexperienced data analyst could get a lot from this. With some key figures from the tech industry, there will certainly be something for those in leadership and managerial positions too.

AI Expo, Santa Clara
When and where? November 28-29, 2018, Santa Clara Convention Center, Santa Clara, CA.
What is it? Santa Clara's AI Expo is one of the biggest machine learning conferences. With four different streams - AI technologies, AI and the consumer, AI in the enterprise, and data analytics for AI and IoT - the event organizers are trying to make their coverage pretty comprehensive.
Who is speaking? The event's website boasts 75+ speakers. The most interesting include Elena Grewal, Airbnb's Head of Data Science, Matt Carroll, who leads developer relations at Google Assistant, and LinkedIn's Senior Director of Data Science, Xin Fu.
Who is it for? With so much on offer, this has wide appeal, from marketers to data analysts. However, with so much going on, you do need to know what you want to get out of an event like this - so be clear on what AI means to you and what you want to learn.

Did we miss an important machine learning conference? Are you attending any of these this year? Let us know in the comments - we'd love to hear from you.


5 JavaScript machine learning libraries you need to know

Pravin Dhandre
08 Jun 2018
3 min read
Technologies like machine learning, predictive analytics, natural language processing and artificial intelligence are the most trending and innovative technologies of 21st century. Whether it is an enterprise software or a simple photo editing application, they all are backed and rooted in machine learning technology making them smart enough to be a friend to humans. Until now, the tools and frameworks that were capable of running machine learning were majorly developed in languages like Python, R and Java. However, recently the web ecosystem has picked up machine learning into its fold and is achieving transformation in web applications. Today in this article, we will look at the most useful and popular libraries to perform machine learning in your browser without the need of softwares, compilers, installations and GPUs. TensorFlow.js GitHub: 7.5k+ stars With the growing popularity of TensorFlow among machine learning and deep learning enthusiasts, Google recently released TensorFlowjs, the JavaScript version of TensorFlow. With this library, JavaScript developers can train and deploy their machine learning models faster in browser without much hassle. This library is speedy, tensile, scalable and a great start to practically experience the taste of machine learning. With TensorFlow.js, importing existing models and retraining pretrained model is a piece of cake. To check out examples on tensorflow.js, visit GitHub repository. ConvNetJS GitHub: 9k+ stars ConvNetJS provides neural networks implementation in JavaScript with numerous demos of neural networks available on GitHub repository. The framework has a good number of active followers who are programmers and coders. The library provides support to various neural network modules, and popular machine learning techniques like Classification and Regression. Developers who are interested in getting reinforcement learning onto the browser or in training complex convolutional networks, can visit the ConvNetJS official page. Brain.js GitHub: 8k+ stars Brain.js is another addition to the web development ecosystem that brings smart features onto the browser with just a few lines of code. Using Brain.js, one can easily create simple neural networks and can develop smart functionality in their browser applications without much of the complexity. It is already preferred by web developers for client side applications like in-browser games or placement of Ads, or for character recognition. You can checkout its GitHub repository to see a complete demonstration of approximating XOR function using brain.js. Synaptic GitHub: 6k+ stars Synaptic is a well-liked machine learning library for training recurrent neural networks as it has in-built architecture-free generalized algorithm. Few of the in-built architectures include multilayer perceptrons, LSTM networks and Hopfield networks. With Synaptic, you can develop various in-browser applications such as Paint an Image, Learn Image Filters, Self-Organizing Map or Reading from Wikipedia. Neurojs GitHub: 4k+ stars Another recently developed framework especially for reinforcement learning tasks in your browser, is neurojs. It mainly focuses on Q-learning, but can be used for any type of neural network based task whether it is for building a browser game or an autonomous driving application. Some of the exciting features this library has to offer are full-stack neural network implementation, extended support to reinforcement learning tasks, import/export of weight configurations and many more. 
To see the complete list of features, visit the neurojs GitHub page.

Read more:
How should web developers learn machine learning?
NVIDIA open sources NVVL, a library for machine learning training
Build a foodie bot with JavaScript

A tale of two tools: Tableau and Power BI

Natasha Mathur
07 Jun 2018
11 min read
Business professionals are on a constant lookout for a powerful yet cost-effective BI tool to ramp up operational efficiency within their organizations. Two tools that are currently front-runners in the self-service Business Intelligence field are Tableau and Power BI. Both tools, although quite similar in nature, offer different features. Most experts say that the right tool depends on the size, needs and budget of an organization, but when compared closely, one of them clearly beats the other in terms of its features. Now, instead of comparing the two based on their pros and cons, we'll let Tableau and Power BI take over from here and argue their own case, covering topics from features and usability to pricing and job opportunities. For those of you who aren't interested in a good story, there is a summary of the key points at the end of the article comparing the two tools.

The clock strikes 2 o'clock for a meeting on a regular Monday afternoon. Tableau, a market leader in Business Intelligence and data analytics, and Power BI, another standout performer and Tableau's opponent in the field of Business Intelligence, head off for a meeting with the Vendor. The meeting where the Vendor is finally expected to decide which tool their organization should pick for its BI needs.

With Power BI and Tableau joining the Vendor, the conversation starts on a light note, with both tools introducing themselves to the Vendor.

Tableau: Hi, I am Tableau. I make it easy for companies all around the world to see and understand their data. I provide different visualization tools, drag & drop features, metadata management, data notifications, etc, among other exciting features.

Power BI: Hello, I am Power BI. I am a cloud-based analytics and Business Intelligence platform. I provide a full overview of critical data to organizations across the globe. I allow companies to easily share data by connecting the data sources and helping them create reports. I also help create scalable dashboards for visualization.

The Vendor nods convincingly in agreement while making notes about the two tools.

Vendor: May I know what each one of you offers in terms of visualization?

Tableau: Sure, I let users create 24 different types of baseline visualizations, including heat maps, line charts and scatter plots. Not trying to brag, but you don't need intense coding knowledge to develop high-quality and complex visualizations with me. You can also ask me 'what if' questions regarding the data. I also provide unlimited data points for analysis.

The Vendor seems noticeably pleased with Tableau's reply.

Power BI: I allow users to create visualizations by asking questions in natural language using Cortana. Uploading data sets is quite easy with me. You can select a wide range of visualizations as blueprints. You can then insert data from the sidebar into the visualization.

Tableau passes a glittery, infectious smirk and excitedly throws a question towards Power BI.

Tableau: Wait, what about data points? How many data points can you offer?

The Vendor looks at Power BI with a straight face, waiting for a reply.

Power BI: For now, I offer 3500 data points for data analysis.

Vendor: Umm, okay, but won't the 3500 data point cap limit the effectiveness for the users?

Tableau cuts off Power BI as it tries to answer and replies back to the Vendor with a distinct sense of rush in its voice.

Tableau: It will!
Due to the 3500 data point limit, many visuals can't display a large amount of data, so filters are added. As the data gets filtered automatically, outliers get missed.

Power BI looks visibly irritated after Tableau's response and looks to the Vendor for a sliver of hope, while the Vendor seems more inclined towards Tableau.

Vendor: Okay. Noted. What can you tell me about your compatibility with data sources?

Tableau: I support hundreds of data connectors. This includes online analytical processing (OLAP) and big data options (such as NoSQL and Hadoop), as well as cloud options. I am capable of automatically determining the relationships between data when it is added from multiple sources. I also let you modify data links or create them manually based on your company's preferences.

Power BI: I help connect to users' external sources including SAP HANA, JSON, MySQL, and more. When data is added from multiple sources, I can automatically determine the relationships between them. In fact, I let users connect to Microsoft Azure databases, third-party databases, files and online services like Salesforce and Google Analytics.

Vendor: Okay, that's great! Can you tell me what your customer support is like?

Tableau jumps in to answer the question first yet again.

Tableau: I offer direct support by phone and email. Customers can also log in to the customer portal to submit a support ticket. Subscriptions are provided in three different categories, namely Desktop, Server and Online, and there are support resources for each subscription version of the software. Users are free to access the support resources depending on the version of the software they use. I provide getting-started guides and best practices, as well as guidance on how to use the platform's top features. A user can also access the Tableau community forum and attend training events.

The Vendor seems highly pleased with Tableau's answer and continues scribbling in his notebook.

Power BI: I offer faster customer support to users with a paid account. However, all users can submit a support ticket. I also provide robust support resources and documentation, including learning guides, a user community forum and samples of how my partners use the platform. Customer support functionality is limited for users with a free Power BI account, though.

Vendor: Okay, got it! Can you tell me about your learning curves? Do you get along well with novice users too, or just professionals?

Tableau: I am a very powerful tool and data analysts around the world are my largest customer base. I must confess, I am not quite intuitive in nature, but given the powerful visualization features that I offer, I see no harm in people getting themselves acquainted with data science a bit before they decide to choose me. In a nutshell, it can be a bit tricky for novices to transform and clean data and visualizations with me.

Tableau looks at the Vendor for approval, but he is just busy making notes.

Power BI: I am the citizen data scientist's ally. From common stakeholders to data analysts, there are features for almost everyone on board as far as I am concerned. My interface is quite intuitive and relies more on drag and drop features to build visualizations. This makes it easy for users to play around with the interface a bit. It doesn't matter whether you're a novice or a pro, there's space for everyone here.

A green monster of jealousy takes over Tableau as it scoffs at Power BI.

Tableau: You are only compatible with Windows.
I, on the other hand, am compatible with both Windows and Mac OS. And let's be real, it's tough to do even simple calculations with you, such as creating a percent-of-total variable, without learning the DAX language.

As the flood of anger rises in Power BI, the Vendor interrupts them.

Vendor: May I just ask one last question before I get ready with the results? How heavy are you on my pockets?

Power BI: I offer three subscription plans, namely Desktop, Pro, and Premium. Desktop is the free version. Pro is for professionals and starts at $9.99 per user per month. You get additional features such as data governance, content packaging, and distribution. I also offer a 60-day trial with Pro. Now, coming to Premium, it is built on capacity pricing. What that means is that I charge you per node per month. You get even more powerful features, such as a premium version cost calculator for custom quote ranges, based on the number of Pro, frequent and occasional users that are active on an account's Premium version.

The Vendor seems a little dazed as he continues making notes.

Tableau: I offer three subscriptions as well, namely Desktop, Server, and Online. Prices are charged per user per month but billed annually. The Desktop category comes with two options: Personal edition (starting at $35) and Professional edition (starting at $70). The Server option offers on-premises or public cloud capabilities, starting at $35, while the Online version is fully hosted and starts at $42. I also offer a free version, namely Tableau Public, with which users can create visualizations, save them and share them on social media or their blog. There is a 10GB storage limit, though. I also offer a 14-day free trial so that users can get a demo before the purchase.

Tableau and Power BI both wait anxiously for the Vendor's reply as he continues scribbling in his notebook while making quizzical expressions.

Vendor: Thank you so much for attending this meeting. I'll be right back with the results. I just need to check on a few things.

Tableau and Power BI watch the Vendor leave the room, and heavy anticipation fills the air.

Tableau: Let's be real, I will always be the preferred choice for data visualization.

Power BI: We shall see about that. Don't forget that I also offer data visualization tools along with predictive modeling and reporting.

Tableau: I have a better job market!

Power BI: What makes you say that? I think you need to re-check Gartner's Magic Quadrant, as I am right beside you on that.

Power BI looks at Tableau with a hot rush of astonishment as the Vendor enters the room. The Vendor smiles at Tableau as he continues the discussion, which makes Power BI slightly uneasy.

Vendor: Tableau and Power BI, you both offer great features, but as you know I can only pick one of you for the organization.

An air of suspense fills the room.

Vendor: Tableau, you are a great data visualization tool with versatile built-in features such as user interface layout, visualization sharing, and intuitive data exploration. Power BI, you offer real-time data access along with some pretty handy drag and drop features. You help create visualizations quickly and give even novice users access to powerful data analytics without any prior knowledge.

The tension notched up even more as the Vendor kept talking.

Vendor: Tableau! You're a great data visualization tool, but the price point is quite high. This is one of the reasons why I choose Microsoft Power BI.
Microsoft Power BI offers data visualization, connects to external data sources, lets you create reports, and more, all at a low cost. Hence, Power BI, welcome aboard!

A sense of infinite peace and pride emanates from Power BI. The meeting ends with Power BI and the Vendor shaking hands as Tableau silently leaves the room.

We took a peek into the Vendor's notebook and saw this comparison table.

Criteria                                  | Power BI  | Tableau
Visualization capabilities                | Good      | Very Good
Compatibility with multiple data sources  | Good      | Good
Customer support quality                  | Good      | Good
Learning curve                            | Very Good | Good
System compatibility                      | Windows   | Windows & Mac OS
Cost                                      | Low       | Very high
Job market                                | Good      | Good
Analytics                                 | Very Good | Good

Both Business Intelligence tools are in demand by organizations all over the world. Tableau is fast and agile. It provides a comprehensible interface along with visual analytics where users have the ability to ask and answer questions. Its versatility and success stories make it a good choice for organizations willing to invest in a higher-budget Business Intelligence tool. Power BI, on the other hand, offers features very similar to Tableau's, including data visualization, predictive modeling, reporting and data prep, at one of the lowest subscription prices in the market today. Nevertheless, upgrades are being made to both Business Intelligence tools, and we can only wait to see what more is to come in these technologies.

Read more:
Building a Microsoft Power BI Data Model
"Tableau is the most powerful and secure end-to-end analytics platform": An interview with Joshua Milligan
Unlocking the secrets of Microsoft Power BI

Python, Tensorflow, Excel and more - Data professionals reveal their top tools

Amey Varangaonkar
06 Jun 2018
4 min read
Data professionals are constantly on the lookout for the best tools to simplify their data science tasks, be it data acquisition, machine learning, or visualizing the results of the analysis. With so much on their plate already, having robust, efficient tools in the arsenal helps a lot in reducing procedural complexity, and the time taken to do these tasks is considerably reduced as well. But what tools do data professionals rely on to make their lives easier? Thanks to the Skill Up 2018 survey that we recently conducted, we have some interesting observations to share with you! Read the Skill Up report in full. Sign up to our weekly newsletter and download the PDF for free.

Key takeaways:
- Python is the most widely used programming language by data professionals
- Python finds wide adoption across all spectrums of data science, including data analysis, machine learning, deep learning and data visualization
- Excel continues to be favored by data professionals because of its effectiveness and simplicity
- R is slowly falling behind Python in the race to data science supremacy

Now, let's look at these observations in more depth.

Python continues its ascension as the top dog

Python's rise in popularity as well as adoption over the last 3 years has been quite staggering, to say the least. Python's ease of use, powerful analytical and machine learning capabilities, as well as its applications outside of data science, make it quite a popular language in the tech community. It thus comes as no surprise that it stood out from the others and was the undisputed choice of language for the data pros. R, on the other hand, seems to be finding it difficult to play catch-up to Python, with less than half the number of votes, despite being the tool of choice for many statisticians and researchers. Is the paradigm shift well and truly on? Is Python edging R out for good?

Source: Packt Skill-Up Survey 2018

It is interesting to see SQL at number 2, but considering the number of people working with databases these days, it doesn't come as a surprise. Also, JavaScript is preferred over Java, indicating the rising need for web-based dashboards for effective Business Intelligence.

Data professionals still love Excel, but Python libraries are taking over

Microsoft Excel has traditionally been a highly popular tool for data analysis, especially when dealing with data with hundreds and thousands of records. Excel's perfect setting for data manipulation and charting continues to be the reason why people still use it for basic-level data analysis, as indicated by our survey. Almost 53% of the respondents prefer having Excel in their analysis toolkit for their day-to-day tasks.

Top libraries, tools and frameworks used by data professionals (Source: Packt Skill-Up Survey 2018)

The survey also indicated Python's rising dominance in the data science domain, with 8 out of the 10 most-used tools for data analysis being Python-based. Python's offerings for data wrangling, scientific computing, machine learning and deep learning make its libraries the obvious choice for data professionals. Here's a quick look at 15 useful Python libraries to make the above-mentioned data science tasks easier.

Tensorflow and PyTorch are in demand

AI's popularity is soaring with every passing day as it finds applications across all types of industries and business domains.
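Part of the reason these frameworks keep coming up is how little code a working model takes. As a quick, hedged illustration (a hypothetical toy classifier using the Keras API bundled with TensorFlow; the data, shapes and hyperparameters are invented and not taken from the survey):

```python
# Minimal sketch of a Keras classifier; data, shapes and hyperparameters are illustrative only.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 20)).astype("float32")   # 1000 samples, 20 features
y = (X[:, 0] + X[:, 1] > 0).astype("int32")          # synthetic binary label

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print("Training accuracy:", model.evaluate(X, y, verbose=0)[1])
```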
In our survey, we found machine learning and deep learning to be two of the most valuable skills for any data scientist, as can be seen from the word cloud below:

Word cloud of the most valued skills by data professionals (Source: Packt Skill-Up Survey)

Python's two popular deep learning frameworks, TensorFlow and PyTorch, have thus gained a lot of attention and adoption in recent times. Along with Keras, another Python library, these are the frameworks most used by data scientists and ML developers for building efficient machine learning and deep learning models.

Which languages and libraries do you use for your everyday data science tasks? Do you agree with your peers' choice of tools? Feel free to let us know!

Read more:
Data cleaning is the worst part of data analysis, say data scientists
30 common data science terms explained
Top 10 deep learning frameworks

Top languages for Artificial Intelligence development

Natasha Mathur
05 Jun 2018
11 min read
Artificial Intelligence is one of the hottest technologies currently. From work colleagues to your boss, chances are that most (yourself included) wish to create the next big AI project. Artificial Intelligence is a vast field, and with so many languages to choose from, it can get a bit difficult to pick the one that will bring the most value to your project. For anyone wanting to dive into the AI space, the initial stage of choosing the right language can really decelerate the development process. Moreover, making the right choice of language for Artificial Intelligence development depends on your skills and needs.

Following are the top 5 programming languages for Artificial Intelligence development:

1. Python

Python, hands down, is the number one programming language when it comes to Artificial Intelligence development. Not only is it one of the most popular languages in the field of data science, machine learning, and Artificial Intelligence in general, it is also popular among game developers, web developers, cybersecurity professionals and others. It offers a ton of libraries and frameworks in machine learning and deep learning that are extremely powerful and essential for AI development, such as TensorFlow, Theano, Keras and scikit-learn. Python is the go-to language for AI development for most people, novices and experts alike.

Pros:
- It's quite easy to learn due to its simple syntax. This helps in implementing AI algorithms in a quick and easy manner.
- Development is faster in Python as compared to Java, C++ or Ruby.
- It is a multi-paradigm language and supports object-oriented, functional and procedure-oriented programming styles.
- Python has a ton of libraries and tools to offer. Python libraries such as scikit-learn, NumPy and CNTK are quite trending.
- It is a portable language and can be used on multiple operating systems, namely Windows, Mac OS, Linux, and Unix.

Cons:
- Integrating AI systems with non-Python infrastructure can be awkward. For example, for an infrastructure built around Java, it would be advisable to build deep learning models in Java rather than Python.

If you are a data scientist, a machine learning developer or just a domain expert like a bioinformatician who hasn't yet learned a programming language, Python is your best bet. It is easy to learn, translates equations and logic well into a few lines of code, and has a rich development ecosystem.

2. C++

C++ comes second on the list of top programming languages for Artificial Intelligence development. There are cases where C++ supersedes Python, even though it is not the most common language for AI development. For instance, when working in an embedded environment where you don't want a lot of overhead from a Java Virtual Machine or a Python interpreter, C++ is a perfect choice. C++ also has some popular libraries and frameworks for AI, machine learning and deep learning, namely mlpack, Shark, OpenNN, Caffe and Dlib.

Pros:
- Execution in C++ is very fast, which is why it can be the go-to language for AI projects that are time-sensitive.
- It offers substantial use of algorithms.
- It uses statistical AI techniques quite effectively.
- Data hiding and inheritance make it possible to reuse existing code during the development process.
- It is also suitable for machine learning and neural networks.

Cons:
- It follows a bottom-up approach, and this makes it very complex for large-scale projects.
If you are a game developer, you've already dabbled with C++ in some form or the other. Given the popularity of C++ among developers, it goes without saying that if you choose C++, it can definitely kickstart your AI development process to build smarter, more interactive games.

3. Java

Java is a close contender to C++. From machine learning to natural language processing, Java comes with a plethora of libraries for all aspects of Artificial Intelligence development. Java has all the infrastructure that you need to create your next big AI project. Some popular Java libraries and frameworks are Deeplearning4j, Weka and Java-ML.

Pros:
- Java follows the Write Once, Run Anywhere (WORA) principle. It is a time-efficient language, as it can run on any platform without the need for recompilation every time, thanks to Virtual Machine technology.
- Java works well for search algorithms, neural networks, and NLP.
- It is a multi-paradigm language, i.e. it supports object-oriented, procedure-oriented and functional programming styles.
- It is easy to debug.

Cons:
- As mentioned, Java has a complex and verbose code structure, which can be a bit time-consuming as it increases development time.

If you are into development of software, web, mobile or anywhere in between, you've worked with Java at some point, and probably still are. Most commercial apps have Java baked into them. The familiarity and robustness that Java offers is a good reason to pick it when working on AI development. This is especially relevant if you want to enter well-established domains like banking that are historically built on top of Java-based systems.

4. Scala

Just like Java, Scala belongs to the JVM family. Scala is a fairly new language in the AI space, but it has been finding quite a bit of recognition recently in many corporations and startups. It has a lot to offer in terms of convenience, which is why developers enjoy working with it. Also, ScalaNLP, DeepLearning4j and other tools and libraries make the AI development process a bit easier with Scala. Let's have a look at the features that make it a good choice for AI development.

Pros:
- It's good for projects that need scalability.
- It combines the strengths of functional and imperative programming models to act as a powerful tool which helps build highly concurrent applications while reaping the benefits of an OO approach at the same time.
- It provides good concurrency support, which helps with projects involving real-time parallelized analytics.
- Scala has a good open source community when it comes to statistical learning, information theory and Artificial Intelligence in general.

Cons:
- Scala falls short when it comes to machine learning libraries.
- Scala contains concepts such as implicits and type classes. These might not be familiar to programmers coming from the object-oriented world.
- The learning curve in Scala is steep.

Even though Scala lacks machine learning libraries, its scalability and concurrency support make it a good option for AI development. With more companies such as IBM and Lightbend collaborating to use Scala for building more AI applications, it's no secret that Scala's use for AI development is in constant demand in the present, as well as for the future.

5. R

R is a language that has been catching up in the race for AI development recently. Primarily used for academic research, R is written by statisticians, and it provides basic data management that makes tasks really easy.
It's not as pricey as statistical software like Matlab or SAS, which makes it a great substitute for such software and a golden child of data science.

Pros:
- R comes with plenty of packages that help boost its performance. There are packages available for the pre-modeling, modeling and post-modeling stages of data analysis.
- R is very efficient in tasks such as continuous regression, model validation, and data visualization.
- R, being a statistical language, offers very robust statistical model packages for data analysis, such as caret, ggplot, dplyr and lattice, which can help boost the AI development process.
- Major tasks can be done with little code developed in an interactive environment, which makes it easy for developers to try out new ideas and verify them with the varied graphics functions that come with R.

Cons:
- R's major drawback is its inconsistency due to third-party algorithms.
- Development speed is quite slow when it comes to R, as you have to learn new ways of data modeling. You also have to make predictions every time you use a new algorithm.

R is one of those skills that's mainly demanded by recruiters in data science and machine learning. Overall, R is a very clever language. It is freely available and runs on servers as well as common hardware. R can help amp up your AI development process to a great extent.

Other languages worth mentioning

There are three other languages that deserve a mention in this article: Go, Lisp and Prolog. Let's have a look at what makes these a good choice for AI development.

Go

Go has been receiving a lot of attention recently. There might not be as many AI development projects in Go for now, but the language is on a path of continuous growth. (AlphaGo, the first computer program to defeat a human world champion at the game of Go, is often mentioned in this context, although it is named after the board game rather than being written in the Go language.)

Pros:
- You don't have to call out to external libraries; you can make use of Go's existing machine learning libraries.
- It doesn't have classes. It only has packages, which makes the code cleaner and clearer.
- It doesn't support inheritance, which makes code in Go easy to modify.

Cons:
- There aren't many solid libraries for core AI development tasks.

With Go, it is possible to pull off core ML and some reinforcement learning tasks as well, despite the lack of libraries. And given the other versatile features of Go, the future looks bright for this language, with it finding more applications in AI development.

Lisp

Lisp is one of the oldest languages for AI development and as such gets an honorary mention. It is a very popular language in AI academic research and is equally effective in the AI development process as well. However, it is not such a usual choice among developers of recent times. Also, most modern libraries in machine learning, deep learning, and AI are written in popular languages such as C++, Python, etc. But I wouldn't write off Lisp yet. It still has an immense capacity to build some really innovative AI projects, if you take the time to learn it.

Pros:
- Its flexible and extendable nature enables fast prototyping, thereby providing developers with the freedom to quickly test out ideas and theories.
- Since it was custom-built for AI, its symbolic information processing capability is above par.
- It is suitable for machine learning and inductive learning based projects.
- Recompilation of functions alongside the running program is possible, which saves time.
Cons:
- Since it is an old language, not a lot of developers are well-versed with it. Also, new software and hardware have to be configured to accommodate Lisp.

Given the vintage nature of Lisp in the AI world, it is quite interesting to see how things work in Lisp for AI development. The most famous example of a Lisp-based AI project is DART (Dynamic Analysis and Replanning Tool), used by the U.S. military.

Prolog

Finally, we have Prolog, which is another old language primarily associated with AI development and symbolic computation.

Pros:
- It is a declarative language where everything is dictated by rules and facts.
- It supports mechanisms such as tree-based data structuring, automatic backtracking, nondeterminism and pattern matching, which are helpful for AI development. This makes it quite a powerful language for AI work.
- Its varied features are quite helpful in creating AI projects for different fields such as medical systems, voice control and networking.
- It is flexible in nature and is used extensively for theorem proving, natural language processing, non-numerical programming, and AI in general.

Cons:
- There is a high level of difficulty when it comes to learning Prolog as compared to other languages.

Apart from the above-mentioned features, implementations of symbolic computation in other languages can take up to tens of pages of indigestible code, while the same algorithms implemented in Prolog result in a clear and concise program that easily fits on one page.

So those are the top programming languages for Artificial Intelligence development. Choosing the right language eventually depends on the nature of your project. If you want to pick an easy-to-learn language, go for Python, but if you are working on a project where speed and performance are most critical, then pick C++. If you are a creature of habit, Java is a good choice. If you are a thrill-seeker who wants to learn a new and different language, choose Scala, R or Go, and if you are feeling particularly adventurous, explore the quaint old worlds of Lisp or Prolog.

Read more:
Why is Python so good for AI and Machine Learning? 5 Python Experts Explain
Top 6 Java Machine Learning/Deep Learning frameworks you can't miss
15 Useful Python Libraries to make your Data Science tasks Easier

5 ways Machine Learning is transforming digital marketing

Amey Varangaonkar
04 Jun 2018
7 min read
The enterprise interest in Artificial Intelligence is surging. In an era of cut-throat competition where it's either do or die, businesses have realized the transformative value of AI in gaining an upper hand over their rivals. Given its direct contribution to business revenue, it comes as no surprise that marketing has become one of the major application areas of machine learning. Per Capgemini, 84% of marketing organizations are implementing Artificial Intelligence in 2018 in some capacity, and 3 out of 4 organizations implementing AI techniques have managed to increase the sales of their products and services by 10% or more. In this article, we look at 5 innovative ways in which machine learning is being used to enhance digital marketing.

Efficient lead generation and customer acquisition

One of the major keys to driving business revenue is getting more customers on board who will buy your products or services repeatedly. Machine learning comes in handy to identify potential leads and convert those leads into customers. With the help of pattern recognition techniques, it is possible to understand a particular lead's behavioral and purchase trends. Through predictive analytics, it is then possible to predict whether a particular lead will buy the product or not. That lead is then put into the marketing sales funnel for targeted marketing campaigns, which may ultimately result in a purchase.

A cautionary note here: with GDPR (General Data Protection Regulation) in place across the EU (European Union), there are restrictions on the manner in which AI algorithms can be used to make automated decisions based on consumer data. This makes it imperative for businesses to strictly follow the regulation and operate under its purview, or they could face heavy penalties. As long as businesses respect privacy and follow basic human decency, such as asking for permission to use a person's data or informing them about how their data will be used, marketers can reap the benefits of data-driven marketing like never before. It all boils down to applying common sense while handling personal data, as one GDPR expert put it. But we all know how uncommon that sense is!

Customer churn prediction is now possible

'Customer churn rate' is a popular marketing term referring to the number of customers who opt out of a particular service offered by the company over a given time period. The churn time is calculated based on the customer's last interaction with the service or the website. It is crucial to track the churn rate, as it is a clear indicator of the progress, or the lack of it, that a business is making. Predicting the customer churn rate is difficult, especially for e-commerce businesses selling a product, but it is not impossible thanks to machine learning. By understanding historical data and the user's past website usage patterns, these techniques can help a business identify the customers who are most likely to churn soon and when that is expected to happen. Appropriate measures can then be taken to retain such customers, by giving special offers and discounts, timely follow-up emails, and so on, without any human intervention. The American entertainment giant Netflix makes perfect use of churn prediction to keep its churn rate at just 9%, lower than any other subscription streaming service out there today. Not just that, they also manage to market their services to drive more customer subscriptions.
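As a rough sketch of what such a churn model can look like under the hood (the features, data and model choice below are invented purely for illustration; a real pipeline would be trained on historical customer records):

```python
# Sketch of a churn-prediction model; all data and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy feature matrix: [days_since_last_visit, sessions_last_30d, support_tickets]
X = rng.normal(size=(500, 3))
# Toy label: 1 = customer churned, 0 = retained (synthetic rule plus noise)
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# Rank customers by churn risk so retention offers can be targeted at the riskiest first
churn_risk = model.predict_proba(X_test)[:, 1]
print("Top-5 most at-risk rows:", np.argsort(churn_risk)[::-1][:5])
```

The predicted probabilities can then feed the retention measures described above, for example by sending offers or follow-up emails to the highest-risk customers first.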
Dynamic pricing made easy

In today's competitive world, products need to be priced optimally. It has become imperative that companies define competitive and relevant pricing for their products, or else customers might not buy them. On top of this, there are fluctuations in the demand and supply of the product, which can affect the product's pricing strategy. With the use of machine learning algorithms, it is now possible to forecast price elasticity by considering various factors, such as the channel on which the product is sold. Other factors taken into consideration could be the sales period, the product's positioning strategy or customer demand. For example, e-commerce giants Amazon and eBay tweak their product prices on a daily basis. Their pricing algorithms take into account factors such as the product's popularity among customers, the maximum discount that can be offered, and how often the customer has purchased from the website. This strategy of dynamic pricing is now being adopted by almost all the big retail companies, even in their physical stores. There is specialized software available that leverages machine learning techniques to set dynamic prices for products. Competera is one such pricing platform, which transforms retail through ongoing, timely, and error-free pricing for category revenue growth and improvements in customer loyalty tiers. To know more about how dynamic pricing actually works, check out this Competitoor article.

Customer segmentation and radical personalization

Every individual is different and has unique preferences, likes and dislikes. With machine learning, marketers can segment users into different buyer groups based on a variety of factors, such as their product preferences, social media activities, their Google search history and much more. For instance, there are machine learning techniques that can segment users based on who loves to blog about food, who loves to travel, or even which show they are most likely to watch on Netflix! The website can then recommend or market products to these customers accordingly. Affinio is one such platform used for segmenting customers based on their interests.

Content and campaign personalization is another widely recognized use case of machine learning for marketing. Machine learning algorithms are used to build recommendation systems that take into consideration the user's online behavior and website usage to analyse and recommend products that he or she is likely to buy. A prime example of this is Google's remarketing strategy, which tries to reconnect with customers who leave the website without buying anything by showing them relevant ads across different devices. The best part about recommendation systems is that they are able to recommend two completely different products to two customers with different usage patterns. Incorporating them within the website has turned out to be a valuable strategy to increase customer loyalty and overall lifetime value.

Improving customer experience

Gone are the days when a customer who visited a website had to use the 'Contact Me' form in case of any query, and an executive would get back with the answer. These days, chatbots are integrated into almost every e-commerce website to answer ad-hoc customer queries, and even suggest products that fit the customer's criteria.
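Picking up the recommendation systems mentioned above: as a loose illustration of the kind of item-to-item similarity logic such product suggestions can use (the interaction matrix below is entirely made up, and real recommenders are considerably more sophisticated):

```python
# Minimal item-to-item similarity sketch on a made-up user-product matrix.
import numpy as np

# Rows = users, columns = products; 1 means the user bought or viewed the product.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
    [1, 1, 0, 1],
], dtype=float)

# Cosine similarity between product columns
norms = np.linalg.norm(interactions, axis=0)
similarity = (interactions.T @ interactions) / (np.outer(norms, norms) + 1e-9)

# Recommend for user 0: score unseen products by similarity to what they already have
user = interactions[0]
scores = similarity @ user
scores[user > 0] = -np.inf          # don't re-recommend products the user already has
print("Suggested product index for user 0:", int(np.argmax(scores)))
```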
There are live-chat features included in these chatbots as well, which allow customers to interact with the chatbots and understand the product features before they buy any product. For example, IBM Watson has a really cool feature called the Tone Analyzer. It parses the feedback given by the customer and identifies the tone of the feedback, whether it's angry, resentful, disappointed, or happy. It is then possible to take appropriate measures to ensure that a disgruntled customer is satisfied, or to appreciate the customer's positive feedback, whatever the case may be.

Marketing will only get better with machine learning

Highly accurate machine learning algorithms, better processing capabilities and cloud-based solutions are now making it possible for companies to get the most out of AI for their marketing needs. Many companies have already adopted machine learning to boost their marketing strategy, with major players such as Google and Facebook already leading the way. Safe to say many more companies, especially small and medium-sized businesses, are expected to follow suit in the near future.

Read more:
How machine learning as a service is transforming cloud
Microsoft Open Sources ML.NET, a cross-platform machine learning framework
Active Learning: An approach to training machine learning models efficiently

Best practices for deploying self-service BI with Qlik Sense

Amey Varangaonkar
31 May 2018
7 min read
As part of a successful deployment of Qlik Sense, it is important that IT recognizes self-service Business Intelligence as having its own dynamics and adoption rules. The various use cases and subsequent user groups thus need to be assessed and captured. Governance should always be present, but power users should never get the feeling that they are restricted. Once they are won over, the rest of the traction and the adoption of other user types is very easy. In this article, we will look at the most important points to keep in mind while deploying self-service with Qlik Sense.

The following excerpt is taken from the book Mastering Qlik Sense, authored by Martin Mahler and Juan Ignacio Vitantonio. The book demonstrates useful techniques to design highly profitable Business Intelligence solutions using Qlik Sense. Here is the list of points to keep in mind:

Qlik Sense is not QlikView

Not even nearly. The biggest challenge and fallacy is that the organization was sold, by Qlik or someone else, just the next version of the tool. It did not help at all that Qlik itself worked for years on Qlik Sense under the initial product name Qlik.Next. Whatever you are being told, however it is being sold to you, Qlik Sense is at best the cousin of QlikView. Same family, but no blood relation. Thinking otherwise sets the wrong expectation, so the business gives the wrong message to stakeholders and does not raise awareness in IT that self-service BI cannot be deployed in the same fashion as guided analytics (QlikView, in this case). Disappointment is imminent when stakeholders realize Qlik Sense cannot replicate their QlikView dashboards.

Simply installing Qlik Sense does not create a self-service BI environment

Installing Qlik Sense and giving users access to the tool is a start, but there is more to it than simply installing it. The infrastructure requires design and planning, data quality processing, data collection, and determining who intends to use the platform to consume what type of data. If data is not available and accessible to the user, data analytics serve no purpose. Make sure a data warehouse or similar is in place and the business has a use case for self-service data analytics. A good indicator for this is when the business or project works with a lot of data, and there are business users who have lots of Excel spreadsheets lying around analyzing it in different ways. That's your best-case candidate for Qlik Sense.

IT should monitor the Qlik Sense environment rather than control it

IT needs to unlearn in order to learn new things, and the same applies when it comes to deploying self-service. Create a framework with guidelines and principles, and monitor that users are following it, rather than limiting them in their capabilities. This framework needs to have the input of the users as well, and it needs to be elastic. Also, not many IT professionals agree with giving away too much power to the user in the development process, believing this leads to chaos and anarchy. While the risk is there, this fear needs to be overcome. Users love data analytics, and they are keen to get the help of IT to create the most valuable dashboard possible and ensure it will be well received by a wide audience.

Identifying key users and user groups is crucial

For a strong adoption of the tool, IT needs to prepare the environment and identify the key power users in the organization, and win them over to using the technology.
It is important that they are intensively supported, especially in the beginning, and that they are allowed to drive how the technology should be used rather than having principles imposed on them. Governance should always be present, but power users should never get the feeling they are restricted by it. Because once they are won over, the rest of the traction and the adoption of other user types is very easy.

Qlik Sense sells well, so do a lot of demos

Data analytics, compelling visualizations, and the interactivity of Qlik Sense are something almost everyone is interested in. The business wants to see its own data aggregated and distilled in a cool and glossy dashboard. Utilize the momentum and do as many demos as you can to win advocates for the technology and promote a consciousness of becoming a data-driven culture in the organization. Even the simplest Qlik Sense dashboards amaze people and boost their creativity for use cases where data analytics in their area could apply and create value.

Promote collaboration

Sharing is caring. This not only applies to insights, which naturally are shared with the excitement of having found out something new and valuable, but also to how the new insight has been derived. People keep their secrets on approach and methodology to themselves, but this is counterproductive. It is important that applications, visualizations, and dashboards created with Qlik Sense are shared and demonstrated to other Qlik Sense users as frequently as possible. This not only promotes a data-driven culture but also encourages the collaboration of users and teams across various business functions, which would not have happened otherwise. They could either be sharing knowledge, tips, and tricks, or even realizing they look at the same slices of data and could create additional value by connecting them together.

Market the success of Qlik Sense within the organization

If Qlik Sense has had a successful achievement in a project, tell others about it. Create a success story and propose doing demos of the dashboard and its analytics. IT has historically been very bad at promoting its work, which is counterproductive. Data analytics creates value and there is nothing embarrassing about boasting about its success; as Muhammad Ali suggested, it's not bragging if it's true.

Introduce guidelines on design and terminology

Avoid the pitfalls of having multiple different-looking dashboards by promoting a consistent branding look across all Qlik Sense dashboards and applications, including terminology and best practices. Ensure the guidelines document is easily accessible to all users. Also, create predesigned templates with some sample sheets so that users can duplicate them, modify them to their liking and extend them, applying the same design.

Protect less experienced users from complexities

Don't overwhelm users if they have never developed in their life. Approach less technically savvy users in a different way by providing them with sample data and sample templates, including a library of predefined visualizations, dimensions, or measures (so-called Master Key Items). Be aware that what is intuitive to Qlik professionals or power users is not necessarily intuitive to other users: be patient and appreciative of their feedback, and try to understand how a typical business user might think.
If you found the excerpt useful, make sure you check out the book Mastering Qlik Sense to learn more of these techniques for efficient Business Intelligence using Qlik Sense.

Read more:
How Qlik Sense is driving self-service Business Intelligence
Overview of a Qlik Sense® Application's Life Cycle
What we learned from Qlik Qonnections 2018

Why Enterprises love the Elastic Stack

Pravin Dhandre
31 May 2018
2 min read
Business insight has always been a hotspot for companies, and with data that keeps flowing, growing and getting fatter by the day, analytics needs to be quicker, real-time and reliable. Analytics that can't keep up with today's data provides insights that are almost lifeless against market dynamics. The question, then, is: is there an analytics solution that can tackle this data hydra? The Elastic Stack is your answer. It is power-packed with tools like Elasticsearch, Kibana, Logstash, X-Pack and Beats that take data from any source, in any format, and provide instant search, analysis, and visualization in real time. With over 225 million downloads, it is a clear crowd favorite. Enterprises get the added benefit of using it as a single analytical suite or integrating it with other products, delivering real-time actionable insights and decisions every time.

Why do enterprises love the Elastic Stack?

One of the common things that enterprises love about the Elastic Stack is that it is an open source platform. The next thing that IT companies enjoy is its super-fast distributed search mechanism, which makes queries run faster and more efficiently. Apart from this, its bundling with Kibana and Logstash makes it awesome for IT infrastructure and DevOps teams, who can aggregate and analyze billions of logs with ease. Its simple and robust analysis platform provides a distinct advantage over Splunk, Solr, Sphinx, Ambar and many other alternative product suites. Also, its SaaS option allows customers to perform log analytics, full-text search and application monitoring over the cloud with utmost ease and reasonable pricing.

Companies like Amazon, Bloomberg, eBay, SAP, Citibank, Sony, Mozilla, WordPress and Salesforce are already using the Elastic Stack, powering their search and analytics to combat their daily business challenges. Whether it is an educational institution, a travel agency, an e-commerce business, or a financial institution, the Elastic Stack is empowering millions of companies with real-time metrics, strong analytics, better search experiences and high customer satisfaction.

Read more:
How to install Elasticsearch in Ubuntu and Windows
How to perform Numeric Metric Aggregations with Elasticsearch
CRUD (Create Read, Update and Delete) Operations with Elasticsearch
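To give a flavour of how little code basic indexing and full-text search take, here is a small sketch using the official Elasticsearch Python client. The index name, document fields and local cluster address are assumptions made for illustration, and the exact client parameters vary between client versions:

```python
# Sketch: index a document and run a full-text query with the Elasticsearch Python client.
# Assumes a cluster reachable at localhost:9200; the index and fields are illustrative only.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index a sample log document
es.index(index="app-logs", id=1, document={
    "service": "checkout",
    "level": "error",
    "message": "payment gateway timeout while processing order",
})
es.indices.refresh(index="app-logs")

# Full-text search across the message field
result = es.search(index="app-logs", query={"match": {"message": "timeout"}})
for hit in result["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["message"])
```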

Four self-service business intelligence user types in Qlik Sense

Amey Varangaonkar
29 May 2018
7 min read
With the introduction of self-service to BI, there is segmentation at various levels and breadths in how self-service is conducted and to what extent. There are, quite frankly, different user types that differ from each other in level of interest, technical expertise, and the way in which they consume data. While each user will be almost unique in the way they use self-service, the user base can be divided into four different groups. In this article, we take a look at the four types of users in the self-service business intelligence model.

The following excerpt is taken from the book Mastering Qlik Sense, authored by Martin Mahler and Juan Ignacio Vitantonio. The book presents expert techniques to design and deploy enterprise-grade Business Intelligence solutions for your business by leveraging the power of Qlik Sense.

Power Users or Data Champions

Power users are the most tech-savvy business users, who show a great interest in self-service BI. They produce and build dashboards themselves and know how to load data and process it to create a logical data model. They tend to be self-learning and carry a hybrid set of skills, usually a mixture of business knowledge and some advanced technical skills. This user group is often frustrated with existing reporting or BI solutions and finds IT inadequate in delivering the same. As a result, especially in the past, they would take data dumps from IT solutions and create their own dashboards in Excel, using advanced skills such as VBA (Visual Basic for Applications). They generally like to participate in the development process but have been unable to do so due to governance rules and a strict old-school separation of IT from the business. Self-service BI addresses this group in particular, and identifying those users is key to reaching adoption within an organization.

Within an established self-service environment, power users generally participate in committees revolving around the technical environments and represent the business interest. They also develop the bulk of the first versions of the apps, which, as part of a naturally evolving process, are then handed over to more experienced IT for them to be polished and optimized. Power users advocate the self-service BI technology and often demo not only the insights and information they have managed to extract from their data, but also the efficiency and timeliness of doing so. At the same time, they also serve as the first point of contact for other users and consumers when it comes to questions about their apps and dashboards. Sometimes they also participate in a technical advisory capacity on whether other projects are feasible to implement using the same technology. Within a self-service BI environment, it is safe to say that those power users are the pillars of a successful adoption.

Business Users or Data Visualizers

Business users are frequent users of data analytics, with the main goal of extracting value from the data they are presented with. They represent the group of the user base which is interested in conducting data analysis and data discovery to better understand their business in order to make better-informed decisions. Presentation and ease of use of the application are key to this type of user group, and they are less interested in building new analytics themselves. That being said, some form of creating new charts and loading data is sometimes still of interest to them, albeit at a very basic level. Timeliness, the relevance of data, and the user experience are most relevant to them.
They are the ones who are slicing and dicing the data and drilling down into dimensions, and who are keen to click around in the app to obtain valuable information. Usually, a group of users belongs to the same department and has a power user overseeing them, handling their questions but also receiving feedback on how the dashboard can be improved even more. Their interaction with IT is mostly limited to requesting access and resolving unexpected technical errors.

Consumers or Data Readers

Consumers usually form the largest user group of a self-service BI analytics solution. They are the end recipients of the insights and data analytics that have been produced and, normally, are only interested in distilled information which is presented to them in a digested form. They are usually the kind of users who are happy with a report, either digital or in printed form, which summarizes highlights and lowlights in a few pages, requiring no interaction at all. Also, they are most sensitive to the timeliness and availability of their reports. While usually the largest audience, this user group at the same time leverages the self-service capabilities of a BI tool the least. This poses a licensing challenge, as those users don't take full advantage of the functionality on offer but cost the full amount in order to access the reports. It is therefore not uncommon to assign this type of user group a bucket of login access passes, or not give them access to the self-service BI platform at all and give them the information they need in (digitally) printed format or within presentations prepared by users.

IT or Data Overseers

IT represents the technical user group within this context, who sit in the background and develop and manage the framework within which the self-service BI solution operates. They are the backbone of the deployment and ensure the environment is set up correctly to cater for the various use cases required by the above-described user groups. At the same time, they ensure a security policy is in place and maintained, and they introduce a governance framework for deployment, data quality, and best practices. They are in effect responsible for overseeing the power users and helping them with technical questions, while at the same time ensuring terms and definitions, as well as the look and feel, are consistent and maintained across all apps. With self-service BI, IT plays a lesser role in actually developing the dashboards and assumes a more mentoring position, where training, consultation, and advisory on best practices are conducted. While working closely with power users, IT also provides technical support to users and liaises with the IT infrastructure team to ensure the server infrastructure is fit for purpose and up and running to serve the users. This also includes upgrading the platform where required and enriching it with additional functionality if and when available.
Power users then follow up; generally, they are the business sponsors themselves who are often big fans of data analytics, modifying the app to their liking and promoting it to their users. The user base emerges with the success of the solution, where analytics are integrated into their business as the usual process. The last group, the consumers, is mostly the last type of user group that is established, which more often than not doesn’t have actual access to the platform itself, but rather receives printouts, email summaries with screenshots, or PowerPoint presentations. Due to licensing cost and the size of the consumer audience, it is not always easy to give them access to the self-service platform; hence, most of the time, an automated and streamlined PDF printing process is the most elegant solution to cater to this type of user group. At the same time, the size of the deployment also determines the number of various user groups. In small enterprise environments, it will be mostly power users and IT who will be using self-service. This greatly simplifies the approach as well as the setup considerations. If you found the above excerpt useful, make sure you check out the book Mastering Qlik Sense to learn helpful tips and tricks to perform effective Business Intelligence using Qlik Sense. Read more: How Qlik Sense is driving self-service Business Intelligence What we learned from Qlik Qonnections 2018 How self-service analytics is changing modern-day businesses