Tech Guides

Why is Hadoop dying?

Aaron Lazar
23 Apr 2018
5 min read
Hadoop has been the definitive big data platform for some time; the name has practically been synonymous with the field. But while its ascent followed the trajectory of the 'big data revolution', Hadoop now seems to be in danger. The question is everywhere: is Hadoop dying out? And if it is, why? Is it because big data is no longer the buzzword it once was, or are there simply other ways of working with big data that have become more useful?

Hadoop was essential to the growth of big data

When Hadoop was open sourced in 2007, it opened the door to big data. It brought compute to data, as opposed to bringing data to compute. Organisations had the opportunity to scale their data without having to worry too much about the cost. Hadoop had initial hiccups with security, the complexity of querying, and querying speeds, but most of those were taken care of in the long run. Querying speeds remained quite a pain, but that wasn't the real reason behind Hadoop (slowly) dying.

As cloud grew, Hadoop started falling

One of the main reasons behind Hadoop's decline in popularity was the growth of cloud. The cloud vendor market was pretty crowded, and each vendor provided its own big data processing services. These services did essentially what Hadoop was doing, but in a more efficient and hassle-free way: customers didn't have to think about administration, security or maintenance the way they had to with Hadoop.

One person's big data is another person's small data

Several organisations adopted big data technologies without really gauging the amount of data they actually needed to process, and suffered for it. Imagine sitting on 10TB Hadoop clusters when you don't have anywhere near that much data. The two biggest organisations that built products on Hadoop, Hortonworks and Cloudera, saw a decline in revenue in 2015, owing to their heavy dependence on Hadoop. Customers weren't pleased with the nature of Hadoop's limitations.

Apache Hadoop v Apache Spark

Hadoop is way behind in terms of processing speed. In 2014, Spark took the world by storm. Spark was a general purpose, easy to use platform built after studying the pitfalls of Hadoop. It was not bound to HDFS (the Hadoop Distributed File System), which meant it could leverage storage systems like Cassandra and MongoDB as well. Spark 2.3 was also able to run on Kubernetes, a big leap for containerized big data processing in the cloud. Spark also brings along GraphX, which allows developers to view data in the form of graphs. Some of the major areas where Spark wins are iterative algorithms in machine learning, interactive data mining and data processing, stream processing, and sensor data processing.
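To make that contrast concrete, here is a minimal PySpark sketch (the file path and the per-iteration computation are placeholders) of the in-memory caching that gives Spark its edge on iterative workloads; a chain of MapReduce jobs would re-read the data from disk on every pass:

```python
# A minimal PySpark sketch: load the working set once, cache it in cluster
# memory, then rescan it cheaply inside an iterative loop.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iterative-demo").getOrCreate()

# Load once, cache in memory.
points = spark.read.csv("hdfs:///data/points.csv", inferSchema=True).cache()

# An iterative algorithm (e.g. gradient descent) can now rescan the data cheaply.
for i in range(10):
    stats = points.groupBy().avg().collect()  # stand-in for one optimisation step
    print(f"iteration {i}: {stats}")

spark.stop()
```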
Machine Learning in Hadoop is not straightforward

Unlike Spark with MLlib, machine learning is not possible in Hadoop unless tied to a third-party library. Mahout used to be quite popular for doing ML on Hadoop, but its adoption has gone down in the past few years. Tools like RHadoop, a collection of three R packages, have grown for ML, but they are still nowhere comparable to the power of the modern MLaaS offerings from cloud providers. All the more reason to move away from Hadoop, right? Maybe.

Hadoop is not only Hadoop

The general misconception is that Hadoop is quickly going to be extinct. On the contrary, the Hadoop family consists of YARN, HDFS, MapReduce, Hive, HBase, Spark, Kudu, Impala, and around 20 other products. While folks may be moving away from Hadoop as their choice for big data processing, they will still be using Hadoop in some form or the other. As for Cloudera and Hortonworks, though the market has seen a downward trend, they're in no way letting go of Hadoop anytime soon, although they have shifted part of their processing operations to Spark.

Is Hadoop dying? Perhaps not...

In the long run, it's not completely accurate to say that Hadoop is dying. December last year brought with it Hadoop 3.0, which is supposed to be a much improved version of the framework. Some of its most noteworthy features are an improved shell script, a more powerful YARN, and improved fault tolerance with erasure coding. Although that hasn't caused any major spike in adoption, there are still users who will adopt Hadoop based on their use case, or simply use an alternative like Spark alongside another framework from the Hadoop family. So, Hadoop's not going away anytime soon.

Read more: Pandas is an effective tool to explore and analyze data - Interview insights

Why TensorFlow always tops machine learning and artificial intelligence tool surveys

Sunith Shetty
23 Aug 2018
9 min read
TensorFlow is an open source machine learning framework for carrying out high-performance numerical computations. It provides excellent architecture support, which allows easy deployment of computations across a variety of platforms ranging from desktops to clusters of servers, mobiles, and edge devices.

Have you ever wondered why TensorFlow has become so popular in such a short span of time? What made TensorFlow so special that we are seeing a huge surge of developers and researchers opting for it? Interestingly, when it comes to artificial intelligence framework showdowns, you will find TensorFlow emerging as a clear winner most of the time. Much of the credit goes to its soaring popularity and contributions across forums such as GitHub, Stack Overflow, and Quora. TensorFlow is being used in over 6,000 open source repositories, showing its roots in many real-world research projects and applications.

How TensorFlow came to be

The library was developed by a group of researchers and engineers from the Google Brain team within Google's AI organization. They wanted a library that provides strong support for machine learning, deep learning, and advanced numerical computations across different scientific domains. Since Google open sourced the framework in 2015, TensorFlow has grown in popularity, with more than 1,500 project mentions on GitHub.

The constant updates made to the TensorFlow ecosystem are the real cherry on the cake. They have ensured that the new challenges developers and researchers face are addressed, easing complex computations and providing newer features, promises, and performance improvements with the support of high-level APIs. By open sourcing the library, the Google research team has received the benefit of a huge set of contributors outside its existing core team. The idea was to make TensorFlow popular by open sourcing it, making sure all new research ideas are implemented in TensorFlow first and allowing Google to productize those ideas.

Read also: 6 reasons why Google open sourced TensorFlow

What makes TensorFlow different from the rest?

With more and more research and real-life use cases going mainstream, we can see a big trend of programmers and developers flocking towards TensorFlow. Its popularity is quite evident, with big names adopting it for carrying out artificial intelligence tasks: popular companies such as NVIDIA, Twitter, Snapchat, Uber and more are using TensorFlow across major operations and research areas.

On one hand, one can make a case that TensorFlow's popularity is based on its origins and legacy. Developed in the house of Google, TensorFlow enjoys the reputation of a household name, and there's no doubt it has been better marketed than some of its competitors.

[Chart comparing deep learning framework popularity. Source: The Data Incubator]

However, that's not the full story. There are many other compelling reasons why companies, from small scale to large scale, prefer TensorFlow over other machine learning tools.

TensorFlow key functionalities

TensorFlow provides an accessible and readable syntax, which is essential for making these programming resources easier to use. Complex syntax is the last thing developers need, given machine learning's advanced nature. TensorFlow provides excellent functionalities and services when compared to other popular deep learning frameworks.
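As a minimal illustration of that readable, low-level style, here is a sketch written against the TensorFlow 1.x graph-and-session API that was current when this piece was published (the values are arbitrary):

```python
# A minimal TensorFlow 1.x sketch: build a small computation graph, then run it
# in a session with concrete input values.
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None], name="x")
w = tf.Variable(2.0, name="w")
b = tf.Variable(0.5, name="b")
y = w * x + b  # the graph: an elementwise affine transform

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))  # [2.5 4.5 6.5]
```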
These functionalities are essential for carrying out complex parallel computations and for building advanced neural network models. At the same time, TensorFlow is a low-level library which provides flexibility: you can define your own functionalities or services for your models. This is a very important property for researchers, because it allows them to change a model as user requirements change. TensorFlow also provides more network control, allowing developers and researchers to understand how operations are implemented across the network and to keep track of changes made over time.

Distributed training

The trend of distributed deep learning began in 2017, when Facebook released a paper describing a set of methods to reduce the training time of a convolutional neural network model. The test trained a ResNet-50 model on the ImageNet dataset in one hour instead of two weeks, using 256 GPUs spread over 32 servers. This revolutionary result opened the gates for a wave of research that has massively reduced experimentation time by running many tasks in parallel on multiple GPUs.

Google's distributed TensorFlow has allowed researchers and developers to scale out complex distributed training using built-in methods and operations that optimize distributed deep learning among servers. The distributed TensorFlow engine, which is part of the regular TensorFlow repo, works exceptionally well with TensorFlow's existing operations and functionalities, and it enables two of the most important distributed methods:

- Distributing the training of a neural network model over many servers to reduce training time.
- Searching for good hyperparameters by running parallel experiments over multiple servers.

Distributed TensorFlow has given Google the power to take market share from other distributed projects such as Microsoft's CNTK, AMPLab's SparkNet, and CaffeOnSpark. Even though the competition is tough, Google has still managed to become more popular than the other alternatives in the market.

From research to production

Google has, in some ways, democratized deep learning. The key reason is TensorFlow's high-level APIs, which make deep learning accessible to everyone. TensorFlow provides pre-built functions and advanced operations that ease the task of building different neural network models, along with the infrastructure and hardware support that make it one of the leading libraries used extensively by researchers and students in the deep learning domain.

In addition to research tools, TensorFlow extends its services by bringing models into production with TensorFlow Serving. Designed specifically for production environments, it provides a flexible, high-performance serving system for machine learning models. It provides the functionalities and operations that make it easy to deploy new algorithms and experiments as requirements and preferences change, and it offers out-of-the-box integration with TensorFlow models that can be extended to serve other types of models and data. TensorFlow's API is a complete package which is easy to use and read, and it provides helpful operators, debugging and monitoring tools, and deployment features.
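As an illustration, here is a hedged sketch of how a client might query a model deployed with TensorFlow Serving over its REST API (the host, port, and model name are placeholders; the :predict endpoint shape follows the TensorFlow Serving documentation of this era):

```python
# Query a model served by TensorFlow Serving via its REST API.
# Assumes a model named "my_model" is already being served on localhost:8501.
import json
import requests

payload = {"instances": [[1.0, 2.0, 5.0]]}  # one input row; shape must match the model
resp = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",
    data=json.dumps(payload),
)
resp.raise_for_status()
print(resp.json()["predictions"])
```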
All of this has led to growing use of TensorFlow as a complete package within the ecosystem, by an emerging body of students, researchers, developers, and production engineers from various fields who are gravitating towards artificial intelligence.

There is a TensorFlow for web, mobile, edge, embedded and more

TensorFlow provides a range of services and modules within its ecosystem, making it one of the groundbreaking end-to-end tools for state-of-the-art deep learning.

TensorFlow.js for machine learning on the web

A JavaScript library for training and deploying machine learning models in the browser. It provides flexible and intuitive APIs to build and train new and pre-existing models from scratch, right in the browser or under Node.js.

TensorFlow Lite for mobile and embedded ML

A lightweight TensorFlow solution for mobile and embedded devices. It is fast, since it enables on-device machine learning inference with low latency, and it supports hardware acceleration with the Android Neural Networks API. Future releases of TensorFlow Lite will bring more built-in operators, performance improvements, and support for more models, simplifying the developer's experience of bringing machine learning to mobile devices.

TensorFlow Hub for reusable machine learning

A library used extensively to reuse machine learning models, so you can apply transfer learning by reusing parts of existing models.

TensorBoard for visual debugging

While training a complex neural network model, the computations you use in TensorFlow can be very confusing. TensorBoard makes it easy to understand and debug your TensorFlow programs through visualizations, letting you inspect and understand your TensorFlow runs and graphs.

Sonnet

Sonnet is a DeepMind library, built on top of TensorFlow, that is used extensively to build complex neural network models.

All of these factors have made the TensorFlow library immensely appealing for building a wide spectrum of machine learning and deep learning projects. The tool has become a preferred choice for everyone from the space research giant NASA and other government agencies, to an impressive roster of private sector giants.

Road Ahead for TensorFlow

TensorFlow is, no doubt, better marketed than the other deep learning frameworks, and the community appears to be moving very fast. In any given hour, approximately ten people around the world are contributing to or improving the TensorFlow project on GitHub, and TensorFlow dominates the field with the largest active community. It will be interesting to see what new advances TensorFlow and other utilities make possible for the future of our digital world.

Continuing the recent trend of rapid updates, the TensorFlow team is making sure they address the current, active challenges faced by contributors and developers building machine learning and deep learning models. TensorFlow 2.0 will be a major update; we can expect the release candidate by early March next year, with a preview version of this major milestone expected later this year. The major focus will be on ease of use and additional support for more platforms and languages, and eager execution will be the central feature of TensorFlow 2.0. This version will add more functionalities and operations to handle current research areas such as reinforcement learning and GANs, and to build advanced neural network models more efficiently.
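Eager execution can already be previewed in the 1.x line; a minimal sketch of the difference it makes (operations run immediately and return concrete values, with no session required):

```python
# Eager execution (opt-in in TensorFlow 1.x, default in 2.0): ops run
# immediately, with no graph/session boilerplate.
import tensorflow as tf

tf.enable_eager_execution()  # must be called once, at program start

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.matmul(a, a)  # evaluated right away
print(b.numpy())     # [[ 7. 10.] [15. 22.]]
```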
Google will continue to invest in and upgrade the existing TensorFlow ecosystem. In the words of Google's CEO Sundar Pichai, artificial intelligence is "more important than electricity or fire." TensorFlow is the solution Google has come up with to bring artificial intelligence into reality and provide a stepping stone to revolutionize humankind.

Read more:
The 5 biggest announcements from TensorFlow Developer Summit 2018
The Deep Learning Framework Showdown: TensorFlow vs CNTK
Tensor Processing Unit (TPU) 3.0: Google's answer to cloud-ready Artificial Intelligence

What makes functional programming a viable choice for artificial intelligence projects?

Prasad Ramesh
14 Sep 2018
7 min read
The most common programming languages currently used for AI and machine learning development are Python, R, Scala, and Go, among others, with the latest addition being Julia. Functional languages as old as Lisp and Haskell were used to implement machine learning algorithms decades ago, when AI was an obscure research area; there simply weren't enough hardware and software advancements back then for implementations to take off. A commonality across all of the above language options is that they are simple to understand and promote clarity: they use fewer lines of code and lend themselves well to the functional programming paradigm.

What is functional programming?

Functional programming is a programming approach that uses logical functions or procedures within its programming structure, meaning the programming is done with expressions instead of statements. In a functional programming (FP) approach, computations are treated as evaluations of mathematical functions, and it mostly deals with immutable data. In other words, the state of the program does not change, the functions or procedures are fixed, and the output value of a function depends solely on the arguments passed to it. Let's look at the characteristics of a functional programming approach before we see why they are well suited to developing artificial intelligence programs.

Functional programming features

- Immutability: If a variable x is declared and used in the program, its value is never changed later anywhere in the program. Each time the variable x is used, it returns the same value originally assigned. This is pretty straightforward and eliminates the need to reason about state changes throughout the program.
- Referential transparency: An expression or computation always results in the same value in any part or context of the program. Programs in a referentially transparent language can be manipulated like algebraic equations.
- Lazy evaluation: Being referentially transparent, computations yield the same result irrespective of when they are performed, which makes it possible to postpone computing values until they are required. Lazy evaluation helps avoid unnecessary computation and saves memory.
- Parallel programming: Since there is no state change due to immutable variables, the functions in a functional program can run in parallel. Parallel loops can be easily expressed, with good reusability.
- Higher-order functions: A higher-order function can take one or more functions as arguments, and may also return a function as its result. Higher-order functions are useful for refactoring code and reducing repetition. The map function found in many programming languages is an example of a higher-order function.
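A small Python sketch of these features in action (pure functions, immutable data, higher-order functions, and lazy evaluation via a generator; the values are arbitrary):

```python
from functools import partial

# Pure function: output depends only on its arguments; no state is modified.
def scale(factor, x):
    return factor * x

# Immutable data: a tuple cannot be changed after creation.
readings = (1.2, 3.4, 5.6)

# Higher-order functions: map takes a function as an argument, and
# functools.partial returns a new function, another FP staple.
doubled = tuple(map(partial(scale, 2), readings))  # (2.4, 6.8, 11.2)

# Lazy evaluation: the generator computes nothing until a value is requested.
lazy_squares = (r * r for r in readings)
print(doubled, next(lazy_squares))  # first square is approximately 1.44
```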
What kind of programming is good for AI development?

Machine learning is a sub-domain of artificial intelligence which deals with making predictions from data, taking actions without being explicitly programmed, recommendation systems, and so on. Any programming approach that focuses on logic and mathematical functions is good for artificial intelligence (AI). Once the data is collected and prepared, it is time to build your machine learning model. This typically entails choosing a model, training and testing the model with the data, and, once the desired accuracy is achieved, deploying the model.

Training on the data requires the data to be consistent, and the code needs to communicate directly with the data without much abstraction to minimize unexpected errors. For AI programs to work well, the language needs a low-level implementation for faster communication with the processor; this is why many machine learning libraries are written in C++ to achieve fast performance. OOP, with its mutable objects and object creation, is better suited to high-level production software development than to AI programs, which work with algorithms and data.

As AI is heavily based on math, languages like Python and R are the most widely used in AI currently. R leans more towards statistical data analysis, but does support machine learning and neural network packages. Python, being faster for mathematical computations and having strong support for numerical packages, is used more commonly in machine learning and artificial intelligence.

Why is functional programming good for artificial intelligence?

There are several benefits of functional programming that make it suitable for AI. It is closely aligned with mathematical thinking, and its expressions are in a format close to mathematical definitions. There are few or no side effects of using a functional approach to coding; one function does not influence another unless explicitly passed. This proves to be great for concurrency, parallelization, and even debugging.

Less code and more consistency

The functional approach uses fewer lines of code without sacrificing clarity. More time is spent thinking out the functions than actually writing the lines of code, but the end result is more productivity from the created functions and easier maintenance, since there are fewer lines of code. AI programs consist of lots of matrix multiplications, and functional programming is good at these, just like GPUs. In AI you work with datasets, applying algorithms to the data to get modified data: a function applied to a value to get a new value is exactly what functional programming does. It is also important for the variables and data to remain the same when working through a program; different algorithms may need to be run on the same data, and the values need to stay the same. Immutability is well suited to that kind of job.

Simple approach, fast computations

These characteristics of functional programming make it a good approach for artificial intelligence applications. AI can do without the objects and classes of an object-oriented programming (OOP) approach; it needs fast computations and expects the variables to be the same after computations, so that the operations made on the data set are consistent. Some of the popular functional programming languages are R, Lisp, and Haskell; the latter two are pretty old languages and are not used very commonly. Python can be used as both a functional and an object-oriented language. Currently, Python is the language most commonly used for AI and machine learning because of its simplicity and available libraries; the scikit-learn library, especially, provides support for a lot of AI-related projects.
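A minimal sketch of this style applied to data preparation: the same immutable dataset flows through small pure functions, so two different "algorithms" can safely read identical values (the feature values here are made up):

```python
# Two pure transformations over the same immutable dataset: because nothing
# mutates `samples`, both pipelines see identical input values.
from functools import reduce

samples = ((1.0, 2.0), (3.0, 4.0), (5.0, 6.0))  # immutable rows

def normalize(row):
    total = sum(row)
    return tuple(v / total for v in row)

def magnitude(row):
    return sum(v * v for v in row) ** 0.5

normalized = tuple(map(normalize, samples))
total_magnitude = reduce(lambda acc, r: acc + magnitude(r), samples, 0.0)

print(normalized)
print(round(total_magnitude, 3))  # about 2.236 + 5.0 + 7.810 = 15.046
```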
FP is fault tolerant and important for AI

Functional programming features make programs fault tolerant and fast for critical computations and rapid decision making. There may not be many such applications as of now, but think of the future: systems for self-driving cars, security, and defense. Any fault in such systems would have serious effects. Immutability makes a system more reliable, lazy evaluation helps conserve memory, and parallel programming makes the system faster. The ability to pass a function as an argument saves a lot of time and enables more functionality. These features of functional programming make it a fitting choice for artificial intelligence.

To further understand why you might use functional programming for machine learning, read the case made for the functional programming language Haskell for AI on the Haskell Blog.

Read more:
Why functional programming in Python matters: Interview with best selling author, Steven Lott
Grain: A new functional programming language that compiles to WebAssembly
8 ways Artificial Intelligence can improve DevOps

Top 14 Cryptocurrency Trading Bots - and one to forget

Guest Contributor
21 Jun 2018
9 min read
Men in rags became millionaires and rich people bit the dust within minutes, thanks to cryptocurrencies. According to research, over 1,500 cryptocurrencies are being traded globally with over 6 million wallets, proving that digital currency is here not just to stay but to rule. The rise and fall of the crypto market isn't hidden from anyone, but the catch is that cryptocurrency still sells like hot cakes. According to Bill Gates, "The future of money is digital currency."

With thousands of digital currencies circulating globally, crypto traders are immensely occupied, and this is where cryptocurrency trading bots come into play. They ease the currency trading and research process, which means spending less effort and earning more money, not to mention the hours saved. According to Eric Schmidt, ex-CEO of Google, "Bitcoin is a remarkable cryptographic achievement and the ability to create something that is not duplicable in the digital world has enormous value."

The crucial question is whether a crypto trading bot is dependable and efficient enough to deliver optimum results within crunch time. To make sure you don't miss an opportunity to chip cash into your digital wallet, here are the top cryptocurrency trading bots, ranked according to performance.

1- Gunbot

Gunbot is a crypto trading bot that boasts detailed settings and is fit for beginners as well as professionals. Along with supporting custom strategies, it comes with a "Reversal Trading" feature. It enables continuous trading and works with almost all the major exchanges (Binance, Bittrex, GDAX, Poloniex, etc.). Gunbot is backed by thousands of users, who have created an engaging and helpful community. Gunbot offers different packages with price tags of 0.02 to 0.15 BTC, and you can always upgrade them. The bot comes with a lifetime license and is constantly upgraded.

2- Haasbot

Hassonline created this cryptocurrency trading bot in January 2014. Its algorithm is very popular among cryptocurrency geeks. It can trade over 500 altcoins and bitcoins on famous exchanges such as BTCC, Kraken, Bitfinex, Huobi, and Poloniex. You need to put in a little input of currency, and the bot does all the trading work for you. Haasbot is customizable and has various technical indicator tools; it also recognizes candlestick patterns. This immensely popular trading bot is priced between 0.12 BTC and 0.32 BTC for three months.

3- Gekko

Gekko is a cryptocurrency trading bot that supports over 18 Bitcoin exchanges, including Bitstamp, Poloniex, and Bitfinex. This bot is a backtesting platform and is free to use: a fully fledged open source bot available on GitHub. Using this bot is easy, as it comes with basic trading strategies. The web interface of Gekko was written from scratch, and it can run backtests and visualize the test results while you monitor your local data with it. Gekko keeps you updated on the go using plugins for Telegram, IRC, email, and several other platforms. The trading bot works on all major operating systems, such as Windows, Linux, and macOS, and you can even run it on a Raspberry Pi or on cloud platforms.

4- CryptoTrader

CryptoTrader is a cloud-based platform which allows users to create automated algorithmic trading programs in minutes. It is one of the most attractive crypto trading bots, and you won't need to install any unknown software to use it. A highly appreciated feature of CryptoTrader is its Strategy Marketplace, where users can trade strategies.
CryptoTrader supports major currency exchanges such as Coinbase, Bitstamp, and BTC-e, and it supports both live trading and backtesting. The company claims its cloud-based trading bots are unique compared with the bots currently available in the market.

5- BTC Robot

One of the earliest automated crypto trading bots, BTC Robot offers multiple packages for different memberships and software. It provides users with a downloadable version for Windows. The minimum robot plan is $149. BTC Robot sets up quite easily, but it is noted that its algorithms aren't great at predicting the markets. User mileage with BTC Robot varies heavily, leaving many with mediocre profits. With the trading bot's fluctuating evaluation, the profits may go up or down drastically depending on the accuracy of the algorithm. On the bright side, the bot comes with a sixty-day refund policy, which makes it a safe buy.

6- Zenbot

Another open source trading bot for bitcoin trading, Zenbot can be downloaded and its code can be modified too. This trading bot hasn't had an update in the past months, but it is still among the few bots that can perform high-frequency trading while backing multiple assets at a time. Zenbot is a lightweight, artificially intelligent crypto trading bot that supports popular exchanges such as Kraken, GDAX, Poloniex, Gemini, Bittrex, and Quadriga. Surprisingly, according to its GitHub page, Zenbot version 3.5.15 bagged an ROI of 195% in a mere period of three months.

7- 3Commas

3Commas is a famous cryptocurrency trading bot that works well with various exchanges, including Bitfinex, Binance, KuCoin, Bittrex, Bitstamp, GDAX, Huobi, Poloniex, and YObit. As it is a web-based service, you can always monitor your trading dashboard on desktop, mobile, and laptop computers. The bot works 24/7, and it allows you to set take-profit targets and stop-losses, along with a social trading aspect that enables you to copy the strategies used by successful traders. An ETF-like feature allows users to analyze, create, and back-test a crypto portfolio and pick from the top performing portfolios created by other people.

8- Tradewave

Tradewave is a platform that enables users to develop their own cryptocurrency trading bots, along with automated trading on crypto exchanges. The bot trades in the cloud and uses Python to write the code directly in the browser. With Tradewave, you don't have to worry about downtime: the bot doesn't force you to keep your computer on 24x7, nor does it glitch when not connected to the internet. Trading strategies are often shared by community members for others to use. However, it currently supports very few cryptocurrency exchanges, such as Bitstamp and BTC-E, though more exchanges are due to be added in coming months.

9- Leonardo

Leonardo is a cryptocurrency trading bot that supports a number of exchanges such as Bittrex, Bitfinex, Poloniex, Bitstamp, OKCoin, and Huobi. The team behind Leonardo is extremely active, and new upgrades, including plugins, are in the funnel. Previously it cost 0.5 BTC, but currently it is available for $89 with a single-exchange license. Leonardo boasts two trading strategy bots: a Ping Pong Strategy and a Margin Maker Strategy. The first enables users to set the buy and sell price, leaving all of the other decisions to the bot, while the Margin Maker strategy can buy and sell at prices adjusted according to the direction of the market. This trading bot stands out in terms of GUI.
10- USI Tech

USI Tech is a trading bot that is majorly used for forex trading, but it also offers BTC packages. While the majority of trading bots require an initial setup and installation, USI uses a different approach: it isn't controlled by the users. Users buy in through its expert mining and bitcoin trade connections, and then the USI Tech bot guarantees a daily profit from the transactions and trades. To earn one percent of the capital daily, customers are advised to choose the feature-rich plans.

11- Cryptohopper

Cryptohopper is a 24/7 cloud-based trading bot, which means it doesn't matter whether you are at the computer or not. Its system enables users to trade on technical indicators, with a subscription to a signaler who sends buy signals. According to Cryptohopper's website, it is the first crypto trading bot integrated with professional external signals. The bot helps in leveraging bull markets and has a new dashboard area where users can monitor and configure everything. The dashboard also includes a configuration wizard for the major exchanges, including Bittrex, GDAX, and Kraken.

12- My Bitcoin Bot

MBB is a team effort from Brad Sheridon and his proficient teammates, who are experts in cryptocurrency investment. My Bitcoin Bot is automated trading software that can be accessed by anyone who is ready to pay for it. While the monthly plan is $39 a month, the yearly subscription for this auto-trader bot is available for $297. My Bitcoin Bot comes with heaps of advantages, such as unlimited technical support, free software updates, and access to a trusted brokers list.

13- Crypto Arbitrager

A standalone application that operates on a dedicated server, Crypto Arbitrager can run its robots even when your PC is off. The developers behind this cryptocurrency trading bot claim that the software uses code integration of financial time series. Users can make money from the difference in the rates of Litecoin and Bitcoin. By implementing an advanced hedge-fund strategy, the trading bot effectively manages users' savings regardless of the state of the cryptocurrency market.

14- Crypto Robot 365

Crypto Robot 365 automatically trades your digital currency. It buys and sells popular cryptocurrencies such as Ripple, Bitcoin, Ethereum, Litecoin, and Monero. Rather than a signup fee, this platform charges its commission on a per-trade basis. The platform is FCA-regulated and offers a realistic, achievable win ratio. Users can tweak the system according to their trading needs; moreover, it has an established trading history and even offers risk management options.

Down The Line

While cryptocurrency trading is not a piece of cake, trading with currency bots may be confusing for many. The aforementioned trading bots are used by many, and each is backed by years of extensive hard work. With reliability, trustworthiness, smart work, and proactiveness being the top reasons for choosing any cryptocurrency trading bot, picking a trading bot is a hefty task. I recommend you experiment with a small amount of money first, and if your fate gets off to a shining start, pick the trading bot that perfectly suits your way of making money via cryptocurrency.

About the Author

Rameez Ramzan is a Senior Digital Marketing Executive at Cubix, a mobile app development company. He specializes in link building, content marketing, and site audits to help sites perform better. He is a tech geek and loves to dwell on tech news.
Read more:
Crypto-ML, a machine learning powered cryptocurrency platform
Beyond the Bitcoin: How cryptocurrency can make a difference in hurricane disaster relief
Apple changes app store guidelines on cryptocurrency mining

How to build a location-based augmented reality app

Guest Contributor
22 Nov 2018
7 min read
The augmented reality market is developing rapidly. Today it has a total market value of almost $15 billion according to Statista, and this figure could rise to $210 billion by 2022. Augmented reality is having a huge impact on the games industry, but it's also being used by organizations in fields as diverse as publishing and retail. For example, Layar is an app that turns static objects into live objects, while IKEA's Catalog app lets you imagine how different types of furniture might fit into your room. But it's not just about commerce: some apps have a distinctly educational bent, like Field Trip, which uses augmented reality to help users learn about the history that immediately surrounds them.

The best augmented reality apps are always deceptively simple. But to build a really effective augmented reality application, you need a diverse range of skills that span both the domains of software and real-world physics. Let's take a closer look at location-based augmented reality apps, including what they're used for and how you can begin building them.

How does a location-based AR app work?

Location-based augmented reality apps are sometimes called geo-based AR apps. Whatever you call them, one thing is important: they combine GPS mobile data and the digital compass to detect the location and position of the device. The application works like this: the AR app dispatches queries to the device's sensors, and once the data has been acquired, the app can determine where virtual information (such as images) should be added to the real world. Location-based augmented reality apps can be used both indoors and outdoors; when indoors, where it isn't possible to connect to GPS, the application will use beacons for location data.
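To make the geo part concrete, here is a hedged Python sketch of the core calculation such an app performs: given the device's GPS fix and a point of interest, compute the distance and compass bearing at which to render the virtual object (the coordinates are illustrative):

```python
# Distance and initial bearing from the device to a point of interest,
# using the haversine formula on a spherical Earth model.
import math

EARTH_RADIUS_M = 6_371_000

def distance_and_bearing(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)

    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))  # metres

    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360  # degrees from north
    return dist, bearing

# Device in midtown Manhattan, POI a few blocks north: render the marker there.
print(distance_and_bearing(40.7580, -73.9855, 40.7680, -73.9820))
```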
The best examples of existing location-based augmented reality apps

While reading about location-based augmented reality apps can give you a good idea of how they work, to be really inspired you need to try some out for yourself. Here's a list of some of the best location-based augmented reality apps out there.

Yelp Monocle

Yelp Monocle helps you navigate an unknown city. Using GPS, it provides exactly the sort of information you'd expect from Yelp, but in a format that's fully integrated with your surroundings. So you can see restaurant reviews and shop opening hours as you move around your environment.

Ingress

Ingress is an augmented reality gaming app that immerses you in a (semi) virtual world. Your main mission is to find portals that the game 'creates' in your immediate environment and open them. Essentially, the game is a great way to explore the world around you, placing a new augmented layer on a place that might otherwise be familiar.

Vortex Planetarium

Vortex Planetarium is an app for aspiring astronomers, or anybody else with a passing interest in astronomy. The app detects the user's location and then provides them with celestial data to better understand the night sky.

Steps to create a location-based AR app

So, if you like the idea of a location-based augmented reality app, you'll probably want to get started. As we've seen, these apps can be incredibly complex, but if you break the development process down, it becomes much easier.

1. Determine what resources you need

Depending on the complexity of your app, determine what resources are needed: that could be anything from data to other frameworks and services. For example, if you plan to create a game with 3D objects, you'll need to use Unity to build in that level of functionality and realism.

2. Choose the right augmented reality tool

There are a huge number of augmented reality software development kits out there. Rather than wade through every single one, here are some of the most popular, which can give you the widest range of possible features.

ARKit by Apple

ARKit from Apple features just about everything you'd need to develop an augmented reality application. For example, it combines computer vision and camera data to track the user's environment, and it is able to adjust the light level in the virtual model to respond to the level of light in the real world. ARKit 2 recently brought users a number of cool new features: it allows you to build interactivity into your application, and also allows you to build 'memory' into your app so it can 'remember' the location of augmented reality objects.

ARCore by Google

In Google's ARCore you'll find a mapping tool which is particularly useful for developing location-based AR apps. ARCore can also track motion and detect vertical and horizontal surfaces. In the latest version of ARCore, users can take two devices and work with one AR object from different viewing angles.

3. Add geolocation data

Not all SDKs provide a mapping feature. If yours doesn't, it's essential to add geolocation data; without it, the app wouldn't work. As we've already seen, GPS technology is typically used: it's convenient and can detect a user's location anywhere, though it can consume a lot of energy. Location services on iOS and Android will help to activate geolocation on the device.

3 augmented reality pitfalls to avoid

Developing something as complex as a location-based augmented reality app is bound to lead to some challenges, so be prepared and watch for these pitfalls.

Proper functionality. When users move with their camera and look for AR objects, those objects should remain static regardless of the user's movements. To do this, use SLAM (Simultaneous Localization and Mapping), a technique that allows software systems, like robots, to 'understand' where they are situated in relation to their surroundings.

Accuracy. A crucial factor for any AR app is accuracy. When developing your app, it's essential to consider the user's position to ensure that the app sends queries to sensors correctly; if it doesn't, the whole experience could seem plain weird to the user.

Distance calculation. Similarly, the distance between the device and the real world must be calculated correctly; again, if it isn't, your application simply will not work.

Get started - build an awesome augmented reality app!

Clearly, building a location-based augmented reality app isn't easy. It requires skill and a commitment to keep going in the face of challenges. You certainly need a team of great developers around you if you're going to deliver something that makes an impact. But, really, that's what makes software development exciting, right?

Author Bio

Vitaly Kuprenko is a technical writer at Cleveroad, a web and mobile app development company in Ukraine. He enjoys writing about tech innovations and digital ways to boost businesses.

Read more:
Magic Leap unveils Mica, a human-like AI in augmented reality
Magic Leap teams with Andy Serkis' Imaginarium Studios to enhance Augmented Reality
"As Artists we should be constantly evolving our technical skills and thought processes to push the boundaries on what's achievable," Marco Matic Ryan, Augmented Reality Artist

Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions

Guest Contributor
20 Dec 2017
8 min read
[We bring to you another guest post by Benjamin Rogojan, on using logistic regression to help the healthcare sector reduce patient readmissions. Ben's previous post, on ensemble methods to optimize machine learning models, is also available for a quick read.]

ER visits are not cheap for any party involved, whether that's the patient or the insurance company. However, this does not stop some patients from being regular repeat visitors. These recurring visits are due to lack of intervention for problems such as substance abuse, chronic diseases, and mental illness. They increase costs for everybody in the healthcare system and reduce quality of care by playing a role in the overflowing of Emergency Departments (EDs).

Research teams at UW and other universities are partnering with companies like KenSci to figure out how to approach the problem of reducing readmission rates. The ability to predict the likelihood of a patient's readmission will allow for targeted intervention, which in turn will help reduce the frequency of readmissions, making the population healthier and hopefully reducing the estimated 41.3 billion USD in healthcare costs for the entire system. How do they plan to do it? With big data and statistics, of course.

A plethora of algorithms is available for data scientists to approach this problem with. Many possible variables could affect readmission and medical costs, and there are many different ways researchers might pose their questions. However, the researchers at UW and many other institutions have been heavily focused on reducing the readmission rate simply by trying to predict whether a person would or would not be readmitted. In particular, this team of researchers was curious about chronic ailments. Patients with chronic ailments are likely to have random flare-ups that require immediate attention, so being able to predict whether a patient will have an ER visit can lead to managing the cause more effectively.

One approach taken by the data science team at UW, as well as the Department of Family and Community Medicine at the University of Toronto, was to utilize logistic regression to predict whether or not a patient would be readmitted. Patient readmission can be broken down into a binary output: either the patient is readmitted or not. As such, logistic regression has, in my experience, been a useful model for approaching this problem.

Logistic regression to predict patient readmissions

Why do data scientists like to use logistic regression? Where is it used? And how does it compare to other data algorithms? Logistic regression is a statistical method that statisticians and data scientists use to classify people, products, entities, and so on. It is used for analyzing data that produces a binary classification based on one or many independent variables: it produces two clear classifications (Yes or No, 1 or 0, and so on). With the example above, the binary classification is whether the patient is readmitted or not. Other examples could be whether to give a customer a loan, whether a medical claim is fraudulent, or whether a patient has diabetes.

Despite its name, logistic regression does not provide the same output as linear regression (per se). There are some similarities: the model contains a linear component, a term that looks very much like a linear equation. But the final output is based on the log odds.
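The equations this post refers to appeared as images in the original; reconstructed from the surrounding description, using the post's own notation, they are:

    odds = p / (1 - p)
    log odds: ln( p / (1 - p) ) = β0 + x · β
    and, inverting for the probability: p = 1 / (1 + e^-(β0 + x · β))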
Linear regression and multivariate regression both take one to many independent variables and produce some form of continuous output. Linear regression could be used to predict the price of a house, a person's age, or the cost of a product an e-commerce site should display to each customer; the output is not limited to only a few discrete classifications. Logistic regression, by contrast, produces discrete classifiers. For instance, an algorithm using logistic regression could be used to classify whether a certain stock price will be >$50 a share or <$50 a share, whereas linear regression would be used to predict whether a share will be worth $50.01, $50.02, and so on.

Logistic regression is a calculation that uses the odds of a certain classification. In the log odds equation above, p represents the odds, or probability, of the event. To reduce the error rate, we should predict Y = 1 when p ≥ 0.5 and Y = 0 when p < 0.5. This creates a linear classifier: a boundary such that when the coefficients β0 + x · β give a value of p < 0.5, we predict Y = 0. By generating coefficients that predict the logit transformation, the method allows us to classify for the characteristic of interest. Now, that is a lot of complex math mumbo jumbo. Let's try to break it down into simpler terms.

Probability vs. odds

Let's start with probability. Say a patient has a probability of 0.6 of being readmitted. Then the probability that the patient won't be readmitted is 0.4. Now we want to convert this into odds: that is what the formula above is doing. You take 0.6/0.4 and get odds of 1.5, meaning the odds of the patient being readmitted are 1.5 to 1. If instead the probability were 0.5 for both being readmitted and not being readmitted, the odds would be 1:1. The next step in the logistic regression model is to take the odds and get the log odds: taking the natural logarithm of 1.5 gives roughly 0.41.

In logistic regression, we don't actually know p; that is what we are trying to find and model using various coefficients and input variables. Each input provides a value that changes how much more likely an event will or will not occur, and all of these coefficients together are used to calculate the log odds. The model can take multiple variables, like age, sex, height, etc., and specify how much of an effect each variable has on the odds that an event will occur.

Once the initial model is developed, then comes the work of deciding its value. How does a business go from creating an algorithm inside a computer to translating it into action? Some of us like to say the "computers" are the easy part; personally, I find the hard part to be the "people". After all, at the end of the day, it comes down to business value: will an algorithm save money or not? That means it has to be applied in real life, which could take the form of a new initiative, strategy, product recommendation, and so on. You need to find the outliers that are worth going after! For instance, going back to the patient readmission example: the algorithm points out patients with high probabilities of being readmitted, but if the readmission costs are low, they will probably be ignored... sadly. That is how businesses (including hospitals) look at problems.

Logistic regression is a great tool for binary classification. It is unlike many other algorithms that estimate continuous variables or estimate distributions.
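A minimal scikit-learn sketch of the workflow described here, on made-up patient features (the feature columns, data, and decision rule are illustrative, not the UW team's actual model):

```python
# Fit a logistic regression classifier on synthetic "readmission" data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: age, number of chronic conditions, prior ER visits (fabricated).
X = np.array([[34, 0, 0], [71, 3, 4], [50, 1, 1], [63, 2, 5],
              [29, 0, 1], [80, 4, 6], [45, 1, 0], [58, 2, 2]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = readmitted

model = LogisticRegression(max_iter=1000).fit(X, y)

new_patient = np.array([[67, 3, 3]])
p = model.predict_proba(new_patient)[0, 1]  # probability of readmission
print(f"p(readmit) = {p:.2f} -> predict Y = {int(p >= 0.5)}")
```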
Logistic regression can be utilized to classify, for example, whether a person is likely to get cancer based on environmental variables like proximity to a highway, smoking habits, and so on. The method has been used effectively in the medical, financial, and insurance industries for a while. Knowing when to use which algorithm takes time; however, the more problems a data scientist faces, the faster they will recognize whether to use logistic regression or decision trees. Using logistic regression provides the opportunity for healthcare institutions to accurately target at-risk individuals who should receive a more tailored behavioral health plan to help improve their daily health habits. This in turn opens the opportunity for better health for patients and lower costs for hospitals.

About the Author

Benjamin Rogojan has spent his career focused on healthcare data. He has focused on developing algorithms to detect fraud, reduce patient readmission, and redesign insurance provider policy to help reduce the overall cost of healthcare. He has also helped develop analytics for marketing and IT operations in order to optimize limited resources such as employees and budget. Ben privately consults on data science and engineering problems, both solo and with a company called Acheron Analytics. He has experience working hands-on with technical problems as well as helping leadership teams develop strategies to maximize their data.

A Machine learning roadmap for Web Developers

Sugandha Lahoti
27 Aug 2018
7 min read
Now that you've opened this article, I'll assume you're a web developer who is excited at the prospect of building a machine learning project. You may be here for one of these reasons. Perhaps you've been in a circle of people who believe web development is dying (is it really dying, or just unwell?), or maybe you are stagnating in your current trajectory and want to learn something different and trending, like artificial intelligence. Perhaps you, your employer, or your client is aware of the capabilities of machine learning and wants to include it in some part of your web app to make it more powerful. Or, like the majority of folks, you just want to see first hand whether all the fuss about artificial intelligence is really worth the effort of switching gears, by building a toy ML side project. Either way, there are different approaches to fulfill these needs.

Learning Machine Learning for the Web with JavaScript

Learning machine learning from a web development background comes with its own constraints. You might worry about having to learn entirely new concepts from scratch: different algorithms, programming languages like Python, and mathematical concepts like linear algebra, calculus, and statistics. However, chances are you can skip learning a new language. You probably know some JavaScript in some form or the other, thanks to your web development experience. As such, you can learn machine learning in JavaScript (you don't have to learn another programming language from scratch) and take it right to your browser with WebGL.

There are some advantages to using JavaScript for ML. Its popularity is one: while ML in JavaScript is not as popular as Python's ML ecosystem at the moment, the language itself is. As demand for ML applications rises, and as hardware becomes faster and cheaper, it's only natural for machine learning to become more prevalent in the JavaScript world. The JavaScript ecosystem offers a rich set of libraries suited to most machine learning tasks:

- Math: math.js
- Data analysis: d3.js
- Server: node.js (express, koa, hapi)
- Performance: Tensorflow.js (e.g. GPU accelerated via the WebGL API in the browser), Keras.js, etc.

Read also: 5 JavaScript machine learning libraries you need to know

BRIIM is a good collection of materials to get you started in machine learning as a web developer or JavaScript enthusiast. In case you're interested in learning Python instead of JavaScript, here is the set of libraries you should pick:

- Math: numpy
- Data analysis: Pandas
- Data mining: PySpark
- Server: Flask, Django
- Performance: TensorFlow (because it is written with a Python API over a C/C++ engine) or Keras (sits on top of TensorFlow)
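To give a feel for how little code the Python stack demands, here is a minimal Keras sketch (the layer sizes and data are placeholders, not a recommended architecture):

```python
# A tiny Keras model: numpy for data, a two-layer network for a binary output.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X = np.random.rand(100, 4)           # 100 samples, 4 features (synthetic)
y = (X.sum(axis=1) > 2).astype(int)  # a made-up binary target

model = Sequential([
    Dense(8, activation="relu", input_shape=(4,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[:3]))          # probabilities for the first 3 samples
```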
Using Machine Learning as a service

If you don't want to spend your time learning frameworks, tools, and languages suited to machine learning, you can adopt Machine Learning as a Service, or MLaaS. These services provide machine learning tools as part of cloud computing services, so you can benefit from machine learning without the allied cost, time, and risk of establishing an in-house machine learning team. All you need is sufficient knowledge of incorporating APIs: all machine learning tasks, including data pre-processing, model training, model evaluation, and prediction, can be completed through MLaaS.

Read also: How machine learning as a service is transforming cloud

A large number of companies provide Machine Learning as a service. The most prominent ones include:

Amazon Machine Learning

Amazon ML makes it easy for web developers to build smart applications using simple APIs. This includes applications for fraud detection, demand forecasting, targeted marketing, and click prediction. Amazon provides a Developer Guide, which gives a conceptual overview of Amazon ML and includes detailed instructions for using the service, as well as an API reference, which describes all the API operations and provides sample requests and responses for supported web service protocols.

Azure ML web app templates

The web app templates available in the Azure Marketplace can build a custom web app that knows your web service's input data and expected results. All you need to do is give the web app access to your web service and data, and the template does the rest. There are two available templates: the Azure ML Request-Response Service Web App Template and the Azure ML Batch Execution Service Web App Template. Each template creates a sample ASP.NET application using the API URI and key for your web service, and then deploys the application as a website to Azure. No coding is required to use these templates: you just supply the API key and URI, and the template builds the application for you.

Google Cloud based APIs

Google also provides machine learning services, with pre-trained models and a service to generate your own tailored models. Google's Cloud AutoML is a suite of machine learning products that enables developers with limited machine learning expertise to train high-quality models specific to their business needs. Cloud AutoML is used by Disney on their website shopDisney to enhance the guest experience through more relevant search results, expedited discovery, and product recommendations.

Building Conversational Interfaces

As a web developer, another thing you might be looking into is developing conversational interfaces, or chatbots, to enhance your web apps. Amazon, Google, and Microsoft provide machine learning powered tools to help developers build their own chatbots.

Amazon Lex

You can embed chatbots in your web apps with Amazon Lex, featuring ASR (Automatic Speech Recognition) and NLP (Natural Language Processing) capabilities. The API can recognize written and spoken text, and the Lex interface allows you to hook the recognized inputs into various back-end solutions. Lex currently supports deploying chatbots for Facebook Messenger, Slack, and Twilio.

Google Dialogflow

Google's Dialogflow can build voice and text-based conversational interfaces, such as voice apps and chatbots, powered by AI. Dialogflow incorporates Google's machine learning expertise and products such as Google Cloud Speech-to-Text. The API can be tweaked and customized for the intents you need using Java, Node.js, and Python, and it is also available as an enterprise edition.
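For a sense of what "incorporating APIs" means in practice, here is one plausible shape of a Dialogflow detect-intent call using the Python client of this era (the project ID, session ID, and query text are placeholders, and names may differ in newer client versions; check the current client library docs):

```python
# Send a user utterance to a Dialogflow agent and read back the matched intent.
import dialogflow  # pip install dialogflow

session_client = dialogflow.SessionsClient()
session = session_client.session_path("my-gcp-project", "session-123")

text_input = dialogflow.types.TextInput(text="Book a table for two", language_code="en")
query_input = dialogflow.types.QueryInput(text=text_input)

response = session_client.detect_intent(session=session, query_input=query_input)
print(response.query_result.intent.display_name)  # which intent matched
print(response.query_result.fulfillment_text)     # the agent's reply
```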
Microsoft Azure Cognitive Services

Microsoft Cognitive Services simplify a variety of AI-based tasks, giving you a quick way to add intelligence technologies to your bots with just a few lines of code. They provide tools and APIs that aid the development of conversational interfaces. These include:

Translator Speech API
Bing Speech API, to convert text into speech and speech into text
Speaker Recognition API, for voice verification tasks
Custom Speech Service, to apply Azure NLP capabilities using your own data and models
Language Understanding Intelligent Service (LUIS), an API that analyzes intentions in text so they can be recognized as commands
Text Analysis API, for sentiment analysis and defining topics
Bing Spell Check
Translator Text API
Web Language Model API, which estimates the probabilities of word combinations and supports word autocompletion
Linguistic Analysis API, used for sentence separation, tagging parts of speech, and dividing texts into labeled phrases

Read also: Top 4 chatbot development frameworks for developers

These tools should be enough to get your feet off the ground quickly and move into a specific area of machine learning. Ultimately, your choice of tool depends on the kind of application you want to build, your level of expertise, and how much time and effort you're willing to put into learning. Obviously, depending on your area of choice, you will have to research further and develop yourself in those areas.

How should web developers learn machine learning?
5 examples of Artificial Intelligence in Web apps
The most valuable skills for web developers to learn in 2018
How artificial intelligence can improve pentesting

Melisha Dsouza
21 Oct 2018
8 min read
686 cybersecurity breaches were reported in the first three months of 2018 alone, with unauthorized intrusion accounting for 38.9% of incidents. And with high-profile data breaches dominating headlines, it's clear that while modern, complex software architecture might be more adaptable and data-intensive than ever, securing that software is proving a real challenge.

Penetration testing (or pentesting) is a vital component of the cybersecurity toolkit. In theory, it should be at the forefront of any robust security strategy. But it isn't as simple as rolling something out with a few emails and new software - it demands people with great skills, as well as a culture where stress testing and hacking your own system is viewed as a necessity, not an optional extra.

This is where artificial intelligence comes in - the automation that you can achieve through artificial intelligence could well make pentesting much easier to do consistently and at scale. In turn, this would help organizations tackle both issues of skills and culture, and get serious about their cybersecurity strategies. But before we dive deeper into artificial intelligence and pentesting, let's take a look at where we are now, and the shortcomings of established pentesting methods.

The shortcomings of established methods of pentesting

Typically, pentesting is carried out in 5 stages:

Source: Incapsula

Every one of these stages, when carried out by humans, opens up the chance of error. Yes, software is important, but contextual awareness and decisions are required. This process, then, provides plenty of opportunities for error. From misinterpreting data - like thinking a system is secure when actually it isn't - to taking care of evidence and thoroughly and clearly recording the results of pentests, even the most experienced pentester will get things wrong.

But even if you don't make any mistakes, this whole process is hard to do well at scale. It requires a significant amount of time and energy to test a piece of software, which, given the pace of change created by modern processes, makes it much harder to maintain the levels of rigor you ultimately want from pentesting. This is where artificial intelligence comes in.

The pentesting areas that artificial intelligence can impact

Let's dive into the different stages of pentesting that AI can impact.

#1 Reconnaissance Stage

The most important stage in pentesting is the reconnaissance, or information gathering, stage. As is rightly said by many in cybersecurity, "The more information gathered, the higher the likelihood of success." Therefore, a significant amount of time should be spent obtaining as much information as possible about the target. Using AI to automate this stage would provide accurate results as well as save a lot of the time invested. Using a combination of Natural Language Processing, Computer Vision, and Artificial Intelligence, experts can identify a wide variety of details that can be used to build a profile of the company, its employees, the security posture, and even the software/hardware components of the network and computers.

#2 Scanning Stage

Comprehensive coverage is needed in the scanning phase. Manually scanning through thousands of systems in an organization is not ideal. Nor is it ideal to manually interpret the results returned by scanning tools. AI can be used to tweak the code of the scanning tools to scan systems as well as to interpret the results of the scan. It can save pentesters time and improve the overall efficiency of the pentesting process.
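As a toy illustration of what "interpreting the results of the scan" could mean in code, here is a sketch that flags hosts whose open-port profile is rare across the fleet. It is a stand-in for a real learned model; the hosts and ports are made up:

```js
// Toy anomaly scorer for port-scan results (illustrative only).
const scans = [
  { host: '10.0.0.1', ports: [22, 80, 443] },
  { host: '10.0.0.2', ports: [22, 80, 443] },
  { host: '10.0.0.3', ports: [22, 3389, 5900] }, // unusual combination
];

// Count how often each port appears across all scanned hosts.
const portCounts = new Map();
for (const { ports } of scans) {
  for (const p of ports) portCounts.set(p, (portCounts.get(p) || 0) + 1);
}

// Score each host by the average rarity of its open ports.
const scored = scans.map(({ host, ports }) => {
  const rarity =
    ports.reduce((sum, p) => sum + 1 / portCounts.get(p), 0) / ports.length;
  return { host, rarity };
});

// Hosts at the top of the list deserve a closer manual look.
scored.sort((a, b) => b.rarity - a.rarity);
console.log(scored);
```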
AI can also focus on test management and the automatic creation of test cases that check whether a particular program can be tagged as having a security flaw. It can also be used to check how a target system responds to an intrusion.

#3 Gaining and Maintaining Access Stage

The gaining access phase involves taking control of one or more network devices in order to either extract data from the target or to use that device to launch attacks on other targets. Once a system is scanned for vulnerabilities, pentesters need to ensure that the system does not have any loopholes that attackers can exploit to get into the network devices. They need to check that the network devices are safely protected with strong passwords and other necessary credentials. AI-based algorithms can try out different combinations of passwords to check whether the system is susceptible to a break-in. The algorithms can be trained to observe user data and look for trends or patterns to make inferences about possible passwords in use.

Maintaining access focuses on establishing other entry points to the target. This phase is also expected to trigger mechanisms that ensure the penetration tester's own security while accessing the network. AI-based algorithms should be run at regular intervals of time to guarantee that the primary path to the device is closed. The algorithms should be able to discover backdoors, new administrator accounts, encrypted channels, new network access channels, and so on.

#4 Covering Tracks and Reporting

The last stage tests whether an attacker can actually remove all traces of his attack on the system. Evidence is most often stored in user logs, existing access channels, and in error messages caused by the infiltration process. AI-powered tools can assist in the discovery of hidden backdoors and multiple access points that have been left open on the target network. All of these findings should be automatically stored in a report with a proper timeline associated with every attack performed.

A great example of a tool that efficiently performs all these stages of pentesting is CloudSEK's X-Vigil. This tool leverages AI to extract data, derive analysis, and discover vulnerabilities in time to protect an organization from data breaches.

Manual vs automated vs AI-enabled pentesting

Now that you have gone through the shortcomings of manual pentesting and the advantages of AI-based pentesting, let's do a quick side-by-side comparison.

| Manual testing | Automated testing | AI-enabled pentesting |
| --- | --- | --- |
| Manual testing is not accurate at all times due to human error. | Automated testing is more likely to return false positives. | AI-enabled pentesting is accurate as compared to automated testing. |
| Manual testing is time-consuming and takes up human resources. | Automated testing is executed by software tools, so it is significantly faster than a manual approach. | AI-enabled testing does not consume much time. The algorithms can be deployed for thousands of systems in a single instance. |
| Investment is required for human resources. | Investment is required for testing tools. | AI saves the investment in human resources for pentesting; the same employees can be used for less repetitive, more valuable tasks. |
| Manual testing is only practical when the test cases are run once or twice, and frequent repetition is not required. | Automated testing is practical when tools can find vulnerabilities within programmable bounds. | AI-based pentesting is practical in organizations with thousands of systems that need to be tested at once, to save time and resources. |
AI-based pentesting tools

Pentoma is an AI-powered penetration testing solution that allows software developers to conduct smart hacking attacks and efficiently pinpoint security vulnerabilities in web apps and servers. It identifies holes in web application security before hackers do, helping prevent potential security damage. Pentoma analyzes web-based applications and servers to find unknown security risks. With each hacking attempt, machine learning algorithms incorporate new vulnerability discoveries, thus continuously improving and expanding threat detection capability.

Wallarm Security Testing is another AI-based testing tool that discovers network assets, scans for common vulnerabilities, and monitors application responses for abnormal patterns. It discovers application-specific vulnerabilities via Automated Threat Verification: the content of a blocked malicious request is used to create a sanitized test with the same attack vector, to see how the application, or its copy in a sandbox, would respond.

With such AI-based pentesting tools, pentesters can focus on the development process itself, confident that applications are secured against the latest hacking and reverse engineering attempts, thereby helping to streamline a product's time to market.

Perhaps it is the increase in the number of costly data breaches, or the continually expanding attack surface, or the proliferation of sensitive data protected by increasingly complex security technologies that businesses lack the in-house expertise to properly manage. Whatever the reason, more organizations are waking up to the fact that vulnerabilities not caught in time can be catastrophic for the business. These weaknesses, which can range from poorly coded web applications, to unpatched databases, to exploitable passwords, to an uneducated user population, can enable sophisticated adversaries to run amok across your business. It will be interesting to see how AI grows in this field to overcome the aforementioned shortcomings.

5 ways artificial intelligence is upgrading software engineering
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
8 ways Artificial Intelligence can improve DevOps
What is React.js and how does it work?

Packt
05 Mar 2018
9 min read
What is React.js?

React.js is one of the most talked about JavaScript web frameworks in years. Alongside Angular, and more recently Vue, React is a critical tool that has had a big impact on the way we build web applications. But it's hard to find a better description of React.js than the single sentence on the project's home page: A JavaScript library for building user interfaces.

It's a library. For building user interfaces. This is perfect because, more often than not, this is all we want. The best part about this description is that it highlights React's simplicity. It's not a mega framework. It's not a full-stack solution that's going to handle everything from the database to real-time updates over web socket connections. We don't actually want most of these pre-packaged solutions, because in the end, they usually cause more problems than they solve. Facebook sure did listen to what we want.

This is an extract from React and React Native by Adam Boduch. Learn more here.

React.js is just the view. That's it.

React.js is generally thought of as the view layer in an application. You might have used a library like Handlebars or jQuery in the past. Just as jQuery manipulates UI elements, or Handlebars templates are inserted onto the page, React components change what the user sees. The following diagram illustrates where React fits in our frontend code. This is literally all there is to React. We want to render this data to the UI, so we pass it to a React component, which handles the job of getting the HTML into the page.

You might be wondering what the big deal is. On the surface, React appears to be just another rendering technology. But it's much more than that. It can make application development incredibly simple. That's why it's become so popular.

React.js is simple

React doesn't have many moving parts for us to learn about and understand. The advantage of having a small API to work with is that you can spend more time familiarizing yourself with it, experimenting with it, and so on. The opposite is true of large frameworks, where all your time is devoted to figuring out how everything works. The following diagram gives a rough idea of the APIs that we have to think about when programming with React.

React is divided into two major APIs. First, there's React DOM. This is the API that's used to perform the actual rendering on a web page. Second, there's the React component API. These are the parts of the page that are actually rendered by React DOM. Within a React component, we have the following areas to think about:

Data: This is data that comes from somewhere (the component doesn't care where), and is rendered by the component.
Lifecycle: These are methods that we implement to respond to changes in the lifecycle of the component. For example, the component is about to be rendered.
Events: This is code that we write for responding to user interactions.
JSX: This is the syntax of React components used to describe UI structures.

Don't fixate on what these different areas of the React API represent just yet. The takeaway here is that React is simple. Just look at how little there is to figure out! This means that we don't have to spend a ton of time going through API details here. Instead, once you pick up the basics, you can spend more time on nuanced React usage patterns.

React has a declarative UI structure

React newcomers have a hard time coming to grips with the idea that components mix markup in with their JavaScript.
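To make that concrete, here is a small example of the kind of component being described - plain JavaScript logic and markup living side by side (the component and its props are invented for illustration):

```js
// A small React component mixing markup with JavaScript logic.
function Greeting({ name, isLoggedIn }) {
  // Plain JavaScript computes what should be shown...
  const message = isLoggedIn ? `Welcome back, ${name}!` : 'Please sign in.';

  // ...and JSX declares the markup that renders it.
  return (
    <section className="greeting">
      <h1>{message}</h1>
      {isLoggedIn && <button>Log out</button>}
    </section>
  );
}
```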
If you've looked at React examples and had the same adverse reaction, don't worry. Initially, we're all skeptical of this approach, and I think the reason is that we've been conditioned for decades by the separation of concerns principle. Now, whenever we see things mixed together, we automatically assume that this is bad and shouldn't happen.

The syntax used by React components is called JSX (JavaScript XML). The idea is actually quite simple. A component renders content by returning some JSX. The JSX itself is usually HTML markup, mixed with custom tags for the React components. What's absolutely groundbreaking here is that we don't have to perform little micro-operations to change the content of a component.

For example, think about using something like jQuery to build your application. You have a page with some content on it, and you want to add a class to a paragraph when a button is clicked. Performing these steps is easy enough, but the challenge is that there are steps to perform at all. This is called imperative programming, and it's problematic for UI development. While this example of changing the class of an element in response to an event is simple, real applications tend to involve more than 3 or 4 steps to make something happen.

Read more: 5 reasons to learn React

React components don't require executing steps in an imperative way to render content. This is why JSX is so central to React components. The XML-style syntax makes it easy to describe what the UI should look like. That is, what are the HTML elements that this component is going to render? This is called declarative programming, and it is very well suited to UI development.

Time and data

Another area that's difficult for React newcomers to grasp is the idea that JSX is like a static string, representing a chunk of rendered output. Are we just supposed to keep rendering this same view? This is where time and data come into play. React components rely on data being passed into them. This data represents the dynamic aspects of the UI. For example, a UI element that's rendered based on a Boolean value could change the next time the component is rendered. Here's an illustration of the idea.

Each time the React component is rendered, it's like taking a snapshot of the JSX at that exact moment in time. As our application moves forward through time, we have an ordered collection of rendered user interface components. In addition to declaratively describing what a UI should be, re-rendering the same JSX content makes things much easier for developers. The challenge is making sure that React can handle the performance demands of this approach.

Performance matters with React

Using React to build user interfaces means that we can declare the structure of the UI with JSX. This is less error-prone than the imperative approach of assembling the UI piece by piece. However, the declarative approach does present us with one challenge - performance.

For example, having a declarative UI structure is fine for the initial rendering, because there's nothing on the page yet. So the React renderer can look at the structure declared in JSX, and render it into the browser DOM. This is illustrated below.

On the initial render, React components and their JSX are no different from other template libraries. For instance, Handlebars will render a template to HTML markup as a string, which is then inserted into the browser DOM.
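A minimal sketch of that initial render with React DOM, reusing the Greeting component from earlier (the pre-React 18 ReactDOM.render API matches the era of this article):

```js
import React from 'react';
import ReactDOM from 'react-dom';

// Initial render: React DOM turns the declared JSX into real DOM nodes.
ReactDOM.render(
  <Greeting name="Ada" isLoggedIn={false} />,
  document.getElementById('root')
);

// Rendering again with new data is just another declarative snapshot;
// React works out the necessary DOM updates for us.
ReactDOM.render(
  <Greeting name="Ada" isLoggedIn={true} />,
  document.getElementById('root')
);
```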
Where React is different from libraries like Handlebars is when data changes and we need to re-render the component. Handlebars will just rebuild the entire HTML string, the same way it did on the initial render. Since this is problematic for performance, we often end up implementing imperative workarounds that manually update tiny bits of the DOM. What we end up with is a tangled mess of declarative templates and imperative code to handle the dynamic aspects of the UI.

We don't do this in React. This is what sets React apart from other view libraries. Components are declarative for the initial render, and they stay this way even as they're re-rendered. It's what React does under the hood that makes re-rendering declarative UI structures possible.

React has something called the virtual DOM, which is used to keep a representation of the real DOM elements in memory. It does this so that each time we re-render a component, it can compare the new content to the content that's already displayed on the page. Based on the difference, the virtual DOM can execute the imperative steps necessary to make the changes. So not only do we get to keep our declarative code when we need to update the UI, React will also make sure that it's done in a performant way. Here's what this process looks like:

When you read about React, you'll often see words like diffing and patching. Diffing means comparing old content with new content to figure out what's changed. Patching means executing the necessary DOM operations to render the new content.

React.js has the right level of abstraction

React.js doesn't have a great deal of abstraction, but the abstractions the framework does implement are crucial to its success. In the preceding section, you saw how JSX syntax translates to the low-level operations that we have no interest in maintaining. The more important way to look at how React translates our declarative UI components is the fact that we don't necessarily care what the render target is. The render target happens to be the browser DOM with React. But this is changing.

We're only just starting to see this with React Native, but the possibilities are endless. I personally will not be surprised when React Toast becomes a thing, targeting toasters that can singe the rendered output of JSX onto bread. The abstraction level with React is at the right level, and it's in the right place.

The following diagram gives you an idea of how React can target more than just the browser. From left to right, we have React Web (just plain React), React Native, React Desktop, and React Toast. As you can see, to target something new, the same pattern applies:

Implement components specific to the target
Implement a React renderer that can perform the platform-specific operations under the hood
Profit

This is obviously an oversimplification of what's actually implemented for any given React environment. But the details aren't so important to us. What's important is that we can use our React knowledge to focus on describing the structure of our user interface on any platform.

Disclaimer: React Toast will probably never be a thing, unfortunately.
Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter

Guest Contributor
14 Jun 2019
5 min read
Apple recently announced a new declarative UI framework for its operating systems - SwiftUI - at its annual developer conference, WWDC 2019. SwiftUI will power all of Apple's devices (MacBooks, watches, TVs, iPads, and smartphones). You can integrate SwiftUI views with objects from the UIKit, AppKit, and WatchKit frameworks to take further advantage of platform-specific functionality. It is said to make developers more productive and to save effort when writing code.

The SwiftUI documentation states: "Declare the content and layout for any state of your view. SwiftUI knows when that state changes, and updates your view's rendering to match." This means that developers simply have to describe the UI for each state in response to events and leave the in-between transitions to the framework. The UI updates automatically as the state changes.

Benefits of a declarative UI language

Without describing the control flow, a declarative UI language expresses the logic of a computation. You describe what elements you need and how they should look, without having to worry about their exact position and visual style. Some of the benefits of a declarative UI language are:

Increased speed of development
Seamless integration between designers and coders
Forced separation between logic and presentation
Changes in the UI don't require recompilation

SwiftUI's declarative syntax is quite similar to Google's Flutter, which also runs on declarative UI programming. Flutter contains beautiful widgets with captivating logos, fonts, and expressive style. The use of Flutter has increased significantly in 2019, and it is among the fastest-growing skills in the developer community. Similar to Flutter, SwiftUI provides layout structure, controls, and views for an application's user interface.

This is the first time Apple has stepped into declarative UI programming, and it has described SwiftUI as a modern way to declare user interfaces. In the imperative method, developers had to manually construct a fully functional UI entity and later change it using methods and setters. In SwiftUI, the application layout just needs to be described once, vastly reducing code complexity.

Apart from the declarative UI, SwiftUI also ties in closely with Xcode, Apple's integrated development environment. If code modifications are made inside Xcode, developers can now preview them in real time and tweak parameters. SwiftUI also features dark mode, drag-and-drop building tools in Xcode, and interface layout, and languages such as Hebrew and Arabic are also incorporated. However, one of the drawbacks of SwiftUI is that it only supports apps that move forward with iOS 13. It is a somewhat limited tool in this sense, and production would take at least a year or two if an older iOS version is to be supported.

SwiftUI vs Flutter development

Apple's answer to Google is simple here. Flutter is compatible with both Android and iOS, whereas SwiftUI is a new member of Apple's ecosystem. This highlights that Flutter is pushing other ecosystems to adopt its simple way of developing UIs. Now with the introduction of SwiftUI, which works on the same mechanism as Flutter, Apple has announced itself to the world of declarative UI programming.

What does it mean for developers who build exclusively for iOS? Well, now they can make native apps for clients who do not prefer the Flutter way.
SwiftUI will probably reduce the incentive for Apple-only developers to adopt Flutter. Many have pointed out that Apple has just introduced a new framework for essentially the same UI experience. We will have to wait and see what SwiftUI has in store over the longer run.

Developers in communities like Reddit and others are actively sharing their thoughts on the recent arrival of SwiftUI. Many agree on the fact that "SwiftUI is flutter with no Android support". Developers who target the Apple-only platform through SwiftUI will eventually return to Flutter to target all other platforms, which means Flutter could benefit from SwiftUI and not the other way round.

The popularity of React Native is a no-brainer. Native mobile app development for iOS and Android is always high on cost, and companies usually work with two different teams. Cross-platform solutions drastically bridge the gap in development costs. One could think of Flutter as React Native with full support for native features (one doesn't have to depend on native platforms for solutions, and Flutter delivers performance similar to native). Like React Native, Flutter uses reactive-style views. However, while React Native transpiles to native widgets, Flutter compiles all the way to native code.

Conclusion

SwiftUI is about making development interactive, faster, and easier. The inbuilt graphical UI design tool allows designers to assemble a user interface without having to write any code. Once the code is modified, it instantly appears in the visual design tool, and code can be assembled, redefined, and tested in real time with previews that can run on a range of Apple's devices. However, SwiftUI is still under development and will take its time to mature.

On the other hand, Flutter app development services continue to deliver scalable solutions for startups and enterprises. Building native apps is not cheap, and Flutter, with the same native feel, provides cost-effective services. It remains a competitive cross-platform framework with or without SwiftUI's presence.

Author Bio

Keval Padia is the CEO of Nimblechapps, a prominent mobile app development company based in India. He has a good knowledge of mobile app design and user experience design. He follows different tech blogs, and current updates in the field lure him to express his views and thoughts on certain topics.
What's new in VR Haptics?

Natasha Mathur
16 Jul 2018
8 min read
Virtual Reality is evolving at a staggering rate. Some of humankind's most exciting tools and technologies are coming to the Virtual Reality space. One such technology, which is taking over the VR world and making it more powerful, is VR haptics.

VR haptic technology adds an extra dimension to the VR world by letting users feel the virtual environment via the sense of touch, in addition to visual and aural perception. It makes you feel truly immersed in the artificial world. Imagine yourself in a desert, seeing the sand and feeling it glide under your feet as you walk. Haptics uses external devices like gloves, shoes, and joysticks, via which users can receive feedback in the form of vibrations from computer applications. This feedback provides physical sensations in the hand or other parts of the body. It also provides a realistic simulation of movements and behaviors, similar to those realized in the real world.

VR haptics: a growing domain

VR haptic technology is growing beyond creating vibrations in game controllers. In the near future, you might be able to cuddle a dog and feel it licking your face in the VR world. This speaks volumes about the pace at which haptic technology is growing.

One famous example that discusses modern VR is the popular sci-fi novel Ready Player One. It illustrates the possibilities of haptic technology in the future. The novel explores the journey of a guy as he sets foot in a virtual reality simulator (the OASIS). He uses a headset and a pair of gloves to maneuver around the virtual world. Apart from the gloves, a lot of future concept products are also covered in the novel that make the illusion of immersion easier to picture, such as towers emitting smells in the VR world and wind/temperature generators that mimic real life.

Haptics came about just as head-mounted displays (HMDs) came to light in the 2010s. HMDs allowed people to see virtual reality, while haptic feedback gave people the opportunity to experience the virtual world and to act within it. Texture, temperature, pressure, taste, smell, and other non-visual sensory inputs became real in VR. Apart from virtual reality games and apps, haptic feedback is used widely in personal computers, mobile devices, robots, and more. But in this article, we'll stick to the use of haptic technology, or haptic feedback, in the VR space.

Usually, most VR users use touch controllers for haptic feedback. But recently, a lot of third-party companies are coming out with products such as gloves for systems like the Oculus Rift and HTC Vive. Here is a list of recent developments in haptic technology for the VR world.

Super affordable VR haptic gloves by Plexus

Most of the currently available options in the VR haptics field are somewhat pricey, but earlier this month Plexus announced their new product, a VR haptic and sensor glove.

https://vimeo.com/276517370

Source: Plexus

Key features

Plexus VR haptic gloves offer a fully modular tracking solution capable of tracking up to 0.01 degrees of precision.
The gloves are capable of individual finger tracking, as well as tracking each joint on the finger, thereby offering higher precision in the VR world.
They are compatible with the HTC Vive, Oculus Rift, and Windows Mixed Reality devices.
The VR haptic gloves also come with additional adapter plates.

The development kit version of the Plexus haptic gloves, priced at $249 per glove pair, can be pre-ordered on the official Plexus website.
The company will begin shipping in August 2018, but at the moment shipping is only available to the USA, Europe, Canada, and Australia.

Kaaya Tech's full body tracking HoloSuit

Kaaya came out with a motion capture (MoCap) suit called the HoloSuit last month, which offers motion capture as well as haptic feedback. HoloSuit is billed as the world's first affordable, wireless, easy to use, bi-directional, full body motion capture suit. The user's entire body movement data is captured by the HoloSuit, and it uses haptic feedback to send information back to the user.

https://www.youtube.com/watch?v=SEQsDR32gII&t=122s

Source: HoloSuit

It can be used in various areas such as sports, healthcare, education, entertainment, or industrial operations.

Key features

The HoloSuit consists of 36 embedded sensors in the pro version and 26 embedded sensors in the less complex version. The embedded sensors carry out all the work of capturing body motion, which is necessary for world-scale tracking.
It also consists of 9 haptic feedback devices and 6 embedded firing buttons (buttons that govern specific tasks such as saving the game, pausing, and so on), which are dispersed across both arms, legs, and all ten fingers.
It delivers data wirelessly, either through Wi-Fi or Bluetooth LE, to a VR setup using Unity or a Wi-Fi SDK.
The HoloSuit doesn't come with an external camera tracking option.
It supports all the major platforms, such as Windows, macOS, iOS, and Android devices.

A complete HoloSuit is quite expensive and starts at a regular price of $999. The jacket and jersey are priced at $499, the jersey or track pants at $399, and a pair of gloves at $799. The HoloSuit Pro is priced at $1,599. Shipping for the full body VR haptic HoloSuit will start this November.

Disney's VR haptic "Force Jacket"

Disney came out with its VR haptic jacket, the "Force Jacket", back in April. It provides users with precisely directed force along with high-frequency vibrations, felt against the user's upper body in sync with the visual medium. The prototype is made out of a converted life jacket and is fitted with 26 airbags.

https://www.youtube.com/watch?v=5BOFHEow608

Source: DisneyResearchHub

The Force Jacket was created by engineers at Disney Research, MIT, and Carnegie Mellon University.

Key features

The haptic jacket uses an air compressor and a vacuum pump. The air compartments in the jacket can be inflated to exert a force on the user's body relative to force-sensitive resistors.
The 26 air compartments are activated using microcontrollers for pressure or vibrotactile feedback, or both. Controllers are used to activate the solenoid valves connected to the vacuum.
Certain jacket inflation parameters, like speed, force, and duration, are specified using the haptic effects editor.
The jacket makes use of the motion interface to sequentially inflate the compartments, simulating motion across the body.
Each airbag within the haptic jacket can be influenced to mimic sensations such as being hit in the chest by a snowball, getting tapped on the shoulder, lime dripping on the back, getting punched in the side, and a snake coiling its body around the user.

The jacket is mainly intended for the entertainment and gaming industry and is not available on the consumer market. But it seems to have great potential for other applications in the future.

VR gloves by HaptX

HaptX announced a pair of VR gloves back in November of last year.
The gloves use micro-pneumatic technology for detailed haptics and force feedback (the ability to restrict your fingers' movement to simulate holding objects) in the fingers.

https://www.youtube.com/watch?v=2C2_kbjtjRU

Source: HaptX

Key features

The technology provides 100 points of tactile displacement feedback.
The gloves offer up to five pounds of resistance per finger.
They also come with sub-millimeter precision motion tracking.
The glove uses an SDK of HaptX's own design, created using Unreal Engine's physics system. This tells the glove when and where it needs to apply haptic effects, as well as when and how to engage the force feedback.

No information on pricing or worldwide availability has been released by the company yet. But it is rumored that the VR gloves will launch on the consumer market sometime later this year.

Apart from these products, there are other, smaller advancements happening in the VR haptics space. For example, Heather Culbertson, an Assistant Professor in USC's computer science department, recently created a haptic armband capable of mimicking the sensation of a human touch.

VR aims to provide you with an environment where you feel truly immersed and where you can feel objects as in the real world. These products are bringing the VR world a step closer to richer levels of immersive experience. Gone are the days when haptic feedback was limited to vibrating controllers and joysticks. As the technology advances, a whole new world of VR haptic devices is here to make your VR experience as seamlessly immersive as possible. In fact, some people even believe that without haptics, VR is nothing but a picture and a sound.

Game developers say Virtual Reality is here to stay
CTA announces its first AR/VR Standard terminology
Top 7 modern Virtual Reality hardware systems
What can Artificial Intelligence do for the Aviation industry

Guest Contributor
14 May 2019
6 min read
The use of AI (Artificial Intelligence) technology in commercial aviation has brought some significant changes in the way flights are operated today. The world's leading airline service providers are now using AI tools and technologies to deliver a more personalized traveling experience to their customers. From building AI-powered airport kiosks to using AI for automating airline operations and security checks, AI will play even more critical roles in the aviation industry.

Engineers have found that AI can help the aviation industry with machine vision, machine learning, robotics, and natural language processing. Artificial intelligence has been found to be highly potent, and various research efforts have shown how its use can bring significant changes to aviation. A few airlines now use artificial intelligence for predictive analytics, pattern recognition, auto-scheduling, targeted advertising, and customer feedback analysis, with promising results for a better flight experience. A recent report shows that aviation professionals are considering using artificial intelligence to monitor pilot voices for a hassle-free flying experience for passengers. This technology is set to bring huge changes to the world of aviation.

Identification of passengers

There's no need to explain how modern inventions are contributing to the betterment of mankind, and AI can help air transportation in numerous ways. Check-in before boarding is a vital task for an airline, and airlines can take the help of artificial intelligence to do it easily; the same technology can also be used for identifying passengers. The American airline company Delta Air Lines took the initiative in 2017: their online check-in via the Delta mobile app and ticketing kiosks has shown promising results, and nowadays you can see many airlines taking similar features to a whole new level.

The Transportation Security Administration of the United States has introduced new AI technology to identify potential threats at John F. Kennedy, Los Angeles International, and Phoenix airports. Likewise, Hartsfield-Jackson Airport is planning to launch America's first biometric terminal. Once installed, the AI technology will make the process of passenger identification fast and easy for officials. Security scanners, biometric identification, and machine learning are some of the AI technologies that will make a number of jobs easier for us. In this way, AI can also help predict disruption in airline services.

Baggage screening

Baggage screening is another tedious but important task that needs to be done at the airport, and AI has simplified the process. American Airlines once conducted an artificial intelligence app development competition, and Team Avatar became the winner for making an app that allows users to determine the size of their baggage at the airport.

Osaka Airport in Japan is planning to install the Syntech ONE 200, an AI technology developed to screen baggage across multiple passenger lanes. Such tools will not only automate the process of baggage screening but also help authorities detect illegal items effectively. The Syntech ONE 200 is compatible with the X-ray security system, and it increases the probability of identifying potential threats.

Assisting customers

AI can be used to assist customers at the airport, and it can help a company reduce its operational and labor costs at the same time.
Airline companies are now using AI technologies to help their customers resolve issues quickly by getting accurate information on upcoming flights on their internet-enabled devices. More than 52% of airline companies across the world plan to install AI-based tools to improve their customer service functions in the next five years. Artificial intelligence can answer various common customer questions, assisting with check-in requests, flight status, and more. Nowadays, artificial intelligence is also used in air cargo for purposes such as revenue management, safety, and maintenance, and it has shown impressive results to date.

Maintenance prediction

Airline companies are planning to implement AI technology to predict potential maintenance failures on aircraft. Leading aircraft manufacturer Airbus is taking measures to improve the reliability of aircraft maintenance. It is using Skywise, a cloud-based data storage system that helps the fleet collect and record a huge amount of real-time data. The use of AI in predictive maintenance analytics will pave the way for a systematic approach to how and when aircraft maintenance should be done. Nowadays you can see how top-rated airlines use artificial intelligence to make the process of maintenance easy and improve the user experience at the same time.

Pitfalls of using AI in aviation

Despite being considered the future of the aviation industry, AI has some pitfalls. For instance, it takes time to implement, and it cannot be used as an ideal tool for customer service. The recent incident of the Ethiopian Airlines Boeing 737 was an eye-opener, and it clearly represents the drawbacks of automated technology in the aviation sector. The Boeing 737 crashed a few minutes after it took off from the capital of Ethiopia, and the failure of the MCAS system was a key reason behind the fatal accident.

Also, AI is quite expensive; for example, if an airline company is planning to deploy a chatbot, it will have to invest more than $15,000. Thus, it would be hard for small companies to make the same investment, and this could create a barrier between small and big airlines in the future. As the market becomes highly competitive, big airlines may conquer the market, and small airlines might face an existential threat for this reason.

Conclusion

The use of artificial intelligence in aviation has made many tasks easy for airlines and airport authorities across the world, from identifying passengers to screening bags and providing fast, efficient customer care solutions. Unlike in the software industry, the risks of real-life harm are exponentially higher in the aviation industry. While other industries started using this technology long ago, the adoption of AI in aviation has been one of caution, and rightly so. As the aviation industry embraces the benefits of artificial intelligence and machine learning, it must also invest in putting in place checks and balances to identify, reduce, and eliminate harmful consequences of AI, whether intended or otherwise.

As Silicon Valley reels from ethical dilemmas, the aviation industry will do well to learn from Silicon Valley while making the transition to a smart future. The aviation industry, known for its rigorous safety measures and processes, may in fact have a thing or two to teach Silicon Valley when it comes to designing, adopting, and deploying AI systems into live systems with high-risk profiles.
Author Bio

Maria Brown is a content writer and blogger who handles social media optimization for 21Twelve Interactive. She believes in sharing her solid knowledge base with a focus on entrepreneurship and business. You can find her on Twitter.
What is interactive machine learning?

Amey Varangaonkar
23 Jul 2018
4 min read
Machine learning is a useful and effective tool to have when it comes to building prediction models or building useful data structures from an avalanche of data. Many ML algorithms are in use today for a variety of real-world use cases. Given a sample dataset, a machine learning model can give predictions with only a certain accuracy, which largely depends on the quality of the training data fed to it. Is there a way to increase the prediction accuracy by somehow involving humans in the process? The answer is yes, and the solution is called 'interactive machine learning'.

Why we need interactive machine learning

As we already discussed above, a model can give predictions only as good as the quality of the training data fed to it. If the quality of the training data is not good enough, the model might:

Take more time to learn before giving accurate predictions
Give predictions of very poor quality

This challenge can be overcome by involving humans in the machine learning process. By incorporating human feedback in the model training process, the model can be trained faster and more efficiently to give more accurate predictions. In the widely adopted machine learning approaches, including supervised and unsupervised learning, or even active learning for that matter, there is no way to include human feedback in the training process to improve the accuracy of predictions. In the case of supervised learning, for example, the data is already pre-labeled and is used without any actual inputs from the human during the training process. For this reason alone, the concept of interactive machine learning is seen by many machine learning and AI experts as a breakthrough.

How interactive machine learning works

Machine learning researchers Teng Lee, James Johnson, and Steve Cheng have suggested a novel way to include human inputs to improve the performance and predictions of machine learning models. It is called the 'Transparent Boosting Tree' algorithm, a very interesting approach that combines the advantages of machine learning and human inputs in the final decision-making process.

The Transparent Boosting Tree, or TBT for short, is an algorithm that visualizes the model and the prediction details of each step in the machine learning process to the user, takes their feedback, and incorporates it into the learning process. The ML model is in charge of updating the weights assigned to the inputs and filtering the information shown to the user for feedback. Once the feedback is received, it can be incorporated by the ML model as part of the learning process, thus improving it. A basic flowchart of the interactive machine learning process is as shown:

Interactive Machine Learning

More in-depth information on how interactive machine learning works can be found in their paper.
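As a toy illustration of the feedback loop (not the TBT algorithm itself), here is a sketch of an online learner whose mistaken predictions are corrected by a human-supplied label and folded back into the model; everything here is invented for demonstration:

```js
// Toy interactive learning loop: a tiny online perceptron that
// updates its weights whenever a human corrects a prediction.
function makeLearner(numFeatures, learningRate = 0.1) {
  const weights = new Array(numFeatures).fill(0);
  let bias = 0;

  const predict = (x) =>
    x.reduce((sum, xi, i) => sum + xi * weights[i], bias) >= 0 ? 1 : -1;

  // Called with the human-provided label when a prediction is wrong.
  const correct = (x, label) => {
    if (predict(x) !== label) {
      x.forEach((xi, i) => { weights[i] += learningRate * label * xi; });
      bias += learningRate * label;
    }
  };

  return { predict, correct };
}

// Simulated session: the "human" corrects the model's mistakes.
const learner = makeLearner(2);
const session = [
  { x: [1, 0], label: 1 },
  { x: [0, 1], label: -1 },
  { x: [1, 1], label: 1 },
];
for (const { x, label } of session) {
  const guess = learner.predict(x);
  if (guess !== label) learner.correct(x, label); // human feedback step
  console.log(`x=${x} guess=${guess} label=${label}`);
}
```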
What can interactive machine learning do for businesses

With the rising popularity and applications of AI across all industry verticals, humans may have a key role to play in the learning process of an algorithm, apart from just coding it. While observing the algorithm's own outputs or evaluations in the form of visualizations or plain predictions, humans can suggest ways to improve those predictions by giving feedback in the form of inputs such as labels, corrections, or rankings. This helps the models in two ways:

It increases prediction accuracy
It considerably shortens the time taken for the algorithm to learn

Both advantages can be invaluable to businesses as they look to incorporate AI and machine learning into their processes and look for faster and more accurate predictions. Interactive machine learning is still at a nascent stage, and we can expect more developments in the domain to surface in the coming days. Once production-ready, it will undoubtedly be a game-changer.

Read more

Active Learning: An approach to training machine learning models efficiently
Anatomy of an automated machine learning algorithm (AutoML)
How machine learning as a service is transforming cloud
Developer's guide to Software architecture patterns

Sugandha Lahoti
06 Aug 2018
11 min read
As we all know, patterns are a kind of simplified and smarter solution for a repetitive concern or recurring challenge in any field of importance. In the field of software engineering, there are primarily design, integration, and architecture patterns. In this article, we will cover the need for software patterns and describe the most prominent and dominant software architecture patterns.

This article is an excerpt from Architectural Patterns by Pethuru Raj, Anupama Raman, and Harihara Subramanian.

Why software patterns?

There is a bevy of noteworthy transformations happening in the IT space, especially in software engineering. The complexity of recent software solutions is continuously going up due to the continued evolution of business expectations. With complex software, not only does the software development activity become very difficult, but the software maintenance and enhancement tasks also become tedious and time-consuming. Software patterns come as a soothing factor for software architects, developers, and operators.

Types of software patterns

Several newer types of patterns are emerging in order to cater to different demands. This section throws some light on these.

An architecture pattern expresses a fundamental structural organization or schema for complex systems. It provides a set of predefined subsystems, specifies their unique responsibilities, and includes the decision-enabling rules and guidelines for organizing the relationships between them. The architecture pattern for a software system illustrates the macro-level structure for the whole software solution.

A design pattern provides a scheme for refining the subsystems or components of a system, or the relationships between them. It describes a commonly recurring structure of communicating components that solves a general design problem within a particular context. The design pattern for a software system prescribes the ways and means of building the software components.

There are other patterns, too. The dawn of the big data era mandates distributed computing, and the monolithic and massive nature of enterprise-scale applications demands microservices-centric applications. Here, application services need to be found and integrated in order to give an integrated result and view. Thus, there are integration-enabled patterns. Similarly, there are patterns for simplifying software deployment and delivery. Other complex actions are being addressed through the smart leverage of simple as well as composite patterns.

Software architecture patterns

Let's look at some of the prominent and dominant software architecture patterns.

Object-oriented architecture (OOA)

Objects are the fundamental and foundational building blocks for all kinds of software applications. Therefore, the object-oriented architectural style has become the dominant one for producing object-oriented software applications. Ultimately, a software system is viewed as a dynamic collection of cooperating objects, instead of a set of routines or procedural instructions. We know that there are proven object-oriented programming methods and enabling languages, such as C++, Java, and so on. The properties of inheritance, polymorphism, encapsulation, and composition provided by OOA come in handy in producing highly modular (highly cohesive and loosely coupled), usable, and reusable software applications. The object-oriented style is suitable if we want to encapsulate logic and data together in reusable components. Also, complex business logic that requires abstraction and dynamic behavior can effectively use OOA.
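As a quick, illustrative sketch of those properties - encapsulation, inheritance, and polymorphism - in code (the class names are invented):

```js
// Encapsulation: #balance is private state, hidden behind methods.
class Account {
  #balance = 0;

  deposit(amount) {
    if (amount <= 0) throw new Error('Deposit must be positive');
    this.#balance += amount;
  }

  get balance() {
    return this.#balance;
  }

  describe() {
    return `Account with balance ${this.balance}`;
  }
}

// Inheritance: a specialized account reuses and extends the base class.
class SavingsAccount extends Account {
  constructor(rate) {
    super();
    this.rate = rate;
  }

  // Polymorphism: overriding changes behavior behind the same interface.
  describe() {
    return `Savings account at ${this.rate * 100}% with balance ${this.balance}`;
  }
}

const accounts = [new Account(), new SavingsAccount(0.02)];
accounts.forEach((a) => { a.deposit(100); console.log(a.describe()); });
```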
Component-based assembly (CBD) architecture

Monolithic and massive applications can be partitioned into multiple interactive and smaller components. When components are found, bound, and composed, we get full-fledged software applications. The component-based style does not focus on issues such as communication protocols and shared state. Components are reusable, replaceable, substitutable, extensible, independent, and so on. Design patterns such as the dependency injection (DI) pattern or the service locator pattern can be used to manage dependencies between components and promote loose coupling and reuse. Such patterns are often used to build composite applications that combine and reuse components across multiple applications.

Aspect-oriented programming (AOP) aspects are another popular application building block. By deft maneuvering of this unit of development, different applications can be built and deployed. The AOP style aims to increase modularity by allowing the separation of cross-cutting concerns. AOP includes programming methods and tools that support the modularization of concerns at the level of the source code.

Agent-oriented software engineering (AOSE) is a programming paradigm where the construction of the software is centered on the concept of software agents. In contrast to the proven object-oriented programming, which has objects (providing methods with variable parameters) at its core, agent-oriented programming has externally specified agents with interfaces and messaging capabilities at its core. They can be thought of as abstractions of objects. Exchanged messages are interpreted by receiving agents in a way specific to their class of agents.

Domain-driven design (DDD) architecture

Domain-driven design is an object-oriented approach to designing software based on the business domain, its elements and behaviors, and the relationships between them. It aims to enable software systems that are a correct realization of the underlying business domain by defining a domain model expressed in the language of business domain experts. The domain model can be viewed as a framework from which solutions can then be readied and rationalized. DDD is good if we have a complex domain and we wish to improve communication and understanding within the development team. DDD can also be an ideal approach if we have large and complex enterprise data scenarios that are difficult to manage using existing techniques.

Client/server architecture

This pattern segregates the system into two main applications, where the client makes requests to the server. In many cases, the server is a database with application logic represented as stored procedures. This pattern helps to design distributed systems that involve a client system, a server system, and a connecting network. The main benefits of the client/server architecture pattern are:

Higher security: All data gets stored on the server, which generally offers greater control of security than client machines.
Centralized data access: Because data is stored only on the server, access and updates to the data are far easier to administer than in other architectural styles.
Ease of maintenance: The server system can be a single machine or a cluster of multiple machines. The server application and the database can be made to run on a single machine or replicated across multiple machines to ensure easy scalability and high availability.
However, the traditional two-tier client/server architecture pattern has numerous disadvantages. Firstly, the tendency to keep both application and data on a server can negatively impact system extensibility and scalability. Secondly, the server can be a single point of failure; reliability is the main worry here. To address these issues, the client/server architecture has evolved into the more general three-tier (or N-tier) architecture. This multi-tier architecture not only surmounts the issues just mentioned but also brings forth a set of new benefits.

Multi-tier distributed computing architecture

The two-tier architecture is neither flexible nor extensible. Hence, the multi-tier distributed computing architecture has attracted a lot of attention. The application components can be deployed on multiple machines (these can be co-located or geographically distributed). Application components can be integrated through messages or remote procedure calls (RPCs), remote method invocations (RMIs), the common object request broker architecture (CORBA), enterprise Java beans (EJBs), and so on. The distributed deployment of application services ensures high availability, scalability, manageability, and so on. Web, cloud, mobile, and other customer-facing applications are deployed using this architecture.

Thus, based on the business requirements and the application complexity, IT teams can choose the simple two-tier client/server architecture or the advanced N-tier distributed architecture to deploy their applications. These patterns are for simplifying the deployment and delivery of software applications to their subscribers and users.

Layered/tiered architecture

This pattern is an improvement over the client/server architecture pattern and is the most commonly used architectural pattern. Typically, an enterprise software application comprises three or more layers: a presentation/user interface layer, a business logic layer, and a data persistence layer. The presentation layer is primarily used for user interface applications (thick clients) or web browsers (thin clients). With the fast proliferation of mobile devices, mobile browsers are also being attached to the presentation layer. Such tiered segregation comes in handy in managing and maintaining each layer accordingly. The power of plug and play gets realized with this approach, and additional layers can be fitted in as needed. There are model view controller (MVC) pattern-compliant frameworks hugely simplifying enterprise-grade and web-scale applications; MVC is a web application architecture pattern.

The main advantage of the layered architecture is the separation of concerns. That is, each layer can focus solely on its role and responsibility. The layered and tiered pattern makes the application:

Maintainable
Testable
Easy to assign specific and separate roles
Easy to update and enhance layers separately

This architecture pattern is good for developing web-scale, production-grade, and cloud-hosted applications quickly and in a risk-free fashion. When there are business and technology changes, this layered architecture comes in handy for embedding newer things in order to meet varying business requirements.
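A minimal sketch of that separation of concerns in a Node.js-flavored codebase; the module and route names are invented for illustration:

```js
// Data persistence layer: knows only how to fetch and store records.
const userRepository = {
  findById: async (id) => ({ id, name: 'Ada', active: true }), // stubbed
};

// Business logic layer: rules live here, with no HTTP or SQL details.
const userService = {
  async getActiveUser(id) {
    const user = await userRepository.findById(id);
    if (!user.active) throw new Error('User is deactivated');
    return user;
  },
};

// Presentation layer: translates HTTP requests into service calls.
const express = require('express');
const app = express();

app.get('/users/:id', async (req, res) => {
  try {
    res.json(await userService.getActiveUser(req.params.id));
  } catch (err) {
    res.status(404).json({ error: err.message });
  }
});

app.listen(3000);
```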
The request-and-reply notion gives way to the fire-and-forget tenet: communication becomes asynchronous, and there is no need for the participating applications to be available online all the time. EDA is typically based on an asynchronous message-driven communication model to propagate information throughout an enterprise. It supports a more natural alignment with an organization's operational model by describing business activities as series of events. EDA does not bind functionally disparate systems and teams into the same centralized management model. EDA ultimately leads to highly decoupled systems; the common issues introduced by system dependencies get eliminated through the adoption of the proven EDA approach.

We have seen various forms of events used in different areas. There are business and technical events. Systems update their status and condition by emitting events to be captured and subjected to a variety of investigations in order to precisely understand the prevailing situations. Submitting a web form or clicking a hyperlink generates events to be captured. Incremental database synchronization mechanisms, RFID readings, email messages, short message service (SMS), instant messaging, and so on are events not to be taken lightly. There are event processing engines and message-oriented middleware (MoM) solutions, such as message queues and brokers, to collect and store event data and messages. Millions of events can be collected, parsed, and delivered across multiple topics through these MoM solutions. As event sources/producers publish notifications, event receivers can choose to listen to or filter out specific events and make proactive decisions in real time on what to do next.

The EDA style is built on the fundamental aspects of event notifications to facilitate immediate information dissemination and reactive business process execution. In an EDA environment, information can be propagated to all the services and applications in real time. The EDA pattern enables highly reactive enterprise applications; real-time analytics is the new normal with the surging popularity of the EDA pattern.

Service-oriented architecture (SOA)

With the arrival of service paradigms, software packages and libraries are being developed as collections of services. Services are capable of running independently of the underlying technology, and they can be implemented using any programming or scripting language. Services are self-defined, autonomous, interoperable, publicly discoverable, assessable, accessible, reusable, and composable. Services interact with one another through messaging. There are service providers/developers and consumers/clients. Every service has two parts: the interface and the implementation. The interface is the single point of contact for requesting services. Interfaces give the required separation between services: all kinds of deficiencies and differences in service implementation get hidden by the service interface.

Precisely speaking, SOA enables application functionality to be provided as a set of services, and it enables the creation of personal as well as professional applications that make use of those software services. In short, SOA is for service-enablement and for the service-based integration of monolithic and massive applications. The complexity of enterprise process/application integration gets moderated through the smart leverage of the service paradigm.
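As a closing illustration, here is a minimal sketch of the interface/implementation split that SOA prescribes. The CurrencyConversionService name and the fixed rate table are invented; a real deployment would expose the interface over SOAP, REST, or a messaging protocol rather than an in-process call.

```java
// The interface: the single point of contact for requesting the service.
interface CurrencyConversionService {
    double convert(String from, String to, double amount);
}

// The implementation: hidden behind the interface, free to change.
class FixedRateConversionService implements CurrencyConversionService {
    @Override
    public double convert(String from, String to, double amount) {
        // A stand-in rate table; a real service would look rates up.
        double rate = from.equals("USD") && to.equals("EUR") ? 0.9 : 1.0;
        return amount * rate;
    }
}

public class SoaDemo {
    public static void main(String[] args) {
        // The consumer programs against the contract only; the
        // implementation behind it can be swapped or re-hosted.
        CurrencyConversionService service = new FixedRateConversionService();
        System.out.println(service.convert("USD", "EUR", 100.0)); // 90.0
    }
}
```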
To summarize, we detailed the prominent and dominant software architecture patterns and how they are used to produce and run enterprise-class, production-grade software applications. To know more about patterns associated with object-oriented, component-based, client/server, and cloud architectures, grab the book Architectural Patterns.

Why we need Design Patterns?
Implementing 5 Common Design Patterns in JavaScript (ES8)
An Introduction to Node.js Design Patterns
6 common challenges faced by Android App developers

Guest Contributor
21 Sep 2018
5 min read
The primary target for businesses when working on mobile apps is the Android platform, thanks to the massive market share the mobile operating system holds. Its popularity can be attributed to the fact that it is open source and is regularly updated with new enhancements and features.

Android devices generally tend to differ in their hardware features even when powered by the same version of the Android OS. This is why it is essential that developers create mobile apps capable of targeting a diverse range of devices running different versions of Android. During the various stages of planning, developing, and testing, developers need to focus comprehensively on an app's functionality, accessibility, usability, performance, and security so that users stay engaged regardless of their choice of device. They also need to look for ways to make their apps deliver a more personalized user experience across various devices and operating system versions. Furthermore, developers need to understand and find solutions to the common challenges involved in Android app development.

Common Challenges Android App Developers Face

1. Hardware Features

The Android OS is unlike any other mobile operating system. For one thing, it is an open source system: Google gives manufacturers the leeway to customize the operating system to their specific needs, and there are no regulations on the devices being released by the different manufacturers. As a result, you can find various Android devices with different hardware features running on the same Android version. Two smartphones running the latest Android version, for example, may differ in screen resolution, camera, screen size, and other hardware characteristics. During Android app development, developers need to account for all of this to ensure the application delivers a personalized experience to each user.

2. Lack of Uniform User Interface Design Rules

Since Google does not enforce a standard UI (user interface) design process for mobile app developers, most developers don't follow any standard UI development rules or procedures. Because developers create custom UIs in their preferred way, a lot of apps tend to function or look different across different devices. This diversity and incompatibility of the UI usually affects the user experience that the Android app delivers. Smart developers prefer to go for a responsive layout that keeps the UI consistent across different devices. Moreover, developers need to test the UI of the app extensively by combining emulators and real mobile devices. Designing a UI that makes the app deliver the same user experience across varying Android devices is one of the more daunting challenges developers face.

3. API Incompatibility

A lot of developers make use of third-party APIs to enhance the functionality and interoperability of a mobile app. Unfortunately, not all third-party APIs available for Android app development are of high quality. Some APIs were created for a particular Android version and will not work on devices running a different version of the operating system. Developers usually have to come up with ways to make a single API work across all Android versions, a task they often find to be very challenging; a minimal sketch of guarding a version-specific call appears below.
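One common way to keep a single binary working across Android versions is to guard newer platform calls behind a runtime check on Build.VERSION.SDK_INT. The sketch below shows this for notification channels, which only exist from Android 8.0 (API 26) onward; the helper class name is invented for illustration.

```java
import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.content.Context;
import android.os.Build;

// Hypothetical helper: guard a newer platform API behind a runtime
// version check so one APK runs correctly across Android versions.
public final class NotificationChannelHelper {
    private NotificationChannelHelper() {}

    public static void ensureChannel(Context context, String channelId, String name) {
        // Notification channels were introduced in Android 8.0 (API 26).
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            NotificationManager manager =
                    context.getSystemService(NotificationManager.class);
            manager.createNotificationChannel(new NotificationChannel(
                    channelId, name, NotificationManager.IMPORTANCE_DEFAULT));
        }
        // On older versions there is nothing to create; notifications
        // simply fall back to the pre-channel behavior.
    }
}
```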
4. Security Flaws

As previously mentioned, Android is open source software, and because of that, manufacturers find it easy to customize it to their desired specifications. However, this openness and the massive market size make Android a frequent target for security attacks. There have been several instances where the security of millions of Android devices has been affected by flaws and bugs such as mRST, Stagefright, FakeID, 'Certifi-gate,' TowelRoot, and Installer Hijacking. Developers need to include robust security features in their applications and utilize the latest encryption mechanisms to keep user information secure and out of the hands of hackers.

5. Search Engine Visibility

The latest data from Statista shows that the Google Play Store contains more mobile apps than any other app store. Additionally, a large number of Android users prefer free apps to paid apps, which is why developers need to promote their mobile applications to increase download numbers and employ app monetization options. The best way to promote an app to its target audience is a comprehensive digital marketing strategy, and most developers engage digital marketing professionals to promote their apps aggressively.

6. Patent Issues

Google doesn't enforce any guidelines for evaluating the quality of new apps submitted to the Play Store. This lack of a quality assessment guideline causes a lot of patent-related issues for developers; to avoid patent disputes, some developers have to modify and redesign their apps later.

Based on my personal experience, I have tried to cover the general challenges faced by Android app developers. I'm sure that staying wary of these challenges will help developers build successful apps in the most hassle-free way.

Author Bio

Harnil Oza is the CEO of Hyperlink InfoSystem, one of the leading app development companies in New York, USA and India, which delivers mobile solutions mainly on the Android and iOS platforms. He regularly contributes his knowledge on leading blogging sites.

LEGO launches BrickHeadz Builder AR, a new and free Android app to bring bricks and toys to life
How Android app developers can convert iPhone apps
How to Secure and Deploy an Android App