
Tech Guides - Artificial Intelligence

170 Articles

The rise of machine learning in the investment industry

Natasha Mathur
15 Feb 2019
13 min read
The investment industry has evolved dramatically over the last several decades and continues to do so amid increased competition, technological advances, and a challenging economic environment. In this article, we will review several key trends that have shaped the investment environment in general, and the context for algorithmic trading more specifically.

This article is an excerpt from the book 'Hands-On Machine Learning for Algorithmic Trading' written by Stefan Jansen. The book explores the strategic perspective, conceptual understanding, and practical tools needed to add value by applying ML to the trading and investment process.

The trends that have propelled algorithmic trading and ML to their current prominence include:

  • Changes in the market microstructure, such as the spread of electronic trading and the integration of markets across asset classes and geographies
  • The development of investment strategies framed in terms of risk-factor exposure, as opposed to asset classes
  • The revolutions in computing power, data generation and management, and analytic methods
  • The outperformance of algorithmic trading pioneers relative to human, discretionary investors

In addition, the financial crises of 2001 and 2008 have affected how investors approach diversification and risk management, and have given rise to low-cost passive investment vehicles in the form of exchange-traded funds (ETFs). Amid low yields and low volatility after the 2008 crisis, cost-conscious investors shifted $2 trillion from actively managed mutual funds into passively managed ETFs. Competitive pressure is also reflected in lower hedge fund fees, which dropped from the traditional 2% annual management fee and 20% take of profits to an average of 1.48% and 17.4%, respectively, in 2017. Let's have a look at how ML has come to play a strategic role in algorithmic trading.
Factor investing and smart beta funds

The return provided by an asset is a function of the uncertainty, or risk, associated with the financial investment. An equity investment implies, for example, assuming a company's business risk, and a bond investment implies assuming default risk. To the extent that specific risk characteristics predict returns, identifying and forecasting the behavior of these risk factors becomes a primary focus when designing an investment strategy. It yields valuable trading signals and is the key to superior active-management results. The industry's understanding of risk factors has evolved very substantially over time and has impacted how ML is used for algorithmic trading.

Modern Portfolio Theory (MPT) introduced the distinction between idiosyncratic and systematic sources of risk for a given asset. Idiosyncratic risk can be eliminated through diversification, but systematic risk cannot. In the early 1960s, the Capital Asset Pricing Model (CAPM) identified a single factor driving all asset returns: the return on the market portfolio in excess of T-bills. The market portfolio consists of all tradable securities, weighted by their market value. The systematic exposure of an asset to the market is measured by beta, the covariance between the returns of the asset and the market portfolio, divided by the variance of the market portfolio's returns.

The recognition that the risk of an asset does not depend on the asset in isolation, but rather on how it moves relative to other assets and the market as a whole, was a major conceptual breakthrough. In other words, assets do not earn a risk premium because of their specific, idiosyncratic characteristics, but because of their exposure to underlying factor risks. However, a large body of academic literature and long investing experience have disproved the CAPM prediction that asset risk premiums depend only on their exposure to a single factor measured by the asset's beta. Instead, numerous additional risk factors have since been discovered.
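As a rough illustration of the beta calculation, here is a minimal sketch using simulated excess returns (the "true" beta of 1.3 and all figures are made up for illustration; a real application would use historical asset and market returns in excess of the T-bill rate):

```python
import numpy as np

# Simulated daily excess returns for illustration only (not real market data).
rng = np.random.default_rng(0)
market = rng.normal(0.0005, 0.01, 250)             # simulated market excess returns
asset = 1.3 * market + rng.normal(0, 0.005, 250)   # asset with a "true" beta of 1.3

# CAPM beta: covariance of asset and market returns over the market's variance.
beta = np.cov(asset, market, ddof=1)[0, 1] / np.var(market, ddof=1)
print(round(beta, 2))  # recovers a value close to the simulated beta of 1.3
```

Equivalently, beta is the slope coefficient of a univariate regression of asset excess returns on market excess returns, which is how it is usually estimated in practice.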
A factor is a quantifiable signal, attribute, or any variable that has historically correlated with future stock returns and is expected to remain correlated in the future. These risk factors were labeled anomalies since they contradicted the Efficient Market Hypothesis (EMH), which held that market equilibrium would always price securities according to the CAPM, so that no other factors should have predictive power. The economic theory behind factors can be either rational, where factor risk premiums compensate for low returns during bad times, or behavioral, where agents fail to arbitrage away excess returns.

Well-known anomalies include the value, size, and momentum effects, which help predict returns while controlling for the CAPM market factor:

  • The size effect rests on small firms systematically outperforming large firms, discovered by Banz (1981) and Reinganum (1981).
  • The value effect (Basu 1982) states that firms with low valuation metrics outperform. It suggests that firms with low price multiples, such as the price-to-earnings or price-to-book ratios, perform better than their more expensive peers (as suggested by the inventors of value investing, Benjamin Graham and David Dodd, and popularized by Warren Buffett).
  • The momentum effect, discovered in the late 1980s by, among others, Clifford Asness, the founding partner of AQR, states that stocks with good momentum, in terms of recent 6-12 month returns, have higher returns going forward than poor-momentum stocks with similar market risk.

Researchers also found that value and momentum factors explain returns for stocks outside the US, as well as for other asset classes, such as bonds, currencies, and commodities, along with additional risk factors. In fixed income, the value strategy is called riding the yield curve and is a form of the duration premium. In commodities, it is called the roll return, with a positive return for an upward-sloping futures curve and a negative return otherwise.
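The momentum effect based on recent 6-12 month returns is commonly implemented as a "12-minus-1" signal: the return over months t-12 through t-1, skipping the most recent month. A minimal sketch with simulated monthly prices (the tickers and all numbers are made up for illustration):

```python
import numpy as np
import pandas as pd

# Simulated monthly prices for four hypothetical tickers (illustration only).
rng = np.random.default_rng(1)
rets = rng.normal(0.005, 0.05, (36, 4))                 # 36 months of log returns
prices = pd.DataFrame(100 * np.exp(rets.cumsum(axis=0)),
                      columns=["AAA", "BBB", "CCC", "DDD"])

# 12-minus-1 momentum: return over months t-12..t-1, skipping the latest
# month to sidestep the short-term reversal effect.
momentum = prices.shift(1) / prices.shift(12) - 1

# Rank stocks cross-sectionally each month: rank 1 = strongest momentum.
# A long-short portfolio would buy the top-ranked and short the bottom-ranked.
ranks = momentum.rank(axis=1, ascending=False)
print(ranks.iloc[-1])
```

The first 12 rows of the signal are necessarily missing, since a full year of history is needed before the first value can be computed.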
In foreign exchange, the value strategy is called carry. There is also an illiquidity premium: securities that are more illiquid trade at low prices and have high average excess returns relative to their more liquid counterparts. Bonds with higher default risk tend to have higher returns on average, reflecting a credit risk premium. Since investors are willing to pay for insurance against high volatility when returns tend to crash, sellers of volatility protection in options markets tend to earn high returns.

Multifactor models define risks in broader and more diverse terms than just the market portfolio. In 1976, Stephen Ross proposed the Arbitrage Pricing Theory, which asserted that investors are compensated for multiple systematic sources of risk that cannot be diversified away. The three most important macro factors are growth, inflation, and volatility, in addition to productivity, demographic, and political risk. In 1992, Eugene Fama and Kenneth French combined the equity risk factors size and value with a market factor into a single model that better explained cross-sectional stock returns. They later extended the model to include bond risk factors in order to simultaneously explain returns for both asset classes.

A particularly attractive aspect of risk factors is their low or negative correlation. Value and momentum risk factors, for instance, are negatively correlated, reducing risk and increasing risk-adjusted returns above and beyond the benefit implied by the individual factors. Furthermore, using leverage and long-short strategies, factor strategies can be combined into market-neutral approaches. The combination of long positions in securities exposed to positive risks with underweight or short positions in securities exposed to negative risks allows for the collection of dynamic risk premiums.
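Factor exposures in models like Fama-French's are typically estimated by an ordinary least squares regression of a fund's excess returns on the factor returns. A minimal sketch, using simulated factor series rather than the published Fama-French data (all loadings and figures below are made up for illustration):

```python
import numpy as np

# Simulated monthly factor and fund excess returns (illustration only).
rng = np.random.default_rng(2)
n = 120                             # 10 years of monthly observations
mkt = rng.normal(0.006, 0.04, n)    # market excess return
smb = rng.normal(0.002, 0.03, n)    # size factor (small minus big)
hml = rng.normal(0.003, 0.03, n)    # value factor (high minus low)
fund = 0.001 + 0.9 * mkt + 0.4 * smb - 0.2 * hml + rng.normal(0, 0.01, n)

# OLS regression of fund returns on the three factors plus an intercept.
# The intercept estimates alpha; the slopes estimate the factor loadings.
X = np.column_stack([np.ones(n), mkt, smb, hml])
coefs, *_ = np.linalg.lstsq(X, fund, rcond=None)
alpha, beta_mkt, beta_smb, beta_hml = coefs
print([round(c, 2) for c in coefs])  # loadings near 0.0, 0.9, 0.4, -0.2
```

A fund whose estimated alpha is indistinguishable from zero earns its returns through factor exposure rather than skill, which is exactly the argument behind cheaper smart beta products.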
As a result, the factors that explained returns above and beyond the CAPM were incorporated into investment styles that tilt portfolios in favor of one or more factors, and assets began to migrate into factor-based portfolios. The 2008 financial crisis underlined how asset-class labels can be highly misleading and create a false sense of diversification when investors do not look at the underlying factor risks, as asset classes came crashing down together.

Over the past several decades, quantitative factor investing has evolved from a simple approach based on two or three styles to multifactor smart or exotic beta products. Smart beta funds crossed $1 trillion AUM in 2017, testifying to the popularity of this hybrid investment strategy, which combines active and passive management. Smart beta funds take a passive strategy but modify it according to one or more factors, for example by favoring cheaper stocks or screening them according to dividend payouts, to generate better returns. This growth has coincided with increasing criticism of the high fees charged by traditional active managers, as well as heightened scrutiny of their performance.

The ongoing discovery and successful forecasting of risk factors that, either individually or in combination with other risk factors, significantly impact future asset returns across asset classes is a key driver of the surge in ML in the investment industry.

Algorithmic pioneers outperform humans at scale

The track record and growth of Assets Under Management (AUM) of firms that spearheaded algorithmic trading have played a key role in generating investor interest and subsequent industry efforts to replicate their success. Systematic funds differ from HFT in that trades may be held significantly longer, seeking to exploit arbitrage opportunities rather than advantages from sheer speed.
Systematic strategies that mostly or exclusively rely on algorithmic decision-making were most famously introduced by mathematician James Simons, who founded Renaissance Technologies in 1982 and built it into the premier quant firm. Its secretive Medallion Fund, which is closed to outsiders, has earned an estimated annualized return of 35% since 1982.

DE Shaw, Citadel, and Two Sigma, three of the most prominent quantitative hedge funds that use systematic strategies based on algorithms, rose to the all-time top-20 performers for the first time in 2017, in terms of total dollars earned for investors after fees since inception. DE Shaw, founded in 1988 and with $47 billion AUM in 2018, joined the list at number 3. Citadel, started in 1990 by Kenneth Griffin, manages $29 billion and ranks 5. Two Sigma, started only in 2001 by DE Shaw alumni John Overdeck and David Siegel, has grown from $8 billion AUM in 2011 to $52 billion in 2018. Bridgewater, started in 1975 and with over $150 billion AUM, continues to lead due to its Pure Alpha Fund, which also incorporates systematic strategies.

Similarly, on Institutional Investor's 2017 Hedge Fund 100 list, five of the top six firms rely largely or completely on computers and trading algorithms to make investment decisions, and all of them have been growing their assets in an otherwise challenging environment. Several quantitatively focused firms climbed several ranks and in some cases grew their assets by double-digit percentages. Number 2-ranked Applied Quantitative Research (AQR) grew its hedge fund assets 48% in 2017 to $69.7 billion and managed $187.6 billion firm-wide.

Among all hedge funds ranked by compounded performance over the last three years, the quant-based funds run by Renaissance Technologies achieved ranks 6 and 24, Two Sigma rank 11, DE Shaw ranks 18 and 32, and Citadel ranks 30 and 37. Beyond the top performers, algorithmic strategies have worked well in the last several years.
In the past five years, quant-focused hedge funds gained about 5.1% per year, while the average hedge fund rose 4.3% per year in the same period.

ML-driven funds attract $1 trillion AUM

The familiar three revolutions in computing power, data, and ML methods have made the adoption of systematic, data-driven strategies not only more compelling and cost-effective but a key source of competitive advantage. As a result, algorithmic approaches are finding wider application not only in the hedge-fund industry that pioneered these strategies but across a broader range of asset managers and even passively managed vehicles such as ETFs. In particular, predictive analytics using machine learning and algorithmic automation play an increasingly prominent role in all steps of the investment process across asset classes, from idea generation and research to strategy formulation and portfolio construction, trade execution, and risk management.

Estimates of industry size vary because there is no objective definition of a quantitative or algorithmic fund, and many traditional hedge funds, and even mutual funds and ETFs, are introducing computer-driven strategies or integrating them into a discretionary environment in a human-plus-machine approach. Morgan Stanley estimated in 2017 that algorithmic strategies had grown at 15% per year over the previous six years and controlled about $1.5 trillion between hedge funds, mutual funds, and smart beta ETFs. Other reports suggest the quantitative hedge fund industry was about to exceed $1 trillion AUM, nearly doubling its size since 2010 amid outflows from traditional hedge funds. In contrast, total hedge fund industry capital hit $3.21 trillion, according to the latest global Hedge Fund Research report. The market research firm Preqin estimates that almost 1,500 hedge funds make a majority of their trades with help from computer models. Quantitative hedge funds are now responsible for 27% of all US stock trades by investors, up from 14% in 2013.
But many use data scientists, or quants, who, in turn, use machines to build large statistical models (WSJ). In recent years, however, funds have moved toward true ML, where artificially intelligent systems can analyze large amounts of data at speed and improve themselves through such analyses. Recent examples include Rebellion Research, Sentient, and Aidyia, which rely on evolutionary algorithms and deep learning to devise fully automatic Artificial Intelligence (AI)-driven investment platforms. From the core hedge fund industry, the adoption of algorithmic strategies has spread to mutual funds and even passively managed exchange-traded funds in the form of smart beta funds, and to discretionary funds in the form of quantamental approaches.

The emergence of quantamental funds

Two distinct approaches have evolved in active investment management: systematic (or quant) and discretionary investing. Systematic approaches rely on algorithms for a repeatable and data-driven approach to identify investment opportunities across many securities; in contrast, a discretionary approach involves an in-depth analysis of a smaller number of securities. These two approaches are becoming more similar as fundamental managers take more data-science-driven approaches.

Even fundamental traders now arm themselves with quantitative techniques, accounting for $55 billion of systematic assets, according to Barclays. Agnostic to specific companies, quantitative funds trade patterns and dynamics across a wide swath of securities. Quants now account for about 17% of total hedge fund assets, data compiled by Barclays shows. Point72 Asset Management, with $12 billion in assets, has been shifting about half of its portfolio managers to a man-plus-machine approach. Point72 is also investing tens of millions of dollars into a group that analyzes large amounts of alternative data and passes the results on to traders.
Investments in strategic capabilities

Rising investments in related capabilities, namely technology, data, and, most importantly, skilled humans, highlight how significant algorithmic trading using ML has become for competitive advantage, especially in light of the rising popularity of passive, indexed investment vehicles, such as ETFs, since the 2008 financial crisis. Morgan Stanley noted that only 23% of its quant clients say they are not considering or not already using ML, down from 44% in 2016.

Guggenheim Partners LLC built what it calls a supercomputing cluster for $1 million at the Lawrence Berkeley National Laboratory in California to help crunch numbers for Guggenheim's quant investment funds. Electricity for the computers costs another $1 million a year. AQR is a quantitative investment group that relies on academic research to identify and systematically trade factors that have, over time, proven to beat the broader market. The firm used to eschew the purely computer-powered strategies of quant peers such as Renaissance Technologies or DE Shaw. More recently, however, AQR has begun to seek profitable patterns in markets by using ML to parse through novel datasets, such as satellite pictures of shadows cast by oil wells and tankers.

The leading firm BlackRock, with over $5 trillion AUM, also bets on algorithms to beat discretionary fund managers by heavily investing in SAE, a systematic trading firm it acquired during the financial crisis. Franklin Templeton bought Random Forest Capital, a debt-focused, data-led investment company, for an undisclosed amount, hoping that its technology can support the wider asset manager.

We looked at how ML plays a role in different industry trends around algorithmic trading. If you want to learn more about the design and execution of algorithmic trading strategies, and use cases of ML in algorithmic trading, be sure to check out the book 'Hands-On Machine Learning for Algorithmic Trading'.
  • Using machine learning for phishing domain detection [Tutorial]
  • Anatomy of an automated machine learning algorithm (AutoML)
  • 10 machine learning algorithms every engineer needs to know


Do you need artificial intelligence and machine learning expertise in house?

Guest Contributor
22 Jan 2019
7 min read
Developing artificial intelligence expertise is a challenge. There's a huge global demand for practitioners with the right skills and knowledge, and a lack of people who can actually deliver what's needed. It's difficult because many of the most talented engineers are being hired by the planet's leading tech companies on salaries that simply aren't realistic for many organizations. Ultimately, you have two options: form an in-house artificial intelligence development team, or choose an external software development team or consultant with proven artificial intelligence expertise. Let's take a closer look at each strategy.

Building an in-house AI development team

If you want to develop your own AI capabilities, you will need to bring in strong technical skills in machine learning. Since recruiting experts in this area isn't an easy task, upskilling your current in-house development team may be an option. However, you will need to be confident that your team has the knowledge and attitude to develop those skills. Of course, it's also important to remember that a team building artificial intelligence requires a range of skills and areas of expertise. If you can see how your team could evolve in that way, you're halfway to solving your problem.

AI experts you need for building a project

  • Big Data engineers: Before analyzing data, you need to collect, organize, and process it. AI is usually based on big data, so you need engineers who have experience working with structured and unstructured data and can build a secure data platform. They should have sound knowledge of Hadoop, Spark, R, Hive, Pig, and other Big Data technologies.
  • Data scientists: Data scientists are a vital part of your AI team. They work their magic with data: building models and investigating, analyzing, and interpreting it. They leverage data mining and other techniques to surface hidden insights and solve business problems.
  • NLP specialists: A lot of AI projects involve Natural Language Processing, so you will probably need NLP specialists. NLP allows computers to understand and translate human language, serving as a bridge between human communication and machine interpretation.
  • Machine learning engineers: These specialists utilize machine learning libraries and deploy ML solutions into production. They take care of the maintainability and scalability of data science code.
  • Computer vision engineers: They specialize in image recognition, correlating an image to a particular metric instead of correlating metrics to metrics. For example, computer vision is used for modeling objects or environments (medical image analysis), identification tasks (a species identification system), and process control (industrial robots).
  • Speech recognition engineers: You will need these experts if you want to build your own speech recognition system. Speech recognition can be very useful in telecommunication services, in-car systems, medical documentation, and education. For instance, it is used in language learning for practicing pronunciation.

Partnering with an AI solution provider

If you realize that recruiting and building your own in-house AI team is too difficult and expensive, you can engage an external AI provider. Such an approach helps companies keep the focus on their core expertise and avoid the headache of recruiting engineers and setting up the team. It also allows them to kick off the project much faster and thus gain a competitive advantage.

Factors to consider when choosing an artificial intelligence solution provider

AI engineering experience

Due to the huge popularity of AI these days, many companies claim to be professional AI development providers without practical experience. Hence, it's extremely important to do extensive research. Firstly, you should study the portfolio and case studies of the company.
Find out which AI, machine learning, or data science projects your potential vendor has worked on and what kind of artificial intelligence solutions the company has delivered. For instance, you may check out these European AI development companies and the products they developed. Also, make sure a provider has experience in the types of machine learning algorithms (supervised, unsupervised, and reinforcement), data structures and algorithms, computer vision, NLP, etc. that are relevant to your project needs.

Expertise in AI technologies

Artificial Intelligence covers a multitude of different technologies, frameworks, and tools. Make sure your external engineering team consists of professional data scientists and data engineers who can solve your business problems. Building the AI team and selecting the necessary skill set might be challenging for businesses that have no internal AI expertise. Therefore, ask a vendor to provide tech experts or delivery managers who will advise you on team composition and help you hire the right people.

Capacity to scale the team

When choosing a team, you should consider not only your primary needs but also the potential growth of your business. If you expect your company to scale up, you'll need more engineering capacity. Therefore, take into account your partner's ability to ramp up the team in the future. Also, consider factors such as the vendor's employer image and retention rate, since your ability to attract top AI talent and keep them on your project will largely depend on these.

A suitable cooperation model

It is essential to choose an AI company with a cooperation model that fits your business requirements. The most popular cooperation models are Fixed Price, Time and Material, and Dedicated Development Team.
Within the fixed price model, all the requirements and the scope of work are set from the start, and you as a customer need to have them described down to the smallest detail, as it will be extremely difficult to make change requests during the project. However, it is not the best option for AI projects, since they involve a lot of R&D and it is difficult to define everything at the initial stage. The time and material model is best for small projects where you don't need the specialists to be fully dedicated to your project. This is not the best choice for AI development either, as the hourly rates of AI engineers are extremely high and the whole project would cost you a fortune under this type of contract. To add more flexibility yet keep control over the project budget, it is better to choose a dedicated development team model or staff augmentation. It will allow you to change the requirements when needed and retain control over your team. With this type of engagement, you will be able to keep the knowledge within your team and develop your AI expertise, as the developers will work exclusively for you.

Conclusion

If you have to deal with the challenge of building AI expertise in your company, there are two possible ways to go. First, you can attract local AI talent and build the expertise in-house. Then you have to assemble a team of data scientists, data engineers, and other specialists depending on your needs. However, developing AI expertise in-house is always time- and cost-consuming, given the shortage of well-qualified machine learning specialists and their superlative salary expectations. The other option is to partner with an AI development vendor and hire an extended team of engineers. In this case, you have to consider a number of factors, such as the company's experience in delivering AI solutions, its ability to allocate the necessary resources, its technological expertise, and its capability to satisfy your business requirements.
Author Bio

Romana Gnatyk is Content Marketing Manager at N-IX, passionate about software development. She writes insightful content on various IT topics, including software product development, mobile app development, artificial intelligence, blockchain, and different technologies.

  • Researchers introduce a machine learning model where the learning cannot be proved
  • "All of my engineering teams have a machine learning feature on their roadmap" – Will Ballard talks artificial intelligence in 2019 [Interview]
  • Apple ups its AI game; promotes John Giannandrea as SVP of machine learning


How are Mobile apps transforming the healthcare industry?

Guest Contributor
15 Jan 2019
5 min read
Mobile app development has taken over and completely rewritten the healthcare industry. According to Healthcare Mobility Solutions reports, the mobile healthcare application market is expected to be worth more than $84 million by the year 2020. These mobile applications are not just limited to use by patients but are also massively used by doctors and nurses.

As technology evolves, it simultaneously opens up the possibility of being used in multiple ways. Similar has been the journey of healthcare mobile app development, which originated from the latest trends in technology and has made its way to being an industry in itself. The technological trends that have helped build mobile apps for the healthcare industry are:

Blockchain

You probably know blockchain technology, thanks to all the cryptocurrency rage of recent years. The blockchain is basically a peer-to-peer database that keeps a verified record of all transactions, or any other information that one needs to track and have accessible to a large community. The healthcare industry can use this technology to record the medical history of patients and store it electronically, in an encrypted form that cannot be altered or hacked into. Blockchain succeeds where a lot of health applications fail: the secure retention of patient data.

The Internet of Things

The Internet of Things (IoT) is all about connectivity. It is a way of interconnecting electronic devices, software, applications, etc., to ensure easy access and management across platforms. The IoT will assist medical professionals in gaining access to valuable patient information so that doctors can monitor the progress of their patients. This makes treatment of the patient easier and more closely monitored, as doctors can access the patient's current profile anywhere and suggest treatment, medicine, and dosages.

Augmented Reality

From the video gaming industry, Augmented Reality has made its way to the medical sector.
AR refers to the creation of an interactive experience of a real-world environment through the superimposition of computer-generated perceptual information. AR is increasingly used to develop mobile applications that can be used by doctors and surgeons as a training experience. It simulates a real-world experience of diagnosis and surgery and, by doing so, enhances the knowledge and practical skill that all doctors must necessarily possess. This form of training is not limited in nature and can, therefore, simultaneously train a large number of medical practitioners.

Big Data Analytics

Big Data has the potential to provide comprehensive statistical information that can only be accessed and processed through sophisticated software. Big Data Analytics becomes extremely useful when it comes to managing a hospital's resources and records in an efficient manner. Aside from this, it is used in the development of mobile applications that store all patient data, thus again eliminating the need for excessive paperwork. This allows medical professionals to focus more on attending to and treating patients, rather than managing databases.

These technological trends have led to the development of a diverse variety of mobile applications used for multiple purposes in the healthcare industry. Listed below are the benefits of the mobile apps deploying these technological trends, for professionals and patients alike.

Telemedicine

Mobile applications can potentially play a crucial role in making medical services available to the masses. An example is an on-call physician on telemedicine duty. A mobile application will allow the physician to be available for a patient consult without having to operate via a PC. This will make doctors more accessible and will bring quality treatment to patients quickly.
Enhanced Patient Engagement

There are mobile applications that place all patient data, from past medical history to performance metrics, patient feedback, and changes in treatment patterns and schedules, at the push of a button on the smartphone application for the medical professional to consider and make a decision on the go. Since all data is recorded in real time, it is easy for doctors to change shifts without having to explain the condition of the patient to the next doctor in person. The mobile application has all the data the supervisors or nurses need.

Easy Access to Medical Facilities

There are a number of mobile applications that allow patients to search for medical professionals in their area, read reviews and feedback from other patients, and then make an online appointment if they are satisfied with the information they find. Apart from these, they can also download and store their medical lab reports and order medicines online at affordable prices.

Easy Payment of Bills

Like in every other sector, mobile applications in healthcare have made monetary transactions extremely easy. Patients or their family members no longer need to spend hours waiting in line to pay their bills. They can instantly pick a payment plan and pay bills immediately, or add reminders to be notified when a bill is due.

Therefore, it can safely be said that the revolution the healthcare industry is undergoing has worked in favor of all the parties involved: medical professionals, patients, hospital management, and mobile app developers.

Author's Bio

Ritesh Patil is the co-founder of Mobisoft Infotech, which helps startups and enterprises with mobile technology. He's an avid blogger and writes on mobile application development. He has developed innovative mobile applications across various fields, such as Finance, Insurance, Health, Entertainment, Productivity, Social Causes, and Education, and has bagged numerous awards for the same.
Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions
How IBM Watson is paving the road for Healthcare 3.0
7 Popular Applications of Artificial Intelligence in Healthcare

Why Retailers need to prioritize eCommerce Automation in 2019

Guest Contributor
14 Jan 2019
6 min read
The retail giant Amazon plans to reinvent in-store shopping in much the same way it revolutionized online shopping. Amazon's cashierless stores (3,000 of which you can expect by 2021) give a glimpse into what the future of eCommerce could look like with automation. eCommerce automation involves combining the right eCommerce software with the right processes and the right people to streamline and automate the order lifecycle. This reduces the complexity and redundancy of many tasks that an eCommerce retailer typically faces from the time a customer places an order until the time it is delivered. Let's look at why eCommerce retailers should prioritize automation in 2019.

1. Augmented Customer Experience + Personalization

A PwC study titled "Experience is Everything" suggests that 42% of consumers would pay more for a friendly, welcoming experience. Back in 2015, Gartner predicted that nearly 50% of companies would implement changes in their business model in order to augment customer experience. This is especially true for eCommerce, and automation certainly represents one of these changes. Customization and personalization of services are a huge boost for customer experience. A BCG report revealed that retailers who implemented personalization strategies saw a 6-10% boost in their sales. How can automation help? To start with, you can automate your email marketing campaigns and make them more personalized by adding recommendations, discount codes, and more.

2. Fraud Prevention

The scope for fraud on the Internet is huge. According to the October 2017 Global Fraud Index, Account Takeover fraud cost online retailers a whopping $3.3 billion in Q2 2017 alone. Additionally, the average transaction rate for eCommerce fraud has also been on the rise.
eCommerce retailers have been using a number of fraud prevention tools, such as address verification services, CVN (card verification number) checks, and credit history checks, to verify the buyer's identity. An eCommerce tool equipped with machine learning capabilities, such as Shuup, can detect fraudulent activity and effectively run through thousands of checks in the blink of an eye. Automating order handling can ensure that these preliminary checks are carried out without fail, in addition to carrying out specific checks to assess the riskiness of a particular order. Depending on the nature and scale of the enterprise, different retailers will want to set different thresholds for fraud detection, and eCommerce automation makes that possible. If a medium-risk transaction is detected, the system can be automated to notify the finance department for immediate review, whereas high-risk transactions can be canceled immediately. Automating your eCommerce software processes allows you to break the mold of one-size-fits-all coding and make the solution specific to the needs of your organization.

3. Better Customer Service

Customer service and support is an essential part of the buying process. Automated customer support does not necessarily mean your customers will get canned responses to all their queries. Rather, it means that common queries can be dealt with more efficiently. Live chats and chatbots have become incredibly popular with eCommerce retailers because these features offer convenience to both the customer and the retailer. The retailer's support staff will not be held up with routine inquiries, and the customer can get their queries resolved quickly. Timely responses are a huge part of what constitutes a positive customer experience. Live chat can even be used for shopping carts in order to decrease cart abandonment rates.
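The threshold-based fraud routing described earlier, where medium-risk orders go to finance for review and high-risk orders are canceled, can be sketched as a simple rule layer on top of whatever risk score a fraud model produces. The thresholds, order IDs, and action names below are hypothetical, purely for illustration:

```python
# Hypothetical sketch: route an order based on a fraud-risk score in [0, 1].
# The score itself would come from an ML model or rule engine; here it is
# just an input. Thresholds are illustrative and should be tuned per retailer.

LOW_RISK_MAX = 0.3
MEDIUM_RISK_MAX = 0.7

def route_order(order_id, risk_score):
    """Return the action to take for an order, given its risk score."""
    if risk_score <= LOW_RISK_MAX:
        return ("approve", order_id)
    elif risk_score <= MEDIUM_RISK_MAX:
        # Medium risk: hold the order and notify finance for manual review.
        return ("notify_finance", order_id)
    else:
        # High risk: cancel immediately.
        return ("cancel", order_id)

print(route_order("A-1001", 0.12))  # low risk
print(route_order("A-1002", 0.55))  # medium risk
print(route_order("A-1003", 0.91))  # high risk
```

The point of keeping the thresholds as plain named constants is that each retailer can set its own risk appetite without touching the routing logic.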
Automating priority tickets, as well as the follow-up on resolved and unresolved tickets, is another way to automate customer service. With new-generation automated CRM systems and help desk software, you can automate the tags that are used to filter tickets. This saves the customer service rep's time and ensures that priority tickets are resolved quickly. Customer support can thus be made a strategic asset in your overall strategy. An Oracle study suggests that back in 2011, 50% of customers would give a brand at most one week to respond to a query before they stopped doing business with them. You can imagine what those same numbers for 2018 would be.

4. Order Fulfillment

Physical fulfillment of orders is prone to errors as it requires humans to oversee the warehouse selection. With an automated solution, you can set up order fulfillment to match the warehouse requirements as closely as possible. This ensures that the closest warehouse that has the required item in stock is selected, which helps guarantee timely delivery of the order to the customer. It can also be set up to integrate with your billing software so as to calculate accurate billing/shipping charges, taxes (for out-of-state or overseas shipments), and so on. Automation can also help manage your inventory effectively. Product availability is updated automatically with each transaction, be it a return or the addition of a new product. If the stock of a particular in-demand item nears danger levels, your eCommerce software can send an automated email to the supplier asking to replenish its stock ASAP. Automation ensures that your prerequisites to order fulfillment are met for successful processing and delivery of an order.

Challenges with eCommerce automation

The aforementioned benefits are not without some risks, just as any evolving concept tends to be. One of the most important challenges to making automation work smoothly for eCommerce is data accuracy.
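The warehouse selection rule from the Order Fulfillment section, pick the nearest warehouse that actually has the item in stock, can be sketched in a few lines. All warehouse names, distances, and stock levels below are made up for illustration; a real system would query live inventory and compute actual shipping distances:

```python
# Hypothetical sketch: pick the closest warehouse that has the item in stock.
warehouses = [
    {"name": "east",  "distance_km": 120, "stock": {"sku-1": 4, "sku-2": 0}},
    {"name": "west",  "distance_km": 300, "stock": {"sku-1": 9, "sku-2": 7}},
    {"name": "north", "distance_km": 80,  "stock": {"sku-1": 0, "sku-2": 2}},
]

def pick_warehouse(sku, qty, warehouses):
    """Return the nearest warehouse with at least `qty` units of `sku`, or None."""
    candidates = [w for w in warehouses if w["stock"].get(sku, 0) >= qty]
    if not candidates:
        return None  # would trigger a back-order / restock workflow instead
    return min(candidates, key=lambda w: w["distance_km"])

print(pick_warehouse("sku-1", 2, warehouses)["name"])  # "east": nearest with stock
print(pick_warehouse("sku-2", 1, warehouses)["name"])  # "north"
```

Note that "north" is the closest warehouse overall, but it is skipped for sku-1 because it has none in stock, which is exactly the filtering-before-ranking behavior the text describes.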
As eCommerce platforms gear up for greater multichannel/omnichannel retailing strategies, automation is certainly going to help them bolster and enhance their workflows, but only if the right checks are in place. Automation still has a long way to go, so for now, it might be best to focus on automating tasks that take up a lot of time, such as updating inventory, reserving products for certain customers, and updating customers on their orders. Advanced applications such as fraud detection might still take a few years to be truly 'automated' and free from any need for human review. For now, eCommerce retailers still have a whole lot to look forward to. All things considered, automating the key functions of your eCommerce platform will impart flexibility, agility, and scope for improved customer experience. Invest wisely in automation solutions for your eCommerce software to stay competitive in the dynamic, unpredictable eCommerce retail scenario.

Author Bio

Fretty Francis works as a Content Marketing Specialist at SoftwareSuggest, an online platform that recommends software solutions to businesses. Her areas of expertise include eCommerce platforms, hotel software, and project management software. In her spare time, she likes to travel and catch up on the latest technologies.

Software developer tops the 100 Best Jobs of 2019 list by U.S. News and World Report
IBM Q System One, IBM's standalone quantum computer unveiled at CES 2019
Announcing 'TypeScript Roadmap' for January 2019 - June 2019

5 types of deep transfer learning

Bhagyashree R
25 Nov 2018
5 min read
Transfer learning is a method of reusing a model or knowledge for another related task. Transfer learning is sometimes also considered an extension of existing ML algorithms. Extensive research is being done in the context of transfer learning and on understanding how knowledge can be transferred among tasks. However, the Neural Information Processing Systems (NIPS) 1995 workshop Learning to Learn: Knowledge Consolidation and Transfer in Inductive Systems is believed to have provided the initial motivation for research in this field. The literature on transfer learning has gone through many iterations, and the terms associated with it have been used loosely and often interchangeably. Hence, it is sometimes confusing to differentiate between transfer learning, domain adaptation, and multitask learning. Rest assured, these are all related and try to solve similar problems. In this article, we will look into the five types of deep transfer learning to get more clarity on how they differ from each other.

This article is an excerpt from a book written by Dipanjan Sarkar, Raghav Bali, and Tamoghna Ghosh titled Hands-On Transfer Learning with Python. This book covers deep learning and transfer learning in detail. It also focuses on real-world examples and research problems using TensorFlow, Keras, and the Python ecosystem with hands-on examples.

#1 Domain adaptation

Domain adaptation is usually referred to in scenarios where the marginal probabilities of the source and target domains differ, such as P(Xs) ≠ P(Xt). There is an inherent shift or drift in the data distribution of the source and target domains that requires tweaks to transfer the learning. For instance, a corpus of movie reviews labeled as positive or negative would be different from a corpus of product-review sentiments.
A classifier trained on movie-review sentiment would see a different distribution if utilized to classify product reviews. Thus, domain adaptation techniques are utilized in transfer learning in these scenarios.

#2 Domain confusion

Different layers in a deep learning network capture different sets of features. We can utilize this fact to learn domain-invariant features and improve their transferability across domains. Instead of allowing the model to learn any representation, we nudge the representations of both domains to be as similar as possible. This can be achieved by applying certain preprocessing steps directly to the representations themselves. Some of these have been discussed by Baochen Sun, Jiashi Feng, and Kate Saenko in their paper Return of Frustratingly Easy Domain Adaptation. This nudge toward similarity of representation has also been presented by Ganin et al. in their paper Domain-Adversarial Training of Neural Networks. The basic idea behind this technique is to add another objective to the source model to encourage similarity by confusing the domain itself, hence domain confusion.

#3 Multitask learning

Multitask learning is a slightly different flavor of the transfer learning world. In multitask learning, several tasks are learned simultaneously, without distinction between source and targets. The learner receives information about multiple tasks at once, as compared to transfer learning, where the learner initially has no idea about the target task. This is depicted in the following diagram:

Multitask learning: the learner receives information from all tasks simultaneously

#4 One-shot learning

Deep learning systems are data hungry by nature: they need many training examples to learn the weights. This is one of the limiting aspects of deep neural networks, though such is not the case with human learning.
For instance, once a child is shown what an apple looks like, they can easily identify a different variety of apple (with one or a few training examples); this is not the case with ML and deep learning algorithms. One-shot learning is a variant of transfer learning where we try to infer the required output based on just one or a few training examples. This is essentially helpful in real-world scenarios where it is not possible to have labeled data for every possible class (in a classification task) and in scenarios where new classes can be added often. The landmark paper by Fei-Fei and their co-authors, One Shot Learning of Object Categories, is supposedly what coined the term one-shot learning and started research in this subfield. The paper presented a variation on a Bayesian framework for representation learning for object categorization. This approach has since been improved upon and applied using deep learning systems.

#5 Zero-shot learning

Zero-shot learning is another extreme variant of transfer learning, which relies on no labeled examples to learn a task. This might sound unbelievable, especially when learning using examples is what most supervised learning algorithms are about. Zero-data learning, or zero-shot learning, methods make clever adjustments during the training stage itself to exploit additional information to understand unseen data. In their book on deep learning, Goodfellow and their co-authors present zero-shot learning as a scenario where three variables are learned: the traditional input variable, x, the traditional output variable, y, and an additional random variable that describes the task, T. The model is thus trained to learn the conditional probability distribution P(y | x, T). Zero-shot learning comes in handy in scenarios such as machine translation, where we may not even have labels in the target language.
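One common way to realize the "additional information" idea behind zero-shot learning is to describe each class with side information, for example a vector of attributes, and assign an input to whichever class description it matches best, even if no labeled example of that class was ever seen. The attribute vectors and class names below are made up for illustration; in a real system a trained model would map raw inputs into this attribute space:

```python
import math

# Hypothetical zero-shot sketch: classes are described by attribute vectors
# (attribute order: [has_stripes, has_mane, is_feline]). A class like "zebra"
# can be recognized without any labeled zebra examples, purely from its
# description, by matching predicted attributes against each description.

class_descriptions = {
    "zebra": [1.0, 1.0, 0.0],   # unseen class: no labeled zebra images needed
    "horse": [0.0, 1.0, 0.0],
    "tiger": [1.0, 0.0, 1.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def zero_shot_classify(predicted_attributes, descriptions):
    """Pick the class whose description best matches the predicted attributes."""
    return max(descriptions, key=lambda c: cosine(predicted_attributes, descriptions[c]))

# Suppose an attribute predictor outputs these scores for an unseen image:
print(zero_shot_classify([0.9, 0.8, 0.05], class_descriptions))  # "zebra"
```

The heavy lifting in practice is learning the mapping from raw inputs to attributes; the final classification step is just this nearest-description match.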
In this article, we learned about the five types of deep transfer learning: domain adaptation, domain confusion, multitask learning, one-shot learning, and zero-shot learning. If you found this post useful, do check out the book, Hands-On Transfer Learning with Python, which covers deep learning and transfer learning in detail. It also focuses on real-world examples and research problems using TensorFlow, Keras, and the Python ecosystem with hands-on examples.

CMU students propose a competitive reinforcement learning approach based on A3C using visual transfer between Atari games
What is Meta Learning?
Is the machine learning process similar to how humans learn?

Deep reinforcement learning - trick or treat?

Bhagyashree R
31 Oct 2018
2 min read
Deep Reinforcement Learning (Deep RL) is the new buzzword in the machine learning world. Deep RL is an approach that combines reinforcement learning and deep learning in order to achieve human-level performance. It brings together the self-learning approach of learning successful strategies that lead to the greatest long-term rewards, and allows agents to construct and learn their own knowledge directly from raw inputs. The fusion of these two approaches introduced many algorithms, starting with DeepMind's Deep Q Network (DQN), a deep variant of the Q-learning algorithm. This algorithm reached human-level performance in playing Atari games. Combining Q-learning with reasonably sized neural networks and some optimization tricks, you can achieve human or superhuman performance in several Atari games. Deep RL also produced one of the most notable advancements in AI: AlphaGo. The DeepMind agent was able to beat the human world champions Lee Sedol (4-1) and Fan Hui (5-0). DeepMind then released advanced versions of its agent, called AlphaGo Zero and AlphaZero. Many recent works from researchers at UC Berkeley have shown how both reinforcement learning and deep reinforcement learning have enabled the control of complex robots, both for locomotion and navigation. Despite these successes, it is quite difficult to find cases where deep RL has added practical real-world value; it is still largely a research topic. One of its limitations is that it assumes the existence of a reward function, which is either given or hand-tuned offline. To get the desired results, your reward function must capture exactly what you want. RL has an annoying tendency to overfit to your reward, resulting in things you haven't expected. This is the reason why Atari is a benchmark: not only is it easy to get a lot of samples, but the goal is fairly straightforward, i.e., to maximize the score.
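The Q-learning rule that DQN approximates with a neural network can be illustrated in its simpler tabular form. This is a minimal sketch on a made-up two-state toy problem, not an Atari-scale implementation; DQN replaces the table below with a network, but the update is the same:

```python
import random

# Minimal tabular Q-learning sketch: two states, two actions.
# Update rule: Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))

random.seed(0)
alpha, gamma = 0.5, 0.9
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

def step(state, action):
    """Toy environment: only action 1 in state 0 yields a reward."""
    if state == 0 and action == 1:
        return 1, 1.0   # (next_state, reward)
    return 0, 0.0

for _ in range(200):
    s = random.choice((0, 1))
    a = random.choice((0, 1))            # explore with random actions
    s_next, r = step(s, a)
    best_next = max(Q[(s_next, 0)], Q[(s_next, 1)])
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# After training, the learned values favor the rewarding action in state 0:
print(Q[(0, 1)] > Q[(0, 0)])  # True
```

Everything hard about deep RL, exploration strategy, replay buffers, target networks, reward design, sits on top of this one-line update.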
With so many researchers working towards introducing improved Deep RL algorithms, it surely is a treat. AlphaZero: The genesis of machine intuition DeepMind open sources TRFL, a new library of reinforcement learning building blocks Understanding Deep Reinforcement Learning by understanding the Markov Decision Process [Tutorial]

Teaching AI ethics - Trick or Treat?

Natasha Mathur
31 Oct 2018
5 min read
The Public Voice Coalition announced Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018 last week. "The rise of AI decision-making also implicates fundamental rights of fairness, accountability, and transparency. Modern data analysis produces significant outcomes that have real-life consequences for people in employment, housing, credit, commerce, and criminal sentencing. Many of these techniques are entirely opaque, leaving individuals unaware whether the decisions were accurate, fair, or even about them. We propose these Universal Guidelines to inform and improve the design and use of AI", reads EPIC's guidelines page. Artificial intelligence ethics aims to improve the design and use of AI, to minimize its risks to society, and to ensure the protection of human rights. AI ethics focuses on values such as transparency, fairness, reliability, validity, accountability, accuracy, and public safety.

Why teach AI ethics?

Without AI ethics, the wonders of AI can turn into the dangers of AI, posing serious threats to society and even human lives. One such example occurred earlier this year, when an autonomous Uber car, a 2017 Volvo SUV traveling at roughly 40 miles an hour, killed a woman in the street in Arizona. This incident brings out the challenges and nuances of building an AI system with the right set of values embedded in it. As different factors are considered for an algorithm to reach the required set of outcomes, it is more than possible that these criteria are not always shared transparently with users and authorities. Other non-life-threatening but still dangerous examples include the time Google Allo responded with a turban emoji on being asked to suggest three emoji responses to a gun emoji, and when Microsoft's Twitter bot Tay tweeted racist and sexist comments.
AI scientists should be taught from the early stages that these values are meant to be at the forefront when deciding on factors such as the design, logic, techniques, and outcome of an AI project.

Universities and organizations promoting learning about AI ethics

What's encouraging is that organizations and universities are taking steps (slowly but surely) to promote the importance of teaching ethics to students and employees working with AI or machine learning systems. For instance, the World Economic Forum Global Future Councils on Artificial Intelligence and Robotics has come out with a "Teaching AI ethics" project that includes creating a repository of actionable and useful materials for faculties wishing to add social inquiry and discourse into their AI coursework. This is a great opportunity, as the project connects professors from around the world and offers them a platform to share, learn, and customize their curriculum to include a focus on AI ethics. Cornell, Harvard, MIT, Stanford, and the University of Texas are some of the universities that recently introduced courses on ethics when designing autonomous and intelligent systems. These courses put an emphasis on the ethical, legal, and policy implications of AI, along with teaching students about dealing with challenges such as biased datasets. Mozilla has taken the initiative to make people more aware of the social implications of AI in our society through Mozilla's Creative Media Awards. "We're seeking projects that explore artificial intelligence and machine learning. In a world where biased algorithms, skewed data sets, and broken recommendation engines can radicalize YouTube users, promote racism, and spread fake news, it's more important than ever to support artwork and advocacy work that educates and engages internet users", reads the Mozilla awards page.
Moreover, Mozilla also announced a $3.5 million award for the 'Responsible Computer Science Challenge' to encourage teaching ethical coding to CS graduates. Other examples include Google's AI ethics principles, announced back in June, to abide by when developing AI projects, and SAP's AI ethics guidelines and advisory panel, created last month. SAP says that it designed these guidelines because it "considers the ethical use of data a core value. We want to create software that enables intelligent enterprise and actually improves people's lives. Such principles will serve as the basis to make AI a technology that augments human talent". Other organizations, like DrivenData, have come out with tools like Deon, a handy tool that helps data scientists add an ethics checklist to their data science projects, making sure that all projects are designed with ethics at the center. Some, however, feel that having to explain how an AI system reached a particular outcome (in the name of transparency) can put a damper on its capabilities. For instance, according to David Weinberger, a senior researcher at the Harvard Berkman Klein Center for Internet & Society, "demanding explicability sounds fine, but achieving it may require making artificial intelligence artificially stupid".

Teaching AI ethics - trick or treat?

AI has transformed the world as we know it. It has taken over different spheres of our lives and made things much simpler for us. However, to make sure that AI continues to deliver its transformative and evolutionary benefits effectively, we need ethics. From governments to tech organizations to young data scientists, everyone must use this tech responsibly. Having AI ethics in place is an integral part of the AI development process and will shape a healthy future for robotics and artificial intelligence. That is why teaching AI ethics is a sure-shot treat: a treat that will boost the productivity of humans in AI, and help build a better tomorrow.

Uses of Machine Learning in Gaming

Natasha Mathur
22 Oct 2018
5 min read
All around us, our perception of learning and intellect is being challenged daily by the advent of new and emerging technologies. From self-driving cars, to playing Go and chess, to computers being able to beat humans at classic Atari games, the group of technologies we colloquially call machine learning has come to dominate a new era in technological growth, one whose importance has been compared to the discovery of electricity, and which has already been categorized as the next human technological age. Games and simulations are no strangers to AI technologies, and there are numerous assets available to the Unity developer for providing simulated machine intelligence. These include content like Behavior Trees, Finite State Machines, navigation meshes, A*, and other heuristic ways game developers simulate intelligence. So, why machine learning and why now? The reason is due in large part to the OpenAI initiative, which encourages research across academia and industry to share ideas and research on AI and ML. This has resulted in an explosion of growth in new ideas, methods, and areas for research. For games and simulations, this means we no longer have to fake or simulate intelligence; we can build agents that learn from their environment and even learn to beat their human builders. This article is an excerpt taken from the book 'Learn Unity ML-Agents – Fundamentals of Unity Machine Learning' by Micheal Lanham. In this article, we look at the role that machine learning plays in game development. Machine learning is an implementation of artificial intelligence: a way for a computer to assimilate data or state and provide a learned solution or response. We often think of AI now as a broader term to reflect a "smart" system.
A full game AI system, for instance, may incorporate ML tools combined with more classic AIs like Behavior Trees in order to simulate a richer, more unpredictable AI. We will use AI to describe a system and ML to describe the implementation.

How Machine Learning is useful in gaming

Game engines have embraced the idea of incorporating ML into all aspects of their products, and not just for use as a game AI. While most developers may try to use ML for gaming, it certainly helps game development in the following areas:

Map/Level Generation: There are already plenty of examples where developers have used ML to auto-generate everything from dungeons to realistic terrain. Getting this right can provide a game with endless replayability, but it can be some of the most challenging ML to develop.

Texture/Shader Generation: Another area getting the attention of ML is texture and shader generation. These technologies are getting a boost brought on by the attention on advanced generative adversarial networks, or GANs. There are plenty of great and fun examples of this tech in action; just search for DEEP FAKES in your favorite search engine.

Model Generation: There are a few projects coming to fruition in this area that could greatly simplify 3D object construction through enhanced scanning and/or auto-generation. Imagine being able to textually describe a simple model and having ML build it for you, in real time, in a game or other AR/VR/MR app.

Audio Generation: Being able to generate audio sound effects or music on the fly is already being worked on for other areas, not just games. Yet, just imagine being able to have a custom-designed soundtrack for your game developed by ML.

Artificial Players: This encompasses many uses, from gamers using ML to play the game on their behalf, to developers using artificial players as enhanced test agents or as a way to engage players during low activity.
If your game is simple enough, this could also be a way of auto-testing levels.

NPCs or Game AI: Currently, there are better patterns out there to model basic behavioral intelligence in the form of Behavior Trees. While it's unlikely that BTs or other similar patterns will go away any time soon, imagine being able to model an NPC that may actually exhibit unpredictable but rather cool behavior. This opens all sorts of possibilities that excite not only developers but players as well.

So, we learned about different areas of the gaming world, such as model generation, artificial players, NPCs, and level generation, where machine learning can be used extensively. If you found this post useful, be sure to check out the book 'Learn Unity ML-Agents – Fundamentals of Unity Machine Learning' to learn more machine learning concepts in gaming.

5 Ways Artificial Intelligence is Transforming the Gaming Industry
How should web developers learn machine learning?
Deep Learning in games – Neural Networks set to design virtual worlds
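To make the level-generation idea above concrete, here is a minimal sketch of one classic learned approach: estimating tile-to-tile transition probabilities from an example level and sampling new content from them (a Markov chain). The tile symbols and the training row are made up for illustration; real systems train on many levels and use far richer models:

```python
import random
from collections import defaultdict

# Minimal learned level-generation sketch: count which tile tends to follow
# which in example data, then sample a new row with similar statistics.
# Tiles: '.' = ground, '#' = wall, '^' = spike. The training row is made up.

example_row = "....#....^....##....."

transitions = defaultdict(list)
for a, b in zip(example_row, example_row[1:]):
    transitions[a].append(b)   # record every observed "a is followed by b"

def generate_row(length, start=".", seed=42):
    rng = random.Random(seed)
    row = [start]
    for _ in range(length - 1):
        row.append(rng.choice(transitions[row[-1]]))  # sample next tile
    return "".join(row)

new_row = generate_row(20)
print(new_row)        # a new row with ground/wall/spike statistics like the example
print(len(new_row))   # 20
```

The output is never a copy of the training row, but it preserves its local structure, which is the basic trade-off all learned content generation is playing with.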

Julia for machine learning. Will the new language pick up pace?

Prasad Ramesh
20 Oct 2018
4 min read
Machine learning can be done using many languages, with Python and R being the most popular. But one language has been overlooked for some time: Julia. Why isn't Julia machine learning a thing? Julia isn't an obvious choice for machine learning simply because it's a new language that has only recently hit version 1.0. While Python is well established, with a large community and many libraries, Julia simply doesn't have the community to shout about it. And that's a shame. Right now, Julia is used in various fields. From optimizing milk production on dairy farms to parallel supercomputing for astronomy, Julia has a wide range of applications. A common theme here is that these applications all require numerical, scientific, and sometimes parallel computation. Julia is well suited to the sort of tasks where intensive computation is essential. Viral Shah, CEO of Julia Computing, said to Forbes: "Amazon, Apple, Disney, Facebook, Ford, Google, Grindr, IBM, Microsoft, NASA, Oracle and Uber are other Julia users, partners and organizations hiring Julia programmers." Clearly, Julia is powering the analytical nous of some of the most high-profile organizations on the planet. Perhaps it just needs more cheerleading to go truly mainstream.

Why Julia is a great language for machine learning

Julia was originally designed for high-performance numerical analysis. This means that everything that has gone into its design is built for the very things you need to do to build effective machine learning systems.

Speed and functionality

Julia combines functionality from various popular languages like Python, R, Matlab, SAS, and Stata with the speed of C++ and Java. Many standard LaTeX symbols can be used in Julia, with the syntax usually being the same as in LaTeX. This mathematical syntax makes it easy to implement mathematical formulae in code, and makes Julia machine learning possible.
It also has built-in support for parallelism, which allows the utilization of multiple cores at once, making computations fast. Julia's loops and functions are fast enough that you will likely notice significant performance differences against other languages; performance can be almost comparable to C, with very little code actually used. With packages like ArrayFire, generic code can be run on GPUs. In Julia, the multiple dispatch feature is very useful for defining number and array-like datatypes. Matrices and data tables work with good compatibility and performance. Julia has automatic garbage collection and a collection of libraries for mathematical calculations, linear algebra, random number generation, and regular expression matching.

Libraries and scalability

Julia machine learning can be done with powerful tools like MLBase.jl, Flux.jl, and Knet.jl, which can be used for machine learning and artificial intelligence systems. It also has a scikit-learn implementation called ScikitLearn.jl. Although ScikitLearn.jl is not an official port, it is a useful additional tool for building machine learning systems with Julia. As if all those weren't enough, Julia also has TensorFlow.jl and MXNet.jl. So, if you already have experience with these tools in other implementations, the transition is a little easier than learning everything from scratch. Julia is also incredibly scalable. It can be deployed on large clusters quickly, which is vital if you're working with big data across a distributed system.

Should you consider Julia for machine learning?

Because it's fast and possesses a great range of features, Julia could potentially overtake both Python and R as the language of choice for machine learning in the future. Okay, maybe we shouldn't get ahead of ourselves. But with Julia reaching the 1.0 milestone, and the language rising on the TIOBE index, you certainly shouldn't rule out Julia when it comes to machine learning.
Julia is also available in the popular Jupyter Notebook, paving a path for wider adoption. A note of caution, however: rather than simply dropping everything for Julia, it is worth monitoring the growth of the language. Over the next 12 to 24 months we'll likely see new projects and libraries, and the Julia machine learning community expanding. If you start hearing more noise about the language, it becomes a much safer option to invest your time and energy in learning it. If you are just starting off with machine learning, you should stick to the more established languages. An experienced engineer who already has a good grip on other languages, however, shouldn't be scared of experimenting with Julia: it gives you another option, and might just help you uncover new ways of working and solving problems.

Julia 1.0 has just been released
What makes functional programming a viable choice for artificial intelligence projects?
Best Machine Learning Datasets for beginners
Bhagyashree R
15 Oct 2018
10 min read
How is Artificial Intelligence changing the mobile developer role?

Last year at Google I/O, Sundar Pichai, the CEO of Google, said: "We are moving from a mobile-first world to an AI-first world." Is this only applicable to Google? Not really. In the recent past we have seen several advancements in Artificial Intelligence and, in parallel, a plethora of intelligent apps coming into the market. These advancements are enabling developers to take their apps to the next level by integrating recommendation services, image recognition, speech recognition, voice translation, and many more cool capabilities.

Artificial Intelligence is becoming a potent tool for mobile developers to experiment and innovate with. The Artificial Intelligence components that are integral to mobile experiences, such as voice-based assistants and location-based services, increasingly require mobile developers to have a basic understanding of Artificial Intelligence to be effective. Of course, you don't have to be an Artificial Intelligence expert to include intelligent components in your app. But you should definitely understand something about what you're building into your app and why; after all, AI in mobile is not just limited to calling an API, is it? There's more to it, and in this article we will explore how Artificial Intelligence will shape the mobile developer role in the immediate future.

Read also: AI on mobile: How AI is taking over the mobile devices marketspace

What is changing in the mobile developer role?

Focus shifting to data

With Artificial Intelligence becoming more and more accessible, intelligent apps are becoming the new norm for businesses. Artificial Intelligence strengthens the relationship between brands and customers, inspiring developers to build smart apps that increase user retention. This also means that developers have to direct their focus to data. They have to understand things like how the data will be collected, how it will be fed to machines, and how often new data input will be needed.
When nearly 1 in 4 people abandon an app after its first use, as a mobile app developer you need to rethink how you drive in-app personalization and engagement.

Explore a "humanized" way of user-app interaction

With assistants such as Siri and Google Assistant coming into the market, we can see that "humanizing" the interaction between the user and the app is becoming mainstream. "Humanizing" is the process by which the app becomes relatable to the user; the more effectively it is done, the more the end user will interact with the app. Users now want easy navigation and search, and Artificial Intelligence fits perfectly into this scenario. Advances in technologies like text-to-speech, speech-to-text, Natural Language Processing, and cloud services in general have contributed to the mass adoption of these types of interfaces.

Companies are increasingly expecting mobile developers to be comfortable working with AI functionalities

Artificial Intelligence is the future. Companies now expect their mobile developers to know how to handle the huge amount of data generated every day and how to use it. Here is an example of what Google wants its engineers to do:

"We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day."

This open-ended requirement list shows that it is the right time to learn and embrace Artificial Intelligence as soon as possible.

What skills do you need to build intelligent apps?

Ideally, data scientists are the ones who conceptualize mathematical models, and machine learning engineers are the ones who translate them into code and train the models.
But when you are working in a resource-tight environment, for example in a start-up, you will be responsible for doing the end-to-end job. It is not as scary as it sounds, because you have several resources to get started with!

Taking your first steps with machine learning as a service

Learning anything starts with motivating yourself. Diving straight into the maths and coding of machine learning might exhaust and bore you. That's why it's a good idea to know what the end goal of your learning process is going to be, and what types of solutions are possible using machine learning. There are many products you can try to get started quickly, such as Google Cloud AutoML (Beta), Firebase ML Kit (Beta), and the Fritz Mobile SDK, among others.

Read also: Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence

Getting your hands dirty

After this "warm-up", the next step involves creating and training your own model. This is where you'll be introduced to TensorFlow Lite, which is going to be your best friend throughout your journey as a machine learning mobile developer. There are many other machine learning tools coming onto the market that make building AI into mobile apps easier. For instance, you can use Dialogflow, a Natural Language Understanding (NLU) platform that makes it easy for developers to design and integrate conversational user interfaces into mobile apps, web applications, devices, and bots. You can then integrate it with Alexa, Cortana, Facebook Messenger, and the other platforms your users are on.

Read also: 7 Artificial Intelligence tools mobile developers need to know

For practice, you can leverage an excellent codelab by Google, TensorFlow For Poets, which guides you through creating and training a custom image classification model.
Through this codelab you will learn the basics of data collection, model optimization, and the other key components involved in creating your own model. The codelab is divided into two parts: the first covers creating and training the model, and the second focuses on TensorFlow Lite, the mobile version of TensorFlow that allows you to run the same model on a mobile device.

Mathematics is the foundation of machine learning

Love it or hate it, machine learning and Artificial Intelligence are built on mathematical principles like calculus, linear algebra, probability, statistics, and optimization. You need to learn the essential foundational concepts and the notation used to express them. There are many reasons why learning mathematics for machine learning is important. It will help you select the right algorithm, which involves giving consideration to accuracy, training time, model complexity, number of parameters, and number of features. Maths is also needed when choosing parameter settings and validation strategies, and when identifying underfitting and overfitting by understanding the bias-variance tradeoff.

Read also: Bias-Variance tradeoff: How to choose between bias and variance for your machine learning model [Tutorial]

Read also: What is Statistical Analysis and why does it matter?

What are the key aspects of Artificial Intelligence for mobile to keep in mind?

Understanding the problem

Your number one priority should be the user problem you are trying to solve. Instead of randomly integrating a machine learning model into an application, developers should understand how the model applies to the particular application or use case. This is important because you might end up building a great machine learning model with an excellent accuracy rate, but if it does not solve any problem, it will end up being redundant.
You must also understand that while many business problems require machine learning approaches, not all of them do. Most business problems can be solved through simple analytics or a baseline approach.

Data is your best friend

Machine learning is dependent on data; the data that you use, and how you use it, will define the success of your machine learning model. You can make use of the thousands of open source datasets available online. Google recently launched a tool for dataset search, Google Dataset Search, which will make it easier for you to find the right dataset for your problem. Typically there's no shortage of data; however, the abundance of data does not mean that the data is clean, reliable, or can be used as intended. Data cleanliness is a huge issue. For example, a typical company will have multiple customer records for a single individual, all of which differ slightly. If the data isn't clean, it isn't reliable. The bottom line is that it's bad practice to just grab the data and use it without considering its origin.

Read also: Best Machine Learning Datasets for beginners

Decide which model to choose

A machine learning algorithm is trained, and the artifact it creates after the training process is called the machine learning model. An ML model is used to find patterns in data without the developer having to explicitly program those patterns. We cannot look through such a huge amount of data and understand the patterns ourselves; think of the model as a helper that will look through all those terabytes of data and extract knowledge and insights from them. You have two choices here: either create your own model or use a pre-built one. While there are several pre-built models available, your business-specific use cases may require specialized models to yield the desired results. These off-the-shelf models may also need some fine-tuning or modification to deliver the value the app is intended to provide.
Read also: 10 machine learning algorithms every engineer needs to know

Thinking about resource utilization is important

Artificial Intelligence-powered apps, and apps in general, should be developed with resource utilization in mind. Though companies are working towards improving mobile hardware, it is currently no match for what we can accomplish with GPU clusters in the cloud. Therefore, developers need to consider how the models they intend to use would affect resources, including battery power and memory usage. In terms of computational resources, inferencing (making predictions) is less costly than training. Inferencing on the device means the model needs to be loaded into RAM, and it also requires significant computational time on the GPU or CPU. In scenarios that involve continuous inferencing, such as on audio and image data that can chew up bandwidth quickly, on-device inferencing is a good choice.

Learning never stops

Maintenance is important, and to do it you need to establish a feedback loop and have a process and culture of continuous evaluation and improvement. A change in consumer behavior or a market trend can make a negative impact on the model. Eventually something will break or no longer work as intended, which is another reason why developers need to understand the basics of what it is they're adding to an app. You need some knowledge of how the Artificial Intelligence component you just put together is working, and how it could be made to run faster.

Wrapping up

Before falling for the Artificial Intelligence and machine learning hype, it's important to understand and analyze the problem you are trying to solve. You should examine whether applying machine learning can improve the quality of the service, and decide whether this improvement justifies the effort of deploying a machine learning model.
If you just want a simple API endpoint and don't want to dedicate much time to deploying a model, cloud-based web services are the best option for you. A tool like ML Kit for Firebase looks promising and seems like a good choice for startups or developers just starting out. TensorFlow Lite and Core ML are good options if you have mobile developers on your team or if you're willing to get your hands a little dirty. Artificial Intelligence is influencing the app development process by providing a data-driven approach to solving user problems. It wouldn't be surprising if, in the near future, Artificial Intelligence becomes a defining factor in app developers' expertise and creativity.

10 useful Google Cloud Artificial Intelligence services for your next machine learning project [Tutorial]
How Artificial Intelligence is going to transform the Data Center
How Serverless computing is making Artificial Intelligence development easier
Amarabha Banerjee
30 Sep 2018
6 min read
Is YouTube's AI Algorithm evil?

YouTube has been at the center of content creation, content distribution, and advertising for some time now. Its impact can be estimated from its 1.8 billion users worldwide. While the YouTube video hosting concept has been a great success story for content creators, the video viewing and recommendation model has been at the center of a brewing controversy lately.

The Controversy

Logan Paul was already a top-rated YouTube star when he stumbled across a hanging dead body in a Japanese forest famous as a suicide spot. After the initial shock and awe, Logan Paul seemed quite amused and commented, "Dude, his hands are purple," then turned to his friends and giggled: "You ever stand next to a dead guy?" This was a shocking moment for YouTubers across the globe. Disapproving reactions poured in, and the video was taken down by YouTube 24 hours later. In those 24 hours, the video managed to garner 6 million views. Even after the furious backlash, users complained that they were still seeing recommendations for Logan Paul's videos. That brought the emphasis back to the recommendation system YouTube uses.

YouTube Video Recommendation

Back in 2005, when YouTube first started out, it had a uniform homepage for all users. This meant that every YouTube user would see the same homepage, and the creators featured there would get a huge boost in viewership. Their selection was based on subscriber count, views, and user engagement metrics, e.g. likes, comments, and shares. This inspired other users to become creators and start contributing content to become a part of the YouTube family. In 2006, YouTube was bought by Google. Its policies and homepage started evolving gradually, and as ads started showing on YouTube videos, the scenario changed quite quickly.
Also, with the rapid rise in the number of users, Google thought it a good idea to curate the homepage according to each user's watch history, subscriptions, and likes. This was a good move in principle, since it helped users see what they wanted to see. As the next level of innovation, a machine learning model was created to suggest or recommend videos to users. The goal of this deep-neural-network-based recommendation engine was to increase the watch time of every video so that users stay longer on the platform.

What did it change, and how?

When YouTube's machine learning algorithm shows a few videos in your feed as "Recommended for you", it predicts what you want to see from your watch history and the watch history of similar users. If you interact with any of these videos and watch one for a certain amount of time, the recommendation engine considers it a success and starts curating a list based on your interactions with its suggested videos. The more data it gathers about your choices and watch history, the more confident it becomes in its own video decisions. The major goal of YouTube's recommendation engine is to attract your attention and get you hooked on the platform for more watch time. More watch time means more revenue and more scope for targeted ads. What this changes is the fundamental concept of choice and the exercise of user discretion. The moment the YouTube algorithm treats watch time as the most important metric for recommending videos, less importance goes to the organic interactions on YouTube: liking, commenting, and subscribing to videos and channels. Users see video recommendations based on the algorithm's understanding of them and its goal of maximizing watch time, with less weight given to user choices.

Distorted Reality and YouTube

This attention-maximizing model is the fundamental working mechanism of almost all social media networks.
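The shift from engagement-based to watch-time-based ranking described above can be made concrete with a toy sketch. All titles and numbers below are invented for illustration; this is in no way YouTube's actual model, just a demonstration of how ranking the same candidates under two different objectives can produce different winners:

```python
# Toy data; all titles and numbers are invented for illustration.
videos = [
    {"title": "calm tutorial", "predicted_watch_min": 4.0, "likes_per_view": 0.09},
    {"title": "outrage clip",  "predicted_watch_min": 11.0, "likes_per_view": 0.02},
    {"title": "music video",   "predicted_watch_min": 3.0, "likes_per_view": 0.12},
]

# Rank the same candidates under the two objectives
top_by_watch_time = max(videos, key=lambda v: v["predicted_watch_min"])
top_by_engagement = max(videos, key=lambda v: v["likes_per_view"])

print(top_by_watch_time["title"])  # → outrage clip
print(top_by_engagement["title"])  # → music video
```

The point of the sketch: a video that people watch for a long time, but rarely like, wins under the watch-time objective even though organic engagement would have ranked it last.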
But YouTube has not been implicated in accusations of distorting reality and spreading fake news as much as Facebook has been in the mainstream media. Times are changing, though, and so are the viewpoints on YouTube's influence on the global population and its ability to manipulate important public opinion. Guillaume Chaslot, a 36-year-old French computer programmer with a Ph.D. in artificial intelligence, was one of the engineers on the core team that developed and perfected the YouTube algorithm. In his own words: "YouTube is something that looks like reality, but it is distorted to make you spend more time online. The recommendation algorithm is not optimizing for what is truthful, or balanced, or healthy for democracy." Chaslot explains that the algorithm never stays the same; it is constantly changing the weight it gives to different signals, such as the viewing patterns of a user, or the length of time a video is watched before someone clicks away. Chaslot was fired by Google in 2013 over performance issues. His claim was that he wanted to change the approach of the YouTube algorithm to align it with democratic values instead of devoting it solely to increasing watch time.

Where are we headed?

I am not qualified or righteous enough to answer the direct question: is YouTube good or bad? YouTube creates opportunities for millions of creators worldwide to showcase their talent and present it to a global audience without worrying about countries or boundaries. That in itself is a huge power for an internet application. But the crucial point to remember is whether YouTube is using this power simply to keep users glued to the screen. Do they really care if you are seeing divisive content or prejudiced flat-earther conspiracies as recommended videos?
The algorithm could be tweaked to include parameters that remove unintended bias, such as whether a video is propagating fake news or influencing voters' minds in an unlawful way. But that is near impossible, as machines lack morality, empathy, and even common sense. Incorporating humane values such as honesty and morality into an AI system would be like creating an AI that is more human than machine. This is why machine-augmented human intelligence will play an increasingly crucial role in the near future. The possibilities are endless, be they good or bad. Whether we progress or digress might not be in our hands anymore. But what might be in our hands is to come together to put effective checkpoints in place to identify and correct scenarios where algorithms run wild.

Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms
Like newspapers, Google algorithms are protected by the First amendment
California replaces cash bail with algorithms
Savia Lobo
28 Sep 2018
5 min read
What is Core ML?

Introduced by Apple, Core ML is a machine learning framework that enables iOS app developers to integrate machine learning technology into their apps. It supports natural language processing (NLP), image analysis, and various other conventional models to provide top-notch on-device performance with a minimal memory footprint and power consumption. This article is an extract from the book Machine Learning with Core ML written by Joshua Newnham. In this article, you will get to know the basics of what Core ML is and its typical workflow. With the release of iOS 11 and Core ML, performing inference is just a matter of a few lines of code. Prior to iOS 11, inference was possible, but it required some work to take a pre-trained model and port it across using an existing framework such as Accelerate or Metal Performance Shaders (MPSes). Accelerate and MPSes are still used under the hood by Core ML, but Core ML takes care of deciding which underlying framework your model should use (Accelerate using the CPU for memory-heavy tasks, and MPSes using the GPU for compute-heavy tasks). It also takes care of abstracting a lot of the details away; this layer of abstraction is shown in the following diagram: There are additional layers too; iOS 11 has introduced and extended domain-specific layers that further abstract many of the common tasks you may encounter when working with image and text data, such as face detection, object tracking, language translation, and named entity recognition (NER).
These domain-specific layers are encapsulated in the Vision and natural language processing (NLP) frameworks; we won't go into the details of these frameworks here, but you will get a chance to use them in later chapters. It's worth noting that these layers are not mutually exclusive, and it is common to find yourself using them together, especially the domain-specific frameworks, which provide useful preprocessing methods you can use to prepare your data before sending it to a Core ML model. So what exactly is Core ML? You can think of Core ML as a suite of tools used to facilitate the process of bringing ML models to iOS and wrapping them in a standard interface so that you can easily access and make use of them in your code. Let's now take a closer look at the typical workflow when working with Core ML.

Core ML Workflow

As described previously, the two main tasks of an ML workflow are training and inference. Training involves obtaining and preparing the data, defining the model, and then the actual training. Once your model has achieved satisfactory results during training and is able to perform adequate predictions (including on data it hasn't seen before), it can be deployed and used for inference on data outside of the training set. Core ML provides a suite of tools to facilitate getting a trained model into iOS; one of these is a Python package called Core ML Tools, which is used to take a model (consisting of the architecture and weights) from one of the many popular frameworks and export a .mlmodel file, which can then be imported into your Xcode project. Once imported, Xcode will generate an interface for the model, making it easily accessible via code you are familiar with. Finally, when you build your app, the model is further optimized and packaged up within your application.
A summary of the process of generating the model is shown in the following diagram, which illustrates the process of creating the .mlmodel file, either by using an existing model from one of the supported frameworks or by training one from scratch. Core ML Tools supports most of the popular frameworks, either directly or through third-party plugins, including Keras, Turi, Caffe, scikit-learn, LIBSVM, and XGBoost. Apple has also made the package open source and modular, for easy adaptation to other frameworks, whether by others or by yourself. The process of importing the model is illustrated in this diagram: In addition, there are frameworks with tighter Core ML integration that handle generating the Core ML model, such as Turi Create, IBM Watson Services for Core ML, and Create ML. We will introduce Create ML in Chapter 10; those interested in learning more about Turi Create and IBM Watson Services for Core ML can refer to the official webpages via the following links:

Turi Create: https://github.com/apple/turicreate
IBM Watson Services for Core ML: https://developer.apple.com/ibm/

Once the model is imported, as mentioned previously, Xcode generates an interface that wraps the model, its inputs, and its outputs. In this post, we learned about the training workflow and how to import a model. If you've enjoyed this post, head over to the book Machine Learning with Core ML to delve into the details of what this model is and what Core ML currently supports.

Emotional AI: Detecting facial expressions and emotions using CoreML [Tutorial]
Build intelligent interfaces with CoreML using a CNN [Tutorial]
Watson-CoreML: IBM and Apple's new machine learning collaboration project
Natasha Mathur
26 Sep 2018
6 min read
How can Artificial Intelligence support your Big Data architecture?

Getting a big data project in place is a tough challenge, but making it deliver results is even harder. That's where artificial intelligence comes in. By integrating artificial intelligence into your big data architecture, you'll be able to better manage and analyze data in a way that provides a substantial impact on your organization. With big data getting even bigger over the next couple of years, AI won't simply be an optional extra; it will be essential. According to IDC, the accumulated volume of big data will increase from 4.4 zettabytes to roughly 44 zettabytes (44 trillion GB) by 2020. Only by using artificial intelligence will you really be able to properly leverage such huge quantities of data. The International Data Corporation (IDC) also predicted a need this year for 181,000 people with deep analytical, data management, and interpretation skills. AI comes to the rescue again: it can ultimately compensate for the lack of analytical resources today with the power of machine learning, which enables automation. Now that we know why big data needs AI, let's have a look at how AI helps big data. For that, you first need to understand the big data architecture. While it's clear that artificial intelligence is an important development in the context of big data, what are the specific ways it can support and augment your big data architecture? It can, in fact, help you across every component in the architecture. That's good news for anyone working with big data, and good for organizations that depend on it for growth as well.

Artificial Intelligence in Big Data Architecture

In a big data architecture, data is collected from different data sources and then moves forward to the other layers.

Artificial Intelligence in data sources

Using machine learning, the process of structuring data becomes easier, thereby making it easier for organizations to store and analyze their data.
Now, keep in mind that large amounts of data from various sources can sometimes make data analysis even harder. This is because we now have access to heterogeneous sources of data that add different dimensions and attributes to the data, which slows down the entire process of collecting it. To make things quicker and more accurate, it's important to consider only the most important dimensions. This process is called data dimensionality reduction (DDR). With DDR, it is important to note that the model should always convey the same information without any loss of insight or intelligence. Principal Component Analysis, or PCA, is a useful machine learning method for dimensionality reduction. PCA performs feature extraction: it combines all the input variables from the data, then drops the "least important" variables while retaining the most valuable parts of all of the variables. Each of the "new" variables after PCA is independent of the others.

Artificial Intelligence in data storage

Once data is collected from the data source, it needs to be stored. AI allows you to automate storage with machine learning, which also makes structuring the data easier. Machine learning models automatically learn to recognize patterns, regularities, and interdependencies from unstructured data, and then adapt, dynamically and independently, to new situations. K-means clustering is one of the most popular unsupervised algorithms for data clustering, used when there is large-scale data without any defined categories or groups. The k-means clustering algorithm performs a pre-clustering or classification of data into larger categories. Unstructured data gets stored as binary objects, annotations are stored in NoSQL databases, and raw data is ingested into data lakes. All this data acts as input to machine learning models. This approach is great, as it automates the refining of large-scale data.
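To make the k-means idea concrete, here is a minimal from-scratch sketch on made-up 2-D points. The data and k=2 are invented for illustration, and in practice you would reach for a library implementation (such as scikit-learn's KMeans) rather than rolling your own:

```python
import random

def kmeans(points, k, iterations=100, seed=7):
    """Minimal k-means on 2-D points: assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    random.seed(seed)
    centroids = random.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assignment step: each point joins the cluster of its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: recompute each centroid (keep the old one if a cluster empties)
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two obvious groups of made-up 2-D points
data = [(1.0, 1.1), (0.9, 1.0), (1.2, 0.8), (8.0, 8.2), (8.1, 7.9), (7.8, 8.0)]
centroids, clusters = kmeans(data, k=2)
print(sorted(len(c) for c in clusters))  # → [3, 3]
```

The same two-step loop (assign, then update) is what library implementations run, just with smarter initialization and vectorized distance computations.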
So, as the data keeps coming, the machine learning model will keep storing it depending on which category it fits.

Artificial Intelligence in data analysis

After the data storage layer comes data analysis. There are numerous machine learning algorithms that help with effective and quick data analysis in a big data architecture. One that can really step up the game is Bayes' theorem. Bayes' theorem uses stored data to 'predict' the future, which makes it a wonderful fit for big data: the more data you feed a Bayesian model, the more accurate its predictive results become. Bayes' theorem determines the probability of an event based on prior knowledge of conditions that might be related to the event. Decision trees are another machine learning method that is great for data analysis: they help you reach a particular decision by presenting all possible options and their probability of occurrence, and they are extremely easy to understand and interpret. LASSO (least absolute shrinkage and selection operator) is another algorithm that helps with data analysis. LASSO is a regression analysis method capable of performing both variable selection and regularization, which enhances the prediction accuracy and interpretability of the resulting model. LASSO regression analysis can be used to determine which of your predictors are most important. Once the analysis is done, the results are presented to other users or stakeholders. This is where the data utilization part comes into play: data informs decision making at various levels and in different departments within an organization.

Artificial intelligence takes big data to the next level

Heaps of data get generated every day by organizations all across the globe. Given such a huge amount of data, getting the right insights and results out of it can sometimes be beyond the reach of current technologies.
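Returning to the analysis layer for a moment: Bayes' theorem, mentioned above, is a single line of arithmetic, P(A|B) = P(B|A) * P(A) / P(B). As a quick illustration with invented numbers (a hypothetical fraud detector, not an example from any real system), suppose 2% of transactions are fraudulent, a model flags 90% of fraudulent transactions, and it flags 5% of all transactions overall:

```python
def bayes(p_b_given_a, p_a, p_b):
    """P(A|B) = P(B|A) * P(A) / P(B)"""
    return p_b_given_a * p_a / p_b

# Hypothetical numbers: A = "transaction is fraud", B = "model flags it".
# 2% of transactions are fraudulent (the prior), the model flags 90% of
# fraud, and it flags 5% of all transactions overall.
p_fraud_given_flag = bayes(p_b_given_a=0.90, p_a=0.02, p_b=0.05)
print(round(p_fraud_given_flag, 2))  # → 0.36
```

So even with these optimistic made-up numbers, only about a third of flagged transactions are actually fraudulent, which is exactly the kind of prior-versus-evidence reasoning a Bayesian analysis layer performs on stored data.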
Artificial intelligence takes the big data process to another level, making it easier to manage and analyze a complex array of data sources. This doesn't mean that humans will instantly lose their jobs; it simply means we can put machines to work on things that even the smartest and most hardworking humans would be incapable of. There's a saying that goes "big data is for machines; small data is for people", and it couldn't be truer.

7 AI tools mobile developers need to know
How AI is going to transform the Data Center
How Serverless computing is making AI development easier
Melisha Dsouza
25 Sep 2018
7 min read

How will AI impact job roles in Cybersecurity

"If you want a job for the next few years, work in technology. If you want a job for life, work in cybersecurity." -Aaron Levie, chief executive of cloud storage vendor Box

The field of cybersecurity faces some dire, and somewhat conflicting, predictions about the availability of qualified cybersecurity professionals over the next four or five years. The Global Information Security Workforce Study from the Center for Cyber Safety and Education predicts that the cybersecurity workforce gap will hit 1.8 million by 2022. On the flip side, the Cybersecurity Jobs Report, created by the editors of Cybersecurity Ventures, highlights that there will be 3.5 million cybersecurity job openings by 2021, with cybercrime more than tripling the number of job openings over the next five years.

Living in the midst of a digital revolution driven by AI, we can safely say that AI will be central to the dilemma of "what will become of human jobs in cybersecurity?". Tech enthusiasts believe that we will see a new generation of robots that can work alongside humans and complement or, maybe, replace them in ways not envisioned previously. AI will not only make jobs easier to accomplish, but also bring about new job roles for the masses. Let's find out how.

Will AI destroy or create jobs in Cybersecurity?

AI-driven systems have started to replace humans in numerous industries. However, that doesn't appear to be the case in cybersecurity. While automation can sometimes reduce operational errors and make it easier to scale tasks, using AI to spot cyberattacks isn't completely practical, because such systems yield a large number of false positives and lack contextual awareness, which can lead to attacks being wrongly identified or missed completely. As anyone who's ever tried to automate something knows, automated machines aren't great at dealing with exceptions that fall outside of the parameters to which they have been programmed.
Eventually, human expertise is needed to analyze potential risks or breaches and make critical decisions. It's also worth noting that completely relying on artificial intelligence to manage security only leads to more vulnerabilities - attacks could, for example, exploit the machine element in automation.

Automation can support cybersecurity professionals - but shouldn't replace them

Supported by the right tools, humans can do more. They can focus on critical tasks where an automated machine or algorithm is inappropriate. In the context of cybersecurity, artificial intelligence can do much of the 'legwork' at scale in processing and analyzing data, to help inform human decision making. Ultimately, this isn't a zero-sum game - humans and AI can work hand in hand to great effect.

AI2

Take, for instance, the project led by the experts at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Lab. AI2 (Artificial Intelligence + Analyst Intuition) is a system that combines the capabilities of AI with the intelligence of human analysts to create an adaptive cybersecurity solution that improves over time. The system uses the PatternEx machine learning platform and combs through data looking for meaningful, predefined patterns. For instance, a sudden spike in postback events on a webpage might indicate an attempt at staging a SQL injection attack. The top results are then presented to a human analyst, who separates out any false positives and flags legitimate threats. That information is then fed into a virtual analyst that uses the human input to learn and improve the system's detection rates. On future iterations, a more refined dataset is presented to the human analyst, who goes through the results and once again "teaches" the system to make better decisions. AI2 is a perfect example of man and machine complementing each other's strengths to create something even more effective.
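The AI2-style loop described above - rank events by an anomaly score, let an analyst label the top results, and feed those labels back - can be sketched in a few lines. This is a toy illustration of the pattern, not PatternEx's actual implementation; the event shapes and scoring rule are invented:

```python
# Toy human-in-the-loop triage: an (assumed) anomaly score ranks events,
# an analyst labels the top-k, and the labeled examples are collected so
# future scoring can improve. Illustrative only.

def rank_events(events, score):
    # Highest-scoring (most suspicious) events first
    return sorted(events, key=score, reverse=True)

def triage(events, score, analyst_label, k=3, labeled=None):
    labeled = [] if labeled is None else labeled
    for event in rank_events(events, score)[:k]:
        labeled.append((event, analyst_label(event)))  # human feedback
    return labeled

# Hypothetical events: (postbacks_per_min, is_attack_ground_truth)
events = [(5, False), (120, True), (8, False), (300, True), (2, False)]
score = lambda e: e[0]    # naive model: traffic spikes look suspicious
analyst = lambda e: e[1]  # stands in for the human analyst's judgment
labels = triage(events, score, analyst, k=2)
print(labels)  # the two biggest spikes, labeled by the "analyst"
```

In a real system the labeled pairs would be used to retrain the scoring model, so each iteration presents the analyst with a more refined dataset.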
It’s worth remembering that in any company that uses AI for cybersecurity, automated tools and techniques require significant algorithm training and data markup.

New cybersecurity job roles and the evolution of the job market

The bottom line of this discussion is that AI will not destroy cybersecurity jobs, but it will drastically change them. Today, the primary focus of many cybersecurity jobs is going through the hundreds of security tools available and determining which tools and techniques are most appropriate for their organization's needs. Of course, as systems move to the cloud, many of these decisions will already be made, because cloud providers will offer built-in security solutions. This means that the number of companies that need a full staff of cybersecurity experts will be drastically reduced. Instead, companies will need more individuals who understand issues like the potential business impact and risk of different projects and architectural decisions. This demands a very different set of skills and knowledge compared to the typical current cybersecurity role - it is less directly technical and will require more integration with other key business decision makers. AI can provide assistance, but it can’t offer easy answers.

Humans and AI working together

Companies concerned with cybersecurity legal compliance and effective real-world solutions should note that cybersecurity and information technology professionals are best suited for tasks such as risk analysis, policy formulation, and cyber attack response. Human intervention can help AI systems learn and evolve. Take the example of the Spain-based antivirus company Panda Security, which once had a number of people reverse-engineering malicious code and writing signatures. Today, to keep pace with overflowing amounts of data, the company would need hundreds of thousands of engineers to deal with malicious code.
Enter AI, and only a small team of engineers is required to look at more than 200,000 new malware samples per day.

Is AI going to steal cybersecurity engineers' jobs?

So what about the former employees who used to perform this job? Have they been laid off? The answer is a straight no - but they will need to upgrade their skill set. In the world of cybersecurity, AI is going to create new jobs, as it throws up new problems to be analyzed and solved. It’s going to create what are being called "new collar" jobs - something that IBM’s hiring strategy has already taken into account. Once graduates enter the IBM workforce, AI enters the equation to help them get a fast start. Even junior analysts can investigate new malware infecting employees' mobile phones: AI would quickly research the new malware, identify the characteristics reported by others, and provide a recommended course of action. This relieves analysts from the manual work of going through reams of data and lines of code - in theory, it should make their job more interesting and more fun.

Artificial intelligence and the human workforce, then, aren’t in conflict when it comes to cybersecurity. Instead, they can complement each other to create new job opportunities that will test the skills of the upcoming generation, and lead experienced professionals into new and maybe more interesting directions. It will be interesting to see how the cybersecurity workforce makes use of AI in the future.

Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
15 millions jobs in Britain at stake with AI robots set to replace humans at workforce
5 ways artificial intelligence is upgrading software engineering

Bhagyashree R
20 Sep 2018
11 min read

7 AI tools mobile developers need to know

Advancements in artificial intelligence (AI) and machine learning have enabled the evolution of the mobile applications we see today. With AI, apps are now capable of recognizing speech, images, and gestures, and translating voices with extraordinary success rates. With a number of apps hitting the app stores, it is crucial that they stand apart from competitors by meeting the rising standards of consumers. To stay relevant, it is important that mobile developers keep up with these advancements in artificial intelligence.

As AI and machine learning become increasingly popular, there is a growing selection of tools and software available for developers to build their apps with. These cloud-based and device-based artificial intelligence tools provide developers a way to power their apps with unique features. In this article, we will look at some of these tools and how app developers are using them in their apps.

Caffe2 - A flexible deep learning framework

Source: Qualcomm

Caffe2 is a lightweight, modular, scalable deep learning framework developed by Facebook. It is a successor to Caffe, a project started at the University of California, Berkeley. It is primarily built for production use cases and mobile development, and offers developers greater flexibility for building high-performance products. Caffe2 aims to provide an easy way to experiment with deep learning and leverage community contributions of new models and algorithms. It is cross-platform and integrates with Visual Studio, Android Studio, and Xcode for mobile development. Its core C++ libraries provide speed and portability, while its Python and C++ APIs make it easy for you to prototype, train, and deploy your models. It utilizes GPUs when they are available, and is fine-tuned to take full advantage of the NVIDIA GPU deep learning platform. To deliver high performance, Caffe2 uses some of NVIDIA's deep learning SDK libraries, such as cuDNN, cuBLAS, and NCCL.
Functionalities
Enable automation
Image processing
Perform object detection
Statistical and mathematical operations
Supports distributed training, enabling quick scaling up or down

Applications
Facebook is using Caffe2 to help their developers and researchers train large machine learning models and deliver AI on mobile devices. Using Caffe2, they significantly improved the efficiency and quality of their machine translation systems. As a result, all machine translation models at Facebook have been transitioned from phrase-based systems to neural models for all languages.

OpenCV - Give the power of vision to your apps

Source: AndroidPub

OpenCV, short for Open Source Computer Vision Library, is a collection of programming functions for real-time computer vision and machine learning. It has C++, Python, and Java interfaces and supports Windows, Linux, Mac OS, iOS, and Android. It also supports the deep learning frameworks TensorFlow and PyTorch. Written natively in C/C++, the library can take advantage of multi-core processing. OpenCV aims to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. The library consists of more than 2500 optimized algorithms, including both classic and state-of-the-art computer vision algorithms.

Functionalities
These algorithms can be used for the following:
Detect and recognize faces
Identify objects
Classify human actions in videos
Track camera movements and moving objects
Extract 3D models of objects
Produce 3D point clouds from stereo cameras
Stitch images together to produce a high-resolution image of an entire scene
Find similar images from an image database

Applications
Plickers is an assessment tool that lets you poll your class for free, without the need for student devices. It uses OpenCV as its graphics and video SDK.
You just have to give each student a card called a paper clicker, and use your iPhone/iPad to scan them to do instant checks-for-understanding, exit tickets, and impromptu polls.

Also check out:
FastCV
BoofCV

TensorFlow Lite and Mobile - An Open Source Machine Learning Framework for Everyone

Source: YouTube

TensorFlow is an open source software library for building machine learning models. Its flexible architecture allows easy model deployment across a variety of platforms, ranging from desktops to mobile and edge devices. Currently, TensorFlow provides two solutions for deploying machine learning models on mobile devices: TensorFlow Mobile and TensorFlow Lite. TensorFlow Lite is an improved version of TensorFlow Mobile, offering better performance and smaller app size. Additionally, it has very few dependencies compared to TensorFlow Mobile, so it can be built and hosted on simpler, more constrained device scenarios. TensorFlow Lite also supports hardware acceleration with the Android Neural Networks API. The catch is that TensorFlow Lite is currently in developer preview and only covers a limited set of operators, so to develop production-ready mobile TensorFlow apps, it is recommended to use TensorFlow Mobile. TensorFlow Mobile also supports adding custom operators that are not included by default, which is a requirement for the models behind many different AI apps. Although TensorFlow Lite is in developer preview, its future releases "will greatly simplify the developer experience of targeting a model for small devices". It is also likely to replace TensorFlow Mobile, or at least overcome its current limitations.

Functionalities
Speech recognition
Image recognition
Object localization
Gesture recognition
Optical character recognition
Translation
Text classification
Voice synthesis

Applications
The Alibaba tech team is using TensorFlow Lite to implement and optimize speaker recognition on the client side.
This addresses many of the common issues of the server-side model, such as poor network connectivity, extended latency, and poor user experience. Google uses TensorFlow for advanced machine learning models, including Google Translate and RankBrain.

Core ML - Integrate machine learning in your iOS apps

Source: AppleToolBox

Core ML is a machine learning framework that can be used to integrate machine learning models into your iOS apps. It supports Vision for image analysis, Natural Language for natural language processing, and GameplayKit for evaluating learned decision trees. Core ML is built on top of the following low-level APIs, providing a simple higher-level abstraction over them:
Accelerate optimizes large-scale mathematical computations and image calculations for high performance.
Basic neural network subroutines (BNNS) provides a collection of functions with which you can implement and run neural networks trained with previously obtained data.
Metal Performance Shaders is a collection of highly optimized compute and graphics shaders that are designed to integrate easily and efficiently into your Metal app.

To train and deploy custom models you can also use the Create ML framework. It is a machine learning framework in Swift, which can be used to train models using native Apple technologies like Swift, Xcode, and other Apple frameworks.

Functionalities
Face and face landmark detection
Text detection
Barcode recognition
Image registration
Language and script identification
Design games with functional and reusable architecture

Applications
Lumina is a camera library designed in Swift for easily integrating Core ML models - as well as image streaming, QR/barcode detection, and many other features.

ML Kit by Google - Seamlessly build machine learning into your apps

Source: Google

ML Kit is a cross-platform suite of machine learning tools for Google's Firebase mobile development platform.
It brings Google's ML technologies, such as the Google Cloud Vision API, TensorFlow Lite, and the Android Neural Networks API, together in a single SDK, enabling you to apply ML techniques to your apps easily. You can leverage its ready-to-use APIs for common mobile use cases such as recognizing text, detecting faces, identifying landmarks, scanning barcodes, and labeling images. If these APIs don't cover your machine learning problem, you can use your own existing TensorFlow Lite models: you just have to upload your model to Firebase, and ML Kit will take care of the hosting and serving. These APIs can run on-device or in the cloud. The on-device APIs process your data quickly and work even when there's no network connection, while the cloud-based APIs leverage the power of Google Cloud Platform's machine learning technology to give you an even higher level of accuracy.

Functionalities
Automate tedious data entry for credit cards, receipts, and business cards, or help organize photos.
Extract text from documents, which you can use to increase accessibility or translate documents.
Real-time face detection can be used in applications like video chat or games that respond to the player's expressions.
Using image labeling you can add capabilities such as content moderation and automatic metadata generation.

Applications
A popular calorie counter app, Lose It!, uses the Google ML Kit Text Recognition API to quickly capture nutrition information, ensuring it's easy to record and extremely accurate. PicsArt uses ML Kit custom model APIs to provide 1000+ TensorFlow-powered effects, enabling millions of users to create amazing images with their mobile phones.

Dialogflow - Give users new ways to interact with your product

Source: Medium

Dialogflow is a Natural Language Understanding (NLU) platform that makes it easy for developers to design and integrate conversational user interfaces into mobile apps, web applications, devices, and bots.
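On the developer side, a Dialogflow fulfillment webhook ultimately just returns structured JSON. The sketch below shows the general shape, assuming the v2-style queryResult/fulfillmentText fields; the order.pizza intent name is hypothetical, and the HTTP server wiring is omitted:

```python
import json

# Minimal sketch of a Dialogflow-style webhook handler: inspect the
# intent the agent matched and return a fulfillment response.
def handle_webhook(request_body: dict) -> dict:
    intent = (request_body.get("queryResult", {})
                          .get("intent", {})
                          .get("displayName", ""))
    if intent == "order.pizza":  # hypothetical intent name
        reply = "Sure - what size pizza would you like?"
    else:
        reply = "Sorry, I didn't catch that."
    return {"fulfillmentText": reply}

# Simulated request as the agent would POST it to your endpoint
request = {"queryResult": {"intent": {"displayName": "order.pizza"}}}
print(json.dumps(handle_webhook(request)))
```

The agent takes care of the language understanding; your code only ever sees the structured intent and returns the structured reply.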
You can integrate it with Alexa, Cortana, Facebook Messenger, and other platforms your users are on. With Dialogflow you can build interfaces, such as chatbots and conversational IVR, that enable natural and rich interactions between your users and your business. It provides this human-like interaction with the help of agents. Agents can understand the vast and varied nuances of human language and translate them to the standard, structured meaning that your apps and services can understand. Dialogflow comes in two editions: Dialogflow Standard Edition and Dialogflow Enterprise Edition. Dialogflow Enterprise Edition users have access to Google Cloud Support and a service level agreement (SLA) for production deployments.

Functionalities
Provide customer support
One-click integration on 14+ platforms
Supports multilingual responses
Improve NLU quality by training with negative examples
Debug using more insights and diagnostics

Applications
Domino's simplified the process of ordering pizza using Dialogflow's conversational technology. Domino's leveraged its large customer service knowledge base and Dialogflow's NLU capabilities to build both simple customer interactions and increasingly complex ordering scenarios.

Also check out:
Wit.ai
Rasa NLU

Microsoft Cognitive Services - Make your apps see, hear, speak, understand and interpret your user needs

Source: Neel Bhatt

Cognitive Services is a collection of APIs, SDKs, and services that enables developers to easily add cognitive features to their applications, such as emotion and video detection, and facial, speech, and vision recognition, among others. You need not be an expert in data science to make your systems more intelligent and engaging. The pre-built services come with high-quality RESTful intelligent APIs for the following:

Vision: Make your apps identify and analyze content within images and videos.
It provides capabilities such as image classification, optical character recognition in images, face detection, person identification, and emotion identification.
Speech: Integrate speech processing capabilities into your app or services, such as text-to-speech, speech-to-text, speaker recognition, and speech translation.
Language: Help your application or service understand the meaning of unstructured text or the intent behind a speaker's utterances. It comes with capabilities such as text sentiment analysis, key phrase extraction, and automated and customizable text translation.
Knowledge: Create knowledge-rich resources that can be integrated into apps and services. It provides features such as QnA extraction from unstructured text, knowledge base creation from collections of Q&As, and semantic matching for knowledge bases.
Search: Using the Search APIs you can find exactly what you are looking for across billions of web pages. They provide features like ad-free, safe, location-aware web search, Bing visual search, custom search engine creation, and many more.

Applications
To safeguard against fraud, Uber uses the Face API, part of Microsoft Cognitive Services, to help ensure the driver using the app matches the account on file. Cardinal Blue developed PicCollage, a popular mobile app that allows users to combine photos, videos, captions, stickers, and special effects to create unique collages.

Also check out:
AWS machine learning services
IBM Watson

These were some of the tools that will help you integrate intelligence into your apps. These libraries make it easier to add capabilities like speech recognition, natural language processing, and computer vision, giving users the wow moment of accomplishing something that wasn't quite possible before. Along with choosing the right AI tool, you must also consider other factors that can affect your app's performance.
These factors include the accuracy of your machine learning model (which can be affected by bias and variance), using the correct datasets for training, seamless user interaction, and resource optimization, among others. While building any intelligent app, it is also important to keep in mind that the AI in your app should solve a real problem, not exist just because it is cool. Thinking from the user's perspective will allow you to assess the importance of a particular problem. A great AI app will not just help users do something faster, but enable them to do something they couldn't do before. With the growing popularity of intelligent apps and the need to speed up their development, many companies, from huge tech giants to startups, are providing AI solutions. In the future we will definitely see more developer tools coming to market, making AI in apps the norm.

6 most commonly used Java Machine learning libraries
5 ways artificial intelligence is upgrading software engineering
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence