AI Blueprints: How to build and deploy AI business projects

By Dr. Joshua Eckroth, Eric Schoen
Book | Dec 2018 | 250 pages | 1st Edition

Chapter 1. The AI Workflow

Like so many technologies before and surely an infinite progression of technologies to come, artificial intelligence (AI) is the promising idea du jour. Due to recent advances in hardware and learning algorithms, new commercial-grade software platforms, and a proliferation of large datasets for training, any software developer can build an intelligent system that sees (for example, face recognition), listens (for example, writing emails by voice), and understands (for example, asking Amazon's Alexa or Google Home to set a reminder). With free off-the-shelf software, any company can have its own army of chatbots, automated sales agents customized to each potential customer, and a team of tireless web bots that scan the media for mentions and photos and videos of a company's products, among other use cases. All of these solutions may be built by regular software developers in regular companies, not just researchers in well-funded institutions.

But any seasoned professional knows that the risk associated with technology is proportional to its newness, complexity, and the number of exclamation points in its marketing copy. Tried-and-true techniques are low risk but might keep a company from taking advantage of new opportunities. AI, like any promise of intelligent automation, must be built and deployed with a specific business outcome in mind. One must have a detailed plan for integrating it into existing workflows and procedures, and should regularly monitor it to ensure the context in which the AI was deployed does not gradually or dramatically change, rendering the AI either useless or, worse, a rogue agent run amok.

This book combines practical AI techniques with advice and strategies for successful deployment. The projects are aimed at small organizations that want to explore new uses of AI in their organizations. Each project is developed to work in a realistic environment and solve a useful task. While virtually all other books, videos, courses, and blogs focus solely on AI techniques, this book helps the reader ensure that the AI makes sense and continues to work effectively.

In this first chapter, we're going to cover:

  • The role of AI in software systems

  • The details of a unique AI workflow that guides the development

  • A brief overview of this book's coding projects

AI isn't everything


"The passion caused by the great and sublime in nature, when those causes operate most powerfully, is astonishment; and astonishment is that state of the soul, in which all its motions are suspended, with some degree of horror. [...] When danger or pain press too nearly, they are incapable of giving any delight and are simply terrible; but at certain distances, and with certain modifications, they may be, and they are delightful, as we every day experience."

– Edmund Burke (A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful, 1757)

Edmund Burke's careful study of the distinctions between what is aesthetically pleasing or beautiful versus what is compelling, astonishing, frightening, and sublime is an appropriate metaphor for the promises and the fears engendered by AI. At certain distances, that is, with the right design and careful deployment, AI has that quality that makes one marvel at the machine. If borne from a fear of being left behind or deployed haphazardly, if developed to solve a problem that does not exist, AI is a fool's game that can severely damage a company or brand.

Curiously, some of our top thinkers and entrepreneurs appear to be anxious about the careful balance between delight and horror. They have cautioned the world:

"Success in creating AI would be the biggest event in human history […] Unfortunately; it might also be the last."

– Stephen Hawking (https://futurism.com/hawking-creating-ai-could-be-the-biggest-event-in-the-history-of-our-civilization/)

"I think we should be very careful about artificial intelligence. If I were to guess at what our biggest existential threat is, it's probably that."

– Elon Musk (https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat)

"First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

– Bill Gates (https://www.reddit.com/r/IAmA/comments/2tzjp7/hi_reddit_im_bill_gates_and_im_back_for_my_third/co3r3g8/)

The fear of AI seems to be rooted in a fear of loss of control. Once the AI is "smart enough," it is thought that the AI will no longer obey our commands. Or it will make its own disastrous decisions and not inform us. Or it will hide critical information from us and make us subjects to its all-powerful will.

But these concerns may be flipped around to benefits: a smart AI can inform us of when we are making a bad decision and prevent embarrassments or catastrophes; it can automate tedious tasks such as making cold calls to open a sales channel; it can aggregate, summarize, and highlight just the right information from a deluge of data to help us make more informed and appropriate decisions. In short, good AI may be distinguished from bad AI by looking at its design, whether there are bugs or faults, its context of use in terms of correct inputs and sanity checks on its outputs, and the company's continuous evaluation methodology to keep track of the performance of the AI after it is deployed. This book aims to show readers how to build good AI by following these practices.

Although this book includes detailed discussion and code for a variety of AI techniques and use cases, the AI component of a larger system is usually very small. This book introduces planning and constraint solving, natural language processing (NLP), sentiment analysis, recommendation engines, anomaly detection, and neural networks. Each of these techniques is sufficiently exciting and complex to warrant textbooks, PhDs, and conferences dedicated to their elucidation and study. But they are a very small part of any deployed software system.

Consider the following diagram, showing some everyday concerns of a modern software developer:

Although probably the most interesting part of a project, the AI component is often the least troublesome part of software development. As we will see throughout this book, AI techniques are often contained in a single project module or class or function. The performance of the AI component depends almost entirely on the appropriateness of the inputs and correct handling and cleanup of the outputs.

For example, an AI component that determines the sentiment, positive or negative, of a tweet or product review is relatively straightforward to implement, particularly with today's AI software libraries (though the code in the library is quite complex). On the other hand, acquiring the tweets or reviews (likely involving authentication and rate limiting), formatting and cleaning the text (especially handling odd Unicode characters and emojis), and saving the output of sentiment analysis into a database for summarization and real-time visualization takes far more work than the "intelligent" part of the whole process.
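
To make that ratio concrete, here is a minimal, hypothetical sketch of such a pipeline in Python. It uses NLTK's VADER analyzer purely as a stand-in sentiment model (the book's own feedback project in Chapter 3 uses a dictionary-based approach and CoreNLP instead) and stores results in SQLite; notice that the "intelligent" part is a single line, while cleaning and persistence account for most of the code.

```python
# A minimal, hypothetical sketch: the "intelligent" step is one library call,
# while cleaning and persistence make up most of the code. NLTK's VADER
# analyzer stands in for whatever sentiment model you choose (Chapter 3 uses
# a dictionary-based approach and CoreNLP instead).
import re
import sqlite3
import unicodedata

from nltk.sentiment.vader import SentimentIntensityAnalyzer  # pip install nltk; nltk.download('vader_lexicon')

analyzer = SentimentIntensityAnalyzer()

def clean(text: str) -> str:
    """Normalize Unicode, strip URLs and @mentions, collapse whitespace."""
    text = unicodedata.normalize("NFKC", text)
    text = re.sub(r"https?://\S+|@\w+", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def score(text: str) -> float:
    """The AI part: a single call returning a sentiment score in [-1, 1]."""
    return analyzer.polarity_scores(text)["compound"]

def store(conn: sqlite3.Connection, source_id: str, text: str, sentiment: float) -> None:
    """Persist results for later summarization and visualization."""
    conn.execute("INSERT INTO sentiment (source_id, text, score) VALUES (?, ?, ?)",
                 (source_id, text, sentiment))
    conn.commit()

conn = sqlite3.connect("feedback.db")
conn.execute("CREATE TABLE IF NOT EXISTS sentiment (source_id TEXT, text TEXT, score REAL)")

# In practice these texts would come from the Twitter or Reddit APIs, with
# authentication and rate limiting handled by yet more surrounding code.
for source_id, raw in [("1", "Love this product 😍"), ("2", "Worst purchase ever.")]:
    text = clean(raw)
    store(conn, source_id, text, score(text))
```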

But the AI is the most interesting part. Without it, there is no insight and no automation. And particularly in today's hyped environment with myriad tools and techniques and best practices, it is easy to get this part wrong. This book develops an AI workflow to help ensure success in building and deploying AI.

The AI workflow


Building and deploying AI should follow a workflow that respects the fact that the AI component fits in the larger context of pre-existing processes and use cases. The AI workflow may be characterized as a four-step process:

  1. Characterize the problem, goal, and business case

  2. Develop a method for solving the problem

  3. Design a deployment strategy that integrates the AI component into existing workflows

  4. Design and implement a continuous evaluation methodology

To help you ensure the AI workflow is followed, we offer a checklist of considerations and questions to ask during each step of the workflow.

Characterize the problem

Given the excitement around AI, there is a risk of adding AI technology to a platform just for the sake of not missing out on the next big thing. However, AI technology is usually one of the more complex components of a system, hence the hype surrounding AI and the promise of advanced new capabilities it supposedly brings. Due to its complexity, AI introduces potentially significant technical debt, that is, code complexity that is hard to manage and becomes even harder to eliminate. Often, code must be written to massage inputs to the AI into a form that meets its assumptions and constraints, and to correct outputs for the AI's mistakes.

Engineers from Google published an article in 2014 titled Machine Learning: The High-Interest Credit Card of Technical Debt (https://ai.google/research/pubs/pub43146), in which they write:

In this paper, we focus on the system-level interaction between machine learning code and larger systems as an area where hidden technical debt may rapidly accumulate. At a system level, a machine learning model may subtly erode abstraction boundaries. It may be tempting to re-use input signals in ways that create unintended tight coupling of otherwise disjoint systems. Machine learning packages may often be treated as black boxes, resulting in large masses of "glue code" or calibration layers that can lock in assumptions. Changes in the external world may make models or input signals change behavior in unintended ways, ratcheting up maintenance cost and the burden of any debt. Even monitoring that the system as a whole is operating as intended may be difficult without careful design.

Machine Learning: The High-Interest Credit Card of Technical Debt, D. Sculley, G. Holt, D. Golovin, E. Davydov, T. Phillips, D. Ebner, V. Chaudhary, and M. Young, presented at the SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop), 2014

They proceed to document several varieties of technical debt that often come with AI and machine learning (ML) technology and suggest mitigations that complement those covered in our AI workflow.

AI should address a business problem that is not solvable by conventional means. The risk of technical debt is too high (higher than many other kinds of software practices) to consider adding AI technology without a clear purpose.

The problem being addressed with AI should be known to be solvable. For example, until recent advances found in Amazon Echo and Google Home, speech recognition in a large and noisy room was not possible. A few years ago, it would have been foolish to attempt to build a product that required this capability.

The AI component should be well-defined and bounded. It should do one or a few tasks, and it should make use of established algorithms, such as those detailed in the following chapters. The AI should not be treated as an amorphous intelligent concierge that solves any problem, specified or unspecified. For example, our chatbot case study in Chapter 7, A Blueprint for Understanding Queries and Generating Responses, is intentionally designed to handle a small subset of possible questions from users. A chatbot that attempts to answer all questions, perhaps with some kind of continuous learning based on the conversations users have with it, is a chatbot that has a high chance of embarrassing its creators, as was the case with Microsoft's Tay chatbot (https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/).

In summary, the AI should solve a business problem, it should use established techniques that are known to be able to solve the problem, and it should have a well-defined and bounded role within the larger system.

Checklist

  • The AI solves a clearly stated business problem

  • The problem is known to be solvable by AI

  • The AI uses established techniques

  • The role of the AI within the larger system is clearly defined and bounded

Develop a method

After characterizing the problem to be solved, a method for solving the problem must be found or developed. In most cases, a business should not attempt to engage in a greenfield research project developing a novel way to solve the problem. Such research projects carry significant risk since an effective solution is not guaranteed within a reasonable time. Instead, one should prefer existing techniques.

This book covers several existing and proven techniques for a variety of tasks. Many of these techniques, such as planning engines, natural language part-of-speech tagging, and anomaly detection, are much less interesting to the AI research community than newer methods such as convolutional neural networks (CNNs). But these older techniques are still quite useful. These techniques have "disappeared in the fabric," to use a phrase that Dr. Reid Smith, Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), and I wrote in a 2017 article for AI Magazine, Building AI Applications: Yesterday, Today, and Tomorrow (https://www.aaai.org/ojs/index.php/aimagazine/article/view/2709; R. G. Smith and J. Eckroth, AI Magazine, vol. 38, no. 1, pp. 6–22, 2017). What is sometimes called the "AI Effect" is the notion that whatever has become commonplace is no longer AI but rather everyday software engineering (https://en.wikipedia.org/wiki/AI_effect). We should measure an AI technique's maturity by how "boring" it is perceived to be; commonplace heuristic search and planning are good examples. Chapter 2, A Blueprint for Planning Cloud Infrastructure, solves a real-world problem with this kind of boring but mature AI.

Finally, when developing a method, one should also take care to identify computation and data requirements. Some methods, such as deep learning, require a significant amount of both. In fact, deep learning is virtually impossible without some high-end graphics processing units (GPU) and thousands to millions of examples for training. Often, open source libraries such as CoreNLP will include highly accurate pre-trained models so the challenge of acquiring sufficient data for training purposes can be avoided. In Chapter 5, A Blueprint for Detecting Your Logo in Social Media, we demonstrate a means of customizing a pre-trained model for a custom use case with what is known as "transfer learning."
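
As an illustration of the transfer-learning idea mentioned above, here is a minimal sketch using tf.keras and a pre-trained Xception backbone. The data directory, the two-class head, and the training settings are hypothetical placeholders; Chapter 5 develops the real logo detector.

```python
# A minimal transfer-learning sketch with tf.keras and a pre-trained Xception
# backbone. The data directory and the two-class head ("logo" vs. "no logo")
# are hypothetical placeholders; Chapter 5 develops the real detector.
import tensorflow as tf

base = tf.keras.applications.Xception(
    weights="imagenet",     # reuse features learned on ImageNet
    include_top=False,      # drop the original 1000-class classifier
    pooling="avg",
    input_shape=(299, 299, 3),
)
base.trainable = False      # freeze the pre-trained layers; train only the new head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # Xception expects inputs in [-1, 1]
    base,
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A small custom training set suffices because only the new head is trained.
train = tf.keras.utils.image_dataset_from_directory(
    "data/logos", image_size=(299, 299), batch_size=32)
model.fit(train, epochs=5)
```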

Checklist

  • The method does not require significant new research

  • The method is relatively mature and commonplace

  • The necessary hardware resources and training data are available

Design a deployment strategy

Even the most intelligent AI may never be used. It is rare for people to change their habits even if there is an advantage in doing so. Finding a way to integrate a new AI tool into an existing workflow is just as important to the overall AI workflow as making a business case for the AI and developing it. Dr. Smith and I wrote:

Perhaps the most important lesson learned by AI system builders is that success depends on integrating into existing workflows — the human context of actual use. It is rare to replace an existing workflow completely. Thus, the application must play nicely with the other tools that people use. Put another way, ease of use delivered by the human interface is the "license to operate." Unless designers get that part right, people may not ever see the AI power under the hood; they will have already walked away.

Building AI Applications: Yesterday, Today, and Tomorrow, R. G. Smith and J. Eckroth, AI Magazine, vol. 38, no. 1, Page 16, 2017

Numerous examples of bad integrations exist. Consider Microsoft's "Clippy," a cartoon character that attempted to help users write letters and spell-check their documents. It was eventually removed from Microsoft Office (https://www.theatlantic.com/technology/archive/2015/06/clippy-the-microsoft-office-assistant-is-the-patriarchys-fault/396653/). While its assistance may have been useful, the problem seemed to be that Clippy was socially awkward, in a sense. Clippy asked if the user would like help at nearly all the wrong times:

Clippy suffered the dreaded "optimization for first-time use" problem. That is, the very first time you were composing a letter with Word, you might possibly be grateful for advice about how to use various letter-formatting features. The next billion times you typed "Dear..." and saw Clippy pop up, you wanted to scream.

(https://www.theatlantic.com/technology/archive/2008/04/-quot-clippy-quot-update-now-with-organizational-anthropology/8006/)

In a more recent example, most smartphone users do not use Apple Siri or Google Home, especially not in public (What can I help you with?: Infrequent users' experiences of intelligent personal assistants, B. R. Cowan, N. Pantidi, D. Coyle, K. Morrissey, P. Clarke, S. Al-Shehri, D. Earley, and N. Bandeira, presented at the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services, New York, New York, USA, 2017, pp. 43–12). Changing social norms in order to increase adoption of a product is a significant marketing challenge. On the other hand, to "google" something, which clearly involves AI, is a sufficiently entrenched activity that it is defined as a verb in the Oxford English Dictionary ("Google, v.2'" OED Online, January 2018, Oxford University Press, http://www.oed.com/view/Entry/261961?rskey=yiwSeP&result=2&isAdvanced=false). Face recognition and automatic tagging on Facebook have been used by millions of people. And we click product recommendations on Amazon and other storefronts without a second thought. We have many everyday workflows that have evolved to include AI.

As a general rule, it is easier to ask users to make a small change to their habits if the payoff is large; and it is hard or impossible to ask users to make a large change to their habits or workflow if the payoff is small.

In addition to considering the user experience, effective deployment of AI also requires that one considers its placement within a larger system. What kinds of inputs are provided to the AI? Are they always in the right format? Does the AI have assumptions about these inputs that might not be met in extreme circumstances? Likewise, what kinds of outputs does the AI produce? Are these outputs always within established bounds? Is anything automated based on these outputs? Will an email be sent to customers based on the AI's decisions? Will a missile be fired?

As discussed in the preceding section (AI isn't everything), a significant amount of code must often be written around the AI component. The AI probably has strong assumptions about the kind of data it is receiving. For example, CNNs can only work on images of a specific, fixed size – larger or smaller images must be squished or stretched first. Most NLP techniques assume the text is written in a particular language; running part-of-speech tagging with an English model on a French text will produce bogus results.
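
A hedged sketch of what guarding those input assumptions can look like in practice: images are forced to the fixed shape a CNN expects, and text in the wrong language is filtered out before an English-only NLP model sees it. The langdetect library and the 299x299 input size are illustrative assumptions, not choices made by the book.

```python
# Guarding the AI component's input assumptions: resize images to the fixed
# shape the CNN was trained on, and skip text the English-only NLP pipeline
# would mangle. The langdetect dependency is an assumption for illustration.
from PIL import Image          # pip install pillow
from langdetect import detect  # pip install langdetect

CNN_INPUT_SIZE = (299, 299)    # hypothetical fixed input size of the network

def prepare_image(path: str) -> Image.Image:
    """Force every incoming image to the size the network expects."""
    return Image.open(path).convert("RGB").resize(CNN_INPUT_SIZE)

def accepts_text(text: str, required_language: str = "en") -> bool:
    """Reject input an English-only part-of-speech tagger would mishandle."""
    try:
        return detect(text) == required_language
    except Exception:          # langdetect raises on empty/undetectable input
        return False
```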

If the AI gets bad input, or even if the AI gets good input, the results might be bad. What kind of checks are performed on the output to ensure the AI does not make your company look foolish? This question is particularly relevant if the AI's output feeds into an automated process such as sending alerts and emails, adding metadata to photos or posts, or even suggesting products. Most AI will connect to some automated procedure since the value added by AI is usually focused on its ability to automate some task. Ultimately, developers will need to ensure that the AI's outputs are accurate; this is addressed in the final step in the workflow, Design and implement a continuous evaluation. First, however, we provide a checklist for designing a deployment strategy.
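
On the output side, a similar guard can sit between the AI and any automated action. The sketch below is illustrative only: the score range, confidence threshold, and the send_alert_email/queue_for_review callbacks are hypothetical, but the pattern of routing out-of-bounds or low-confidence results to a person rather than an automated process is the point.

```python
# Bounding an AI output before it drives automation. Thresholds and callback
# names are illustrative assumptions, not values from the book.
from dataclasses import dataclass

@dataclass
class SentimentResult:
    score: float        # model contract: expected in [-1.0, 1.0]
    confidence: float   # model contract: expected in [0.0, 1.0]

CONFIDENCE_FLOOR = 0.8  # hypothetical threshold

def handle(result: SentimentResult, send_alert_email, queue_for_review) -> None:
    """Gate the automated action behind sanity checks on the AI's output."""
    if not (-1.0 <= result.score <= 1.0):
        # The model violated its own contract: never automate on this output.
        queue_for_review(result, reason="score out of bounds")
    elif result.confidence < CONFIDENCE_FLOOR:
        queue_for_review(result, reason="low confidence")
    else:
        send_alert_email(result)  # the automated path, now bounded
```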

Checklist

  • Plan a user experience, if the AI is user-facing, that fits into an existing habit or workflow, requiring very little change by the user

  • Ensure the AI adds significant value with minimal barriers to adoption

  • List the AI's assumptions or requirements about the nature (format, size, characteristics) of its inputs and outputs

  • Articulate boundary conditions on the AI's inputs and outputs, and develop a plan to either ignore or correct out-of-bounds and bogus inputs and outputs

  • List all the ways the AI's outputs are used to automate some task, and the potential impact bad output may have on that task, on a user's experience, and on the company's reputation

Design and implement a continuous evaluation

The fourth and final stage of the workflow concerns the AI after it is deployed. Presumably, during development, the AI has been trained and tested on a broad range of realistic inputs and shown to perform admirably. And then it is deployed. Why should anything change?

No large software, and certainly no AI system, has ever been tested on all possible inputs. Developing "adversarial" inputs, that is, inputs designed to break an AI system, is an entire subfield of AI with its own researchers and publications (https://en.wikipedia.org/wiki/Adversarial_machine_learning). Adversarial inputs showcase the limits of some of our AI systems and help us build more robust software.

However, even in non-adversarial cases, AI systems can degrade or break in various ways. According to The Guardian, YouTube's recommendation engine, which suggests the next video to watch, has begun showing extremist content next to kid-friendly videos. Advertisers for the benign videos are reasonably upset about unexpected brand associations with such content (https://www.theguardian.com/technology/2017/mar/25/google-youtube-advertising-extremist-content-att-verizon). Particularly when the AI is trained using a large corpus of example data, the AI can pick out statistical regularities in the data that do not accurately reflect our society. For example, some AI has been shown to be racist (https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses). The quality of the training set is usually to blame in these situations. When mature adults examine the data, they are able to bring a lifetime of experience to their interpretation. They understand that the data can be skewed due to various reasons based on a host of possible systemic biases in data collection, among other factors. However, feeding this data into an AI may well produce a kind of sociopathic AI that has no lifetime of experience to fall back on. Rather, the AI trusts the data with absolute assuredness. The data is all the AI knows unless additional checks and balances are added to the code.

The environments in which AI is deployed almost always change. Any AI deployed to humans will be subjected to an environment in constant evolution. The kinds of words used by people leaving product reviews will change over time ("far out," "awesome," "lit," and so on (https://blog.oxforddictionaries.com/2014/05/07/18-awesome-ways-say-awesome/)), as will their syntax (that is, Unicode smilies, emojis, "meme" GIFs). The kinds of photos people take of themselves and each other have changed from portraits, often taken by a bystander, to selfies, thus dramatically altering the perspective and orientation of faces in photos.

Any AI that becomes part of a person's workflow will be manipulated by that person. The user will attempt to understand how the AI behaves and then will gradually adjust the way they use the AI in order to get maximum benefit from it.

Fred Brooks, manager of IBM's System/360 effort and winner of the Turing Award, observed in his book The Mythical Man-Month that systems, just before deployment, exist in a metastable modality – any change to their operating environment or inputs could cause the system to collapse to a less functional state:

"Systems program building is an entropy-decreasing process, hence inherently metastable. Program maintenance is an entropy-increasing process, and even its most skillful execution only delays the subsidence of the system into unfixable obsolescence."

The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition (2nd ed.), F. P. Brooks, Jr., Addison-Wesley, 1995, Page 123

Perhaps there is no escape from the inevitable obsolescence of every system. However, one could presumably delay this fate by continuously monitoring the system after it is deployed and revise or retrain the system in light of new data. This new data can be acquired from the system's actual operating environment rather than its expected operating environment, which is all one knows before it is deployed.

The following checklist may help system builders to design and implement a continuous evaluation methodology.

Checklist

  • Define performance metrics. These are often defined during system building and may be reused for continuous evaluation.

  • Write scripts that automate system testing according to these metrics. Create "regression" tests to ensure the cases the system solved adequately before are still solved adequately in the future.

  • Keep logs of all AI inputs and outputs if the data size is not unbearable, or keep aggregate statistics if it is. Define alert conditions to detect degrading performance; for example, detect whether the AI system has started producing the same output repeatedly (a minimal sketch of this idea follows the checklist).

  • Consider asking for feedback from users and aggregate this feedback in a place that is often reviewed. Read Chapter 3, A Blueprint for Making Sense of Feedback, for a smart way to handle feedback.
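
The following is a minimal sketch of the logging and alerting items above: every input/output pair is logged, and an alert fires when a rolling window of recent outputs is dominated by a single value, a common symptom of a degraded model. The window size and repeat threshold are illustrative assumptions.

```python
# A minimal sketch of the logging and alerting items above. Window size and
# repeat threshold are illustrative assumptions.
import json
import logging
from collections import Counter, deque

logging.basicConfig(filename="ai_io.log", level=logging.INFO)
recent_outputs = deque(maxlen=500)   # rolling window of recent predictions

def log_and_check(model_input, model_output, alert, repeat_fraction=0.95):
    """Log every input/output pair and alert if one output dominates recently."""
    logging.info(json.dumps({"input": str(model_input), "output": str(model_output)}))
    recent_outputs.append(str(model_output))
    if len(recent_outputs) == recent_outputs.maxlen:
        _, count = Counter(recent_outputs).most_common(1)[0]
        if count / len(recent_outputs) >= repeat_fraction:
            alert("AI is producing nearly identical outputs; possible degradation")
```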

Overview of the chapters


The projects detailed in chapters 2 through 7 showcase multiple AI use cases and techniques. In each chapter, the AI workflow is addressed in the context of the specific project. The reader is encouraged not only to practice with the techniques and learn new coding skills but also to critically consider how the AI workflow might apply to new situations that are beyond the scope of this book.

Chapter 2, A Blueprint for Planning Cloud Infrastructure, shows how AI can be used in a planning engine to provide suggestions for optimal allocation of cloud computing resources. Often, AI and ML require significant computational time for training or processing. Today, cloud computing is a cost-effective option for these large computing jobs. Of course, cloud computing carries a certain cost in money and time. Depending on the job, tasks may be run in parallel on multiple cloud instances, thus significantly reducing time, but possibly increasing costs depending on how long the tasks take to start up on each instance.
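
To make the time-versus-cost tension concrete before diving into Chapter 2, here is a back-of-the-envelope sketch in Python. The hourly price, startup overhead, and total workload are invented numbers for illustration only; the chapter itself solves the real problem with OptaPlanner rather than this naive even split.

```python
# A back-of-the-envelope sketch of the time/cost trade-off. All numbers are
# invented for illustration; Chapter 2 solves the real problem with OptaPlanner.
TOTAL_TASK_HOURS = 100.0        # serial compute time the job needs
STARTUP_HOURS = 0.25            # provisioning overhead paid per instance
PRICE_PER_INSTANCE_HOUR = 0.50  # hypothetical hourly price

def naive_plan(num_instances: int) -> tuple[float, float]:
    """Return (wall-clock hours, total cost) for an even split of the work."""
    wall_clock = STARTUP_HOURS + TOTAL_TASK_HOURS / num_instances
    cost = num_instances * wall_clock * PRICE_PER_INSTANCE_HOUR
    return wall_clock, cost

for n in (1, 4, 16, 64):
    hours, cost = naive_plan(n)
    print(f"{n:3d} instances: {hours:7.2f} h wall-clock, ${cost:6.2f} total")
```

More instances finish sooner but pay the startup overhead many times over, which is exactly the kind of trade-off a constraint solver can balance.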

This chapter shows how to use the open source OptaPlanner constraint solver planning engine to create a plan for cloud computing resources. It develops a Java-based solution that finds the optimal number of cloud resources to complete the tasks in the shortest time and at the lowest cost. Benchmarks are detailed to show that the solution is accurate.

Chapter 3, A Blueprint for Making Sense of Feedback, shows how to acquire feedback from customers and the general public about a company's products and services, and how to identify the sentiment, or general mood, of the feedback for particular products, services, or categories. The Twitter and Reddit APIs are demonstrated for acquiring feedback. Two approaches are demonstrated for sentiment analysis: a dictionary-based approach and a method using ML with the CoreNLP library. The sentiment data is then visualized with plotly.js in a dashboard view for real-time updates.

Chapter 4, A Blueprint for Recommending Products and Services, shows how to build and deploy a recommendation engine for products and services. Given a history of all user activity (purchases, clicks, ratings), a system is designed that can produce appropriate recommendations for individual users. An overview of the relevant mathematics is included, and the Python implicit library is used for building the solution. A continuous evaluation methodology is detailed to ensure the recommender continues to provide appropriate recommendations after it is deployed.

Chapter 5, A Blueprint for Detecting Your Logo in Social Media, shows how to build a CNN to detect certain objects, such as products and logos, in other people's photos. Using the Python library TensorFlow, readers are shown how to take an existing pre-trained object recognition model such as Xception and refine it for detecting specific objects using a small training set of images. Then the Twitter and Reddit API code from Chapter 3, A Blueprint for Making Sense of Feedback, is reused to acquire images from social media, and the detector is run on these images to pick out photos of interest. A short introduction to CNNs and deep learning is included.

Chapter 6, A Blueprint for Discovering Trends and Recognizing Anomalies, explains how to discover and track trends on a blog, storefront, or social media platform. Using statistical models and anomaly detection algorithms, the code is developed with the Python library scikit-learn. Different approaches in ML are compared to address different use cases.

Chapter 7, A Blueprint for Understanding Queries and Generating Responses, explains how to use the Rasa Python library and Prolog to build two custom chatbots that examine the user's question and construct an appropriate answer using natural language generation. The Prolog code helps us develop a logical reasoning agent that is able to answer complex questions. Each step of the AI workflow is addressed to help readers prepare to deploy the solution.

The final part of this book, Chapter 8, Preparing for Your Future and Surviving the Hype Cycle, examines the various highs and lows of interest in AI and ML over the last several decades. A case is made that AI has continued to improve and grow over all this time, but often the AI is behind the scenes or accepted as standard practice and thus not perceived as exciting. However, as long as a business case continues to exist for the AI solutions and the AI workflow is followed, the hype cycles should not impact the development of an effective solution. The chapter ends with advice on how to identify new approaches and advances in AI and how to decide whether or not these advances are relevant to real business needs.

Summary


In this chapter, we saw that AI can play a crucial role in a larger system, a role that enables new functionality that can form the basis of a product or service. However, in practice, the AI component is actually quite small when compared to the code and time spent on surrounding issues such as the user interface, handling messy input and correcting bad outputs, and all the issues intrinsic to working in software teams larger than one. In any event, the AI component is also often the most complex part of an intelligent software system, and special care must be taken to get it right. We've introduced an AI workflow that ensures that the benefits of building and deploying AI have a hope of outweighing the costs of initial development and continued monitoring and maintenance. This chapter also introduced the projects that make up the bulk of this book.

In the next chapter, we follow the AI workflow with an AI cloud resource planning project that will prove to be useful in several other projects.
