
Synthetic Data for Machine Learning: Revolutionize your approach to machine learning with this comprehensive conceptual guide


Synthetic Data for Machine Learning

Machine Learning and the Need for Data

Machine learning (ML) is the crown jewel of artificial intelligence (AI) and has changed our lives forever. We can hardly imagine our daily lives without ML-powered tools and services such as Siri and Tesla's Autopilot.

In this chapter, you will be introduced to ML. You will understand the main differences between non-learning and learning-based solutions. Then, you will see why deep learning (DL) models often achieve state-of-the-art results. Following this, you will get a brief introduction to how the training process is done and why large-scale training data is needed in ML.

In this chapter, we’re going to cover the following main topics:

  • AI, ML, and DL
  • Why are ML and DL so powerful?
  • Training ML models

Technical requirements

Any code used in this chapter will be available in the corresponding chapter folder in this book’s GitHub repository: https://github.com/PacktPublishing/Synthetic-Data-for-Machine-Learning.

We will be using PyTorch, which is a powerful ML framework developed by Meta AI.
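If you want to check your setup before moving on, the following minimal sketch (not part of the book's repository, and assuming only that the torch package is installed) imports PyTorch, reports whether a GPU is available, and runs a tiny tensor operation:

import torch

# Print the installed PyTorch version and check for GPU support
print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())

# Multiply two small random tensors to confirm that everything works
a = torch.rand(3, 4)
b = torch.rand(4, 2)
print((a @ b).shape)  # expected: torch.Size([3, 2])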

Artificial intelligence, machine learning, and deep learning

In this section, we will learn what exactly ML is and how to differentiate between learning-based and non-learning AI. Before that, however, we will introduce AI, ML, and DL.

Artificial intelligence (AI)

There are different definitions of AI, but one of the best is John McCarthy's. McCarthy coined the term artificial intelligence in his proposal for the 1956 Dartmouth Conference and shaped the outlines of the field through major contributions such as the Lisp programming language, utility computing, and time-sharing. According to the father of AI in What is Artificial Intelligence? (https://www-formal.stanford.edu/jmc/whatisai.pdf):

It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

AI is about making computers, programs, or machines mimic human intelligence. As humans, we perceive the world (a very complex task in itself), and we reason, generalize, plan, and interact with our surroundings. Although it is fascinating that we master these tasks within just a few years of childhood, the most interesting aspect of our intelligence is the ability to improve the learning process itself and to optimize performance through experience!

Unfortunately, we have barely scratched the surface of understanding our own brains, our intelligence, and associated capabilities such as vision and reasoning. Thus, the quest to create "intelligent" machines began only recently in the span of civilization and written history. One of the most flourishing directions of AI has been learning-based AI.

AI can be seen as an umbrella covering two types of intelligence: learning-based and non-learning AI. It is important to distinguish between AI that improves with experience and AI that does not!

For example, let’s say you want to use AI to improve the accuracy of a physician identifying a certain disease, given a set of symptoms. You can create a simple recommendation system based on some generic cases by asking domain experts (senior physicians). The pseudocode for such a system is shown in the following code block:

# Example of non-learning AI (My AI Doctor!)
def recommend(patient):
    # patient.age          -- the patient's age
    # patient.temperature  -- the patient's temperature
    # patient.night_sweats -- whether the patient has night sweats
    # patient.cough        -- whether the patient has a cough
    # AI program starts
    if patient.age > 70:
        if patient.temperature > 39 and patient.cough:
            print("Recommend Disease A")
            return
    elif patient.age < 10:
        if patient.temperature > 37 and not patient.cough:
            if patient.night_sweats:
                print("Recommend Disease B")
                return
    else:
        print("I cannot resolve this case!")
        return

This program mimics how a physician might reason about such a case. Using simple if-else statements and a few lines of code, we can bring "intelligence" to our program.

Important note

This is an example of non-learning-based AI. As you may expect, the program will not evolve with experience. In other words, the logic will not improve with more patients, though the program still represents a clear form of AI.

In this section, we learned about AI and explored how to distinguish between learning and non-learning-based AI. In the next section, we will look at ML.

Machine learning (ML)

ML is a subset of AI. The key idea of ML is to enable computer programs to learn from experience. The aim is to allow programs to learn the rules without humans having to dictate them explicitly. In the AI doctor example from the previous section, the main issue is creating the rules: the process is extremely difficult, time-consuming, and error-prone. For the program to work properly, you would need experienced/senior physicians to articulate the logic they use to handle such patients. In other scenarios, such as object recognition and object tracking, we do not even know exactly what the rules are or what mechanisms are involved in the process.

ML comes as a solution to learning the rules that control the process by exploring special training data collected for this task (see Figure 1.1):

Figure 1.1 – ML learns implicit rules from data

ML has three major types: supervised, unsupervised, and reinforcement learning. The main difference between them comes from the nature of the training data used and the learning process itself. This is usually related to the problem and the available training data.

Deep learning (DL)

DL is a subset of ML, and it can be seen as the heart of ML (see Figure 1.2). Most of the amazing applications of ML are possible because of DL. DL learns and discovers complex patterns and structures in the training data that are usually hard to capture with other ML approaches, such as decision trees. DL learns by using artificial neural networks (ANNs), which are composed of multiple layers (often on the order of 10 or more) and are inspired by the human brain; hence the neural in the name. An ANN has three types of layers: input, output, and hidden. The input layer receives the input, while the output layer gives the prediction of the ANN. The hidden layers are responsible for discovering the hidden patterns in the training data. Generally, each layer (from the input to the output layer) learns a more abstract representation of the data, given the output of the previous layer. The more hidden layers your ANN has, the more complex and non-linear it becomes; thus, the ANN has more freedom to approximate the relationship between the input and output, that is, to learn your training data. For example, AlexNet is composed of 8 layers, VGGNet of 16 to 19 layers, and ResNet-50 of 50 layers:

Figure 1.2 – How DL, ML, and AI are related
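To make the idea of input, hidden, and output layers concrete, here is a minimal PyTorch sketch of a small fully connected ANN. The layer sizes (784 inputs, two hidden layers, and 10 outputs) are illustrative assumptions rather than anything prescribed by this chapter:

import torch
import torch.nn as nn

# A small fully connected ANN: input layer -> two hidden layers -> output layer.
# The sizes are arbitrary (for example, 784 inputs for 28x28 images, 10 classes).
model = nn.Sequential(
    nn.Linear(784, 128),  # input layer to first hidden layer
    nn.ReLU(),
    nn.Linear(128, 64),   # first hidden layer to second hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),    # second hidden layer to output layer (one score per class)
)

x = torch.rand(32, 784)   # a dummy batch of 32 flattened inputs
print(model(x).shape)     # expected: torch.Size([32, 10])

Adding more (or wider) hidden layers increases the number of parameters and the non-linearity of the network, which is exactly why deeper models also need more training data to converge.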

The main issue with DL is that it requires large-scale training datasets to converge because there is usually a tremendous number of parameters (weights) to tweak to minimize the loss. In ML, the loss is a way of penalizing wrong predictions; at the same time, it indicates how well the model is learning from the training data. Collecting and annotating such large datasets is extremely hard and expensive.
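As a small, self-contained illustration of a loss penalizing wrong predictions, the snippet below computes a cross-entropy loss on made-up predictions; the numbers are dummy values chosen purely for illustration:

import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()

# Dummy raw scores (logits) for a batch of 4 samples and 3 classes,
# together with the true class index of each sample
logits = torch.tensor([[2.0, 0.1, 0.3],
                       [0.2, 1.5, 0.1],
                       [0.1, 0.2, 2.2],
                       [1.9, 0.3, 0.2]])
targets = torch.tensor([0, 1, 2, 2])  # the last sample is predicted incorrectly

print(loss_fn(logits, targets))  # a single number; the lower it is, the better the predictions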

Nowadays, using synthetic data as an alternative or complement to real data is a hot topic in both research and industry. Many companies, such as Google (whose Waymo utilizes synthetic data to train autonomous cars) and Microsoft (which uses synthetic data to handle privacy issues with sensitive data), have recently started investing in synthetic data to train next-generation ML models.

Why are ML and DL so powerful?

Although most AI fields have been flourishing and gaining more attention recently, ML and DL have been the most influential. This is because of several factors that make them a distinctly better solution in terms of accuracy, performance, and applicability. In this section, we are going to look at some of these essential factors.

Feature engineering

In traditional AI, it is compulsory to design the features for the task manually. This process is extremely difficult, time-consuming, and task/problem-dependent. If you want to write a program, say, to recognize car wheels, you probably need to use some filters to extract edges and corners and then utilize these extracted features to identify the target object. As you may anticipate, it is not always easy to know which features to select or ignore. Imagine developing an AI-based solution to predict whether a patient has COVID-19 from a set of symptoms at the very beginning of the pandemic; at that time, human experts did not know how to answer such questions. ML and DL can solve such problems.
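To make the contrast concrete, hand-crafting a feature might look like applying a fixed, human-designed edge-detection filter to an image. The 3x3 Sobel kernel below is a classic example of such a filter, and the random "image" is just a stand-in:

import torch
import torch.nn.functional as F

# A hand-designed 3x3 Sobel kernel that responds to vertical edges
sobel_x = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).reshape(1, 1, 3, 3)

image = torch.rand(1, 1, 28, 28)              # a dummy grayscale image
edges = F.conv2d(image, sobel_x, padding=1)   # the hand-crafted "feature map"
print(edges.shape)                            # torch.Size([1, 1, 28, 28])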

DL models learn to automatically extract useful features by learning hidden patterns, structures, and associations in the training data. A loss is used to guide the learning process and help the model achieve the objectives of the training process. However, for the model to converge, it needs to be exposed to sufficiently diverse training data.

Transfer across tasks

One strong advantage of DL is that it is more task-independent than traditional ML approaches. Transfer learning is an amazing and powerful feature of DL: instead of training a model from scratch, you can start the training process from a model trained on a similar task. This is very common in fields such as computer vision and natural language processing. Usually, you have only a small dataset for your target task, and your model would not converge using this small dataset alone. Thus, pre-training the model on a larger, sufficiently diverse dataset that is close to your domain (or task) and then fine-tuning it on your task-specific dataset usually gives better results. This idea allows your model to transfer what it has learned between tasks and domains:

Figure 1.3 – Advantages of ML and DL
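As a minimal sketch of transfer learning in PyTorch (assuming torchvision 0.13 or later is installed, and using an ImageNet-pre-trained ResNet-18 purely as an example), you can load a pre-trained model and replace only its final layer before fine-tuning it on your own, smaller dataset:

import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pre-trained on ImageNet (the source task)
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Optionally freeze the pre-trained layers so that only the new head is trained at first
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer to match the target task
# (here we assume the new, smaller dataset has 5 classes)
model.fc = nn.Linear(model.fc.in_features, 5)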

Important note

If the problem is simple or a mathematical solution is available, then you probably do not need to use ML! Unfortunately, it is common to see some ML-based solutions proposed for problems where a clear explicit mathematical solution is already available! At the same time, it is not recommended to use ML if a simple rule-based solution works fine for your problem.

Training ML models

Developing an ML model usually requires performing the following essential steps:

  1. Collecting data.
  2. Annotating data.
  3. Designing an ML model.
  4. Training the model.
  5. Testing the model.

These steps are depicted in the following diagram:

Figure 1.4 – Developing an ML model process

Now, let’s look at each of the steps in more detail to better understand how we can develop an ML model.

Collecting and annotating data

The first step in the process of developing an ML model is collecting the needed training data. You need to decide what training data is needed:

  • Train using an existing dataset: In this case, there is no need to collect training data, so you can skip the collection and annotation steps. However, you should make sure that your target task or domain is sufficiently similar to the available dataset(s) you are planning to use. Otherwise, your model may train well on this dataset but perform poorly when tested on the new task or domain.
  • Train on an existing dataset and fine-tune on a new dataset: This is the most popular approach in today's ML. You can pre-train your model on a large existing dataset and then fine-tune it on the new dataset. The new dataset does not need to be very large, as you are already leveraging the existing dataset(s). For the dataset to be collected, you need to identify what the model needs to learn and how you plan to implement this. After collecting the training data, you can begin the annotation process.
  • Train from scratch on new data: In some contexts, your task or domain may be far from any available dataset, so you will need to collect large-scale data, which is not simple. To do this, you need to identify what the model will learn and how you want it to do that; making any modifications to the plan later may require you to collect more data or even restart the data collection process from scratch. Following this, you need to decide what ground truth to extract, the budget, and the quality you want.

Next, we’ll explore the most essential element of an ML model development process. So, let’s learn how to design and train a typical ML model.

Designing and training an ML model

Selecting a suitable ML model for the problem at hand depends on the problem itself, any constraints, and the ML engineer. Sometimes, the same problem can be solved by different ML algorithms, but in other scenarios, it is compulsory to use a specific ML model. Based on the problem and the ML model, data should be collected and annotated.

Each ML algorithm will have a different set of hyperparameters, various designs, and a set of decisions to be made throughout the process. It is recommended that you perform pilot or preliminary experiments to identify the best approach for your problem.

When the design process is finalized, the training process can start. For some ML models, the training process could take minutes, while for others, it could take weeks, months, or more! You may need to perform different training experiments to decide which training hyperparameters you are going to continue with – for example, the number of epochs or the optimization technique. Usually, the loss is a helpful indication of how well the training process is going. In DL, two losses are used: the training loss and the validation loss. The former tells us how well the model is learning the training data, while the latter describes the model's ability to generalize to new data.
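To make this concrete, here is a minimal, self-contained PyTorch training loop that tracks both losses. The dummy data, the tiny model, and all the hyperparameters (learning rate, batch size, number of epochs) are illustrative assumptions, not values recommended by this chapter:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy data and a tiny model, just to make the loop runnable end to end
X, y = torch.rand(200, 20), torch.randint(0, 3, (200,))
train_loader = DataLoader(TensorDataset(X[:160], y[:160]), batch_size=16)
val_loader = DataLoader(TensorDataset(X[160:], y[160:]), batch_size=16)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(5):  # the number of epochs is a training hyperparameter
    # Training pass: the training loss tells us how well the model fits the training data
    model.train()
    train_loss = 0.0
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()

    # Validation pass: the validation loss indicates how well the model generalizes
    model.eval()
    val_loss = 0.0
    with torch.no_grad():
        for inputs, labels in val_loader:
            val_loss += loss_fn(model(inputs), labels).item()

    print(f"epoch {epoch}: train loss {train_loss / len(train_loader):.3f}, "
          f"validation loss {val_loss / len(val_loader):.3f}")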

Validating and testing an ML model

In ML, we should differentiate between three datasets/partitions: training, validation, and test. The training set is used to teach the model the task and to assess how well the model is fitting during training. The validation set is a proxy for the test set and tells us the expected performance of our model on new data. The test set, however, is a proxy for the real world – that is, where our model will actually be used. This dataset should only be used at the very end, so that we know how the model will perform in practice. Using the test set to tune a hyperparameter or a design choice is considered cheating because it gives a deceptive picture of how your model will perform or generalize in the real world. In practice, once your model has been deployed, say in industry, you will not be able to tune its parameters based on its performance!
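A common way to create these three partitions in PyTorch is torch.utils.data.random_split; the dummy dataset and the 70/15/15 proportions below are purely illustrative assumptions:

import torch
from torch.utils.data import TensorDataset, random_split

# A dummy dataset of 1,000 samples, split 70/15/15 into training, validation, and test sets
dataset = TensorDataset(torch.rand(1000, 20), torch.randint(0, 3, (1000,)))
train_set, val_set, test_set = random_split(dataset, [700, 150, 150])

print(len(train_set), len(val_set), len(test_set))  # 700 150 150

The test partition should then be held out until the very end; as explained previously, tuning hyperparameters or design choices against it would give a deceptive picture of real-world performance.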

Iterations in the ML development process

In practice, developing an ML model will require many iterations between validation and testing and the other stages of the process. It could be that validation or testing results are unsatisfactory and you decide to change some aspects of the data collection, annotation, designing, or training.

Summary

In this chapter, we discussed the terms AI, ML, and DL. We uncovered some advantages of ML and DL. At the same time, we learned the basic steps for developing and training ML models. Finally, we learned why we need large-scale training data.

In the next chapter, we will discover the main issues with annotating large-scale datasets. This will give us a good understanding of why synthetic data is the future of ML!


Key benefits

  • Avoid common data issues by identifying and solving them using synthetic data-based solutions
  • Master synthetic data generation approaches to prepare for the future of machine learning
  • Enhance performance, reduce budget, and stand out from competitors using synthetic data
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

The machine learning (ML) revolution has reached a point where we can hardly imagine our world without its products and services. However, training ML models requires vast datasets, and collecting and annotating real data is a process plagued by high costs, errors, and privacy concerns. Synthetic data emerges as a promising solution to all these challenges. This book is designed to bridge the theory and practice of using synthetic data, offering invaluable support for your ML journey. Synthetic Data for Machine Learning empowers you to tackle real data issues, enhance your ML models' performance, and gain a deep understanding of synthetic data generation. You'll explore the strengths and weaknesses of various approaches, gaining practical knowledge with hands-on examples of modern methods, including Generative Adversarial Networks (GANs) and diffusion models. Additionally, you'll uncover the secrets and best practices to harness the full potential of synthetic data. By the end of this book, you'll have mastered synthetic data and positioned yourself as a market leader, ready for more advanced, cost-effective, and higher-quality data sources, setting you ahead of your peers in the next generation of ML.

Who is this book for?

If you are a machine learning (ML) practitioner or researcher who wants to overcome data problems, this book is for you. Basic knowledge of ML and Python programming is required. The book is one of the pioneering works on the subject, providing leading-edge support for ML engineers, researchers, companies, and decision makers.

What you will learn

  • Understand real data problems, limitations, drawbacks, and pitfalls
  • Harness the potential of synthetic data for data-hungry ML models
  • Discover state-of-the-art synthetic data generation approaches and solutions
  • Uncover synthetic data potential by working on diverse case studies
  • Understand synthetic data challenges and emerging research topics
  • Apply synthetic data to your ML projects successfully

Product Details

Publication date : Oct 27, 2023
Length: 208 pages
Edition : 1st
Language : English
ISBN-13 : 9781803232607

What do you get with eBook?

  • Instant access to your Digital eBook purchase
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - Read whenever, wherever and however you want
  • AI Assistant (beta) to help accelerate your learning




Table of Contents

24 Chapters
Part 1: Real Data Issues, Limitations, and Challenges
Chapter 1: Machine Learning and the Need for Data
Chapter 2: Annotating Real Data
Chapter 3: Privacy Issues in Real Data
Part 2: An Overview of Synthetic Data for Machine Learning
Chapter 4: An Introduction to Synthetic Data
Chapter 5: Synthetic Data as a Solution
Part 3: Synthetic Data Generation Approaches
Chapter 6: Leveraging Simulators and Rendering Engines to Generate Synthetic Data
Chapter 7: Exploring Generative Adversarial Networks
Chapter 8: Video Games as a Source of Synthetic Data
Chapter 9: Exploring Diffusion Models for Synthetic Data
Part 4: Case Studies and Best Practices
Chapter 10: Case Study 1 – Computer Vision
Chapter 11: Case Study 2 – Natural Language Processing
Chapter 12: Case Study 3 – Predictive Analytics
Chapter 13: Best Practices for Applying Synthetic Data
Part 5: Current Challenges and Future Perspectives
Chapter 14: Synthetic-to-Real Domain Adaptation
Chapter 15: Diversity Issues in Synthetic Data
Chapter 16: Photorealism in Computer Vision
Chapter 17: Conclusion
Index
Other Books You May Enjoy

Customer reviews

Rating distribution
4.5 out of 5
(11 Ratings)
5 star 63.6%
4 star 27.3%
3 star 0%
2 star 9.1%
1 star 0%

tt0507 Nov 30, 2023
Rated 5 out of 5
The book on synthetic data is praised for its extensive coverage of the topic, emphasizing its relevance in the AI and machine learning landscape. It starts with a solid introduction to the challenges of using real data, making it accessible for both beginners and experienced practitioners. The main sections explore various synthetic data generation methods, including GANs and simulation-based approaches like video games. The book provides theoretical explanations, practical examples, and case studies, offering a comprehensive guide for readers seeking to leverage synthetic data effectively. It is hailed as a game-changer for machine learning professionals, addressing common challenges and providing innovative solutions, making it a must-read for those wanting to stay at the forefront of the ML revolution.
Amazon Verified review
H2N Nov 29, 2023
Rated 5 out of 5
This book is a nice guide, using Python, to the intricacies of synthetic data in ML. Across 17 chapters, it covers everything from ML fundamentals to the nuances of data privacy. The book also discusses ethical considerations in synthetic data and showcases its practical application in fields like computer vision and predictive analytics.
Amazon Verified review
Steven Fernandes Dec 28, 2023
Rated 5 out of 5
A pivotal guide that delves into the intricate world of synthetic data and its application in machine learning (ML). The book provides a thorough understanding of real data issues, including limitations and pitfalls, and highlights the immense potential of synthetic data for data-hungry ML models. It introduces readers to cutting-edge methods for generating synthetic data and offers insights through diverse case studies. The book also explores the challenges and emerging research topics in synthetic data, equipping readers with the knowledge to successfully apply synthetic data in their ML projects, making it an invaluable resource for ML practitioners and enthusiasts.
Amazon Verified review
Pratik Bhavsar Dec 22, 2023
Rated 5 out of 5
There is a hot debate in the market! If humans are trained with human-generated data, there is nothing wrong with training AI with AI-generated data. Discover the future of machine learning with the latest book, Synthetic Data for Machine Learning, by synthetic data researcher Abdulrahman Kerim.

About the book: in a world where machine learning has become a game-changer, the struggle to acquire large datasets is real – costly, error-prone, and laden with privacy concerns. The book addresses these challenges head-on, offering practical insights into the world of synthetic data generation.

What sets it apart:
  • All types of data: explains approaches for text, image, numerical, and RLHF data.
  • Cutting-edge techniques: master the art of synthetic data generation to future-proof your machine learning endeavors.
  • Real-world solutions: learn to tackle genuine data issues using synthetic data-based solutions.
  • Practical benefits: enhance model performance, cut costs, and gain a competitive edge in your field.

What you'll explore:
  • Data realities: understand the nuances, limitations, and pitfalls associated with real data.
  • In-depth solutions: discover state-of-the-art synthetic data generation approaches through hands-on case studies.
Amazon Verified review
Dror Nov 29, 2023
Rated 5 out of 5
Deep learning (DL) has taken the world of AI and machine learning (ML) by storm. While research on more efficient approaches for AI does exist in academia and large research labs, the leading paradigm for successful real-world applications of DL remains supervised learning, or learning from labeled data. Deep neural networks trained using supervised learning are extremely powerful, but typically require very large amounts of labeled data, which is costly and time-consuming to acquire.

This unique book provides a broad and detailed coverage of using synthetic data for training ML and DL systems. Generation and usage of synthetic data are believed to play an increasingly important role in training and building large neural networks in the future, and this book is a unique and practical guide to understanding the usage and generation of synthetic data for building AI systems effectively and efficiently.

The book begins with a concise introduction to the challenges of acquiring and using real data for building ML systems, followed by a clear introduction to using synthetic data for ML. The main section of the book deals with synthetic data generation methods, such as using rendering engines and simulators, generative adversarial networks (GANs), video games, and diffusion models (DDPMs) for synthetic data generation. The last part of the book provides helpful examples of related case studies, as well as best practices and potential challenges of using synthetic data in ML. The accompanying GitHub repo is also helpful in reinforcing the materials and concepts presented in the book.

This book will benefit any data scientist, researcher, or machine learning practitioner who develops ML/DL models and wants to learn how to leverage synthetic data generation. Some understanding of machine learning and deep learning concepts, as well as basic familiarity with the Python programming language, are all you need to use and benefit from this practical guide.

Highly recommended!
Amazon Verified review

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe Reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing

When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the ebook to be usable for you the reader with our needs to protect the rights of us as Publishers and of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook or Bundle (Print+eBook) please follow below steps:

  1. Register on our website using your email address and the password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title. 
  5. Proceed with the checkout process (payment to be made using Credit Card, Debit Card, or PayPal)
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, please contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats do Packt support?

Our eBooks are currently available in a variety of formats such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not in the Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower priced than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply login to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.