Solr Cookbook - Third Edition

Solve real-time problems related to Apache Solr 4.x and 5.0 effectively with the help of over 100 easy-to-follow recipes

Chapter 2. Indexing Your Data

In this chapter, we will cover the following topics:

  • Indexing PDF files
  • Counting the number of fields
  • Using parsing update processors to parse data
  • Using scripting update processors to modify documents
  • Indexing data from a database using Data Import Handler
  • Incremental imports with DIH
  • Transforming data when using DIH
  • Indexing multiple geographical points
  • Updating document fields
  • Detecting the document language during indexation
  • Optimizing the primary key indexation
  • Handling multiple currencies


Introduction


Indexing data is one of the most crucial tasks in a Lucene and Solr deployment. When your data is not indexed properly, your search results will be poor, and when the search results are poor, it's almost certain that users will not be satisfied with the application that uses Solr. This is why we need our data to be prepared and indexed as promptly and correctly as possible.

On the other hand, preparing data is not an easy task. Nowadays, we have more and more data floating around, and we need to index multiple formats of data from multiple sources. Do we need to parse the data manually and prepare it in XML format? The answer is no; we can let Solr do this for us. This chapter concentrates on the indexing process and data preparation: from indexing binary PDF files, through using the Data Import Handler to fetch data from a database and index it with Apache Solr, to detecting the document language during indexation. We will also learn how...

Indexing PDF files


The library on the corner that we used to go to wants to expand its collection and make it available to the wider public through the World Wide Web. It has asked its book suppliers to provide sample chapters of all the books in PDF format so that they can be shared with online users. With all the samples provided by the suppliers comes a problem: how do we extract data for the search box from more than 900,000 PDF files? Solr can do it with the use of Apache Tika (http://tika.apache.org/). This recipe will show you how to handle such a task.

How to do it...

To index PDF files, we will need to set up Solr to use extracting request handlers. To do this, we will take the following steps:

  1. First, let's edit our Solr instance, solrconfig.xml, and add the following configuration:

    <requestHandler name="/update/extract" class="solr.extraction.ExtractingRequestHandler">
     <lst name="defaults">
      <str name="fmap.content">text</str>
      <str name="lowernames">true</str...
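The preview cuts the configuration off here. For orientation, a complete handler definition might look like the following sketch; the uprefix mapping is a standard ExtractingRequestHandler option, shown here as an assumption rather than the recipe's exact setting:

    <requestHandler name="/update/extract" class="solr.extraction.ExtractingRequestHandler">
     <lst name="defaults">
      <!-- Put the main text body extracted by Tika into the text field -->
      <str name="fmap.content">text</str>
      <!-- Lowercase all metadata field names generated by Tika -->
      <str name="lowernames">true</str>
      <!-- Prefix metadata fields that don't exist in the schema (assumed setting) -->
      <str name="uprefix">attr_</str>
     </lst>
    </requestHandler>

With such a handler in place, a PDF can be sent for extraction and indexing with an HTTP POST to /update/extract (for example, with curl, using the literal.id parameter to set the document identifier). Note that the Solr Cell contrib libraries must be on the classpath for the handler class to load.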

Counting the number of fields


Imagine a situation where we have simple documents, each with a title and tags, to be indexed in Solr. What we want to do is separate the premium documents, those that have more tag values, because they are better in terms of our business. Of course, we can count the number of tags ourselves, but why not let Solr do this? This recipe will show you how to do this with Solr.

How to do it...

Let's look at the steps we need to take to count the number of field values.

  1. We start with the index structure. What we need to do is put the following section in the schema.xml file:

    <field name="id" type="string" indexed="true" stored="true" required="true" />
    <field name="title" type="text_general" indexed="true" stored="true"/>
    <field name="tags" type="string" indexed="true" stored="true" multiValued="true"/>
    <field name="tags_count" type="int" indexed="true" stored="true"/>
  2. The next thing is our test data, which looks as follows:

    <add>
     <doc>
      &lt...
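The part of the recipe the preview omits is the update request processor chain that does the counting. A minimal sketch of such a chain in solrconfig.xml, built from Solr's stock processors (the chain name count is illustrative; the field names match the schema above):

    <updateRequestProcessorChain name="count">
     <!-- Copy the tags values into tags_count... -->
     <processor class="solr.CloneFieldUpdateProcessorFactory">
      <str name="source">tags</str>
      <str name="dest">tags_count</str>
     </processor>
     <!-- ...and replace the copied values with their count -->
     <processor class="solr.CountFieldValuesUpdateProcessorFactory">
      <str name="fieldName">tags_count</str>
     </processor>
     <!-- Documents without tags get an explicit count of 0 -->
     <processor class="solr.DefaultValueUpdateProcessorFactory">
      <str name="fieldName">tags_count</str>
      <int name="value">0</int>
     </processor>
     <processor class="solr.RunUpdateProcessorFactory"/>
    </updateRequestProcessorChain>

The chain can then be activated per request with the update.chain=count parameter, or attached as a default to the /update handler, after which tags_count is available for sorting or filtering out premium documents.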

Using parsing update processors to parse data


Let's assume that we are running a bookstore and we want to sort our books by publication date and run faceting on the number of likes each book gets. However, we get all our data in XML, the values are not in the proper format, and so on. The good thing is that we can tell Solr to parse our data properly so that we don't have to change what we already have. This recipe will show you how to do this.

Getting ready

Before continuing with this recipe, I suggest reading the Counting the number of fields recipe of this chapter to get used to updating the request processor configuration.

How to do it...

Let's look at the steps we need to take to make data parsing work.

  1. First, we need to prepare our index structure, so we add the following section to the schema.xml file:

    <field name="id" type="string" indexed="true" stored="true" required="true" />
    <field name="title" type="text_general" indexed="true" stored="true" />
    <field name="published...
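The preview stops mid-schema, but the mechanism this recipe is built around is Solr's family of parsing update processors. Here is a sketch of a chain that would parse a date string into a date field and a numeric string into an integer field; the date format is illustrative, and the assumption is that the schema defines fields such as published and likes for the bookstore scenario:

    <updateRequestProcessorChain name="parse">
     <!-- Parse date strings such as 2015-01-23 into proper date values -->
     <processor class="solr.ParseDateFieldUpdateProcessorFactory">
      <arr name="format">
       <str>yyyy-MM-dd</str>
      </arr>
     </processor>
     <!-- Parse numeric strings (for example, the likes count) into integers -->
     <processor class="solr.ParseIntFieldUpdateProcessorFactory"/>
     <processor class="solr.RunUpdateProcessorFactory"/>
    </updateRequestProcessorChain>

Each parsing processor only converts string values it can parse and leaves everything else untouched, which is why the incoming XML does not have to change.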

Using scripting update processors to modify documents


Sometimes, we need to modify documents during indexing, and we don't want to do this on the indexing application side. For example, say we have documents describing Internet sites, and we want to be able to filter the sites on the basis of the protocol used, for example, http or https. We don't have this information explicitly; we only have the whole URL address. Let's see how we can achieve this with Solr.

Getting ready

Before continuing with the following recipe, I suggest reading the Counting the number of fields recipe of this chapter to get used to updating the request processor configuration.

How to do it...

The following steps will take you through the process of achieving our goal:

  1. First, we start with the index structure, putting the following section in the schema.xml file:

    <field name="id" type="string" indexed="true" stored="true" required="true" />
    <field name="url" type="text_general" indexed="true" stored="true"/>
    <field...
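The rest of this recipe relies on Solr's StatelessScriptUpdateProcessorFactory. As a sketch, the chain in solrconfig.xml could reference a JavaScript file; the file name extract-protocol.js and the target field name protocol are illustrative assumptions:

    <updateRequestProcessorChain name="script">
     <processor class="solr.StatelessScriptUpdateProcessorFactory">
      <str name="script">extract-protocol.js</str>
     </processor>
     <processor class="solr.RunUpdateProcessorFactory"/>
    </updateRequestProcessorChain>

The script itself, placed in the collection's conf directory, might then cut the protocol off the URL along these lines:

    function processAdd(cmd) {
      var doc = cmd.solrDoc;                 // the document being indexed
      var url = doc.getFieldValue("url");
      if (url != null) {
        // everything before :// is the protocol, for example http or https
        doc.addField("protocol", url.split("://")[0]);
      }
    }
    // harmless no-op stubs for the other update events
    function processDelete(cmd) { }
    function processCommit(cmd) { }
    function finish() { }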

Indexing data from a database using Data Import Handler


One of our clients has a problem. His database of users has grown to such a size that even a simple SQL SELECT takes too much time, and he is looking for a way to improve search times. Of course, he has heard about Solr, but he doesn't want to generate XML or any other data format and push it to Solr; he would like the data to be fetched. What can we do about it? Well, there is one thing: we can use one of Solr's contrib modules, the Data Import Handler. This task will show you how to configure the basic setup of the Data Import Handler and how to use it.

How to do it...

Let's assume that we have a database table. To select users from our table, we use the following SQL query:

SELECT user_id, user_name FROM users

The response might look like this:

| user_id | user_name     |
|---------|---------------|
| 1       | John Kowalski |
| 2       | Amanda Looks  |

We also have a second table called users_description, where we store the descriptions of users. The SQL query...
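The remainder of the recipe wires these queries into a DIH configuration file. A sketch of what db-data-config.xml could look like for the two tables; the JDBC driver, connection URL, credentials, and the users_description column names are assumptions:

    <dataConfig>
     <dataSource driver="org.postgresql.Driver"
      url="jdbc:postgresql://localhost:5432/users" user="solr" password="secret"/>
     <document>
      <entity name="user" query="SELECT user_id, user_name FROM users">
       <field column="user_id" name="id"/>
       <field column="user_name" name="name"/>
       <!-- nested entity: runs once per user row to fetch that user's description -->
       <entity name="user_desc"
        query="SELECT description FROM users_description WHERE user_id = '${user.user_id}'">
        <field column="description" name="description"/>
       </entity>
      </entity>
     </document>
    </dataConfig>

The handler itself is registered in solrconfig.xml and pointed at this file:

    <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
     <lst name="defaults">
      <str name="config">db-data-config.xml</str>
     </lst>
    </requestHandler>

A full import is then triggered by calling /dataimport?command=full-import.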

Incremental imports with DIH


In most use cases, indexing the data from scratch during every indexation doesn't make sense. Why index all 100,000 of your documents when only 1,000 were modified or added? This is where the Solr Data Import Handler delta queries come in handy. Using them, we can index our data incrementally. This recipe will show you how to set up the Data Import Handler to use delta queries and index data in an incremental way.

Getting ready

Refer to the Indexing data from a database using Data Import Handler recipe in this chapter to get to know the basics of the Data Import Handler configuration. I assume that Solr is set up according to the description given in the mentioned recipe.

How to do it...

We will reuse parts of the configuration shown in the Indexing data from a database using Data Import Handler recipe in this chapter, and we will modify it. Execute the following steps:

  1. The first thing you should do is add an additional column to the tables you use, a column that will specify...
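To make that concrete, here is a sketch of what the entity definition could look like once the extra column exists. The column name last_modified is an assumption; deltaQuery, deltaImportQuery, ${dataimporter.last_index_time}, and ${dih.delta.user_id} are the standard DIH delta-import constructs:

    <!-- delta import: fetch only rows modified since the last run -->
    <entity name="user"
     query="SELECT user_id, user_name FROM users"
     deltaQuery="SELECT user_id FROM users
                 WHERE last_modified > '${dataimporter.last_index_time}'"
     deltaImportQuery="SELECT user_id, user_name FROM users
                       WHERE user_id = '${dih.delta.user_id}'">
     <field column="user_id" name="id"/>
     <field column="user_name" name="name"/>
    </entity>

deltaQuery selects only the primary keys of the rows changed since the last import, and deltaImportQuery fetches each of those rows; the incremental run is triggered with /dataimport?command=delta-import.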


Description

This book is for intermediate Solr developers who are willing to learn and implement pro-level practices, techniques, and solutions. This edition will specifically appeal to developers who wish to quickly get to grips with the changes and new features of Apache Solr 5.

What you will learn

  • Acquire the skills needed to index your data in different formats, forms, and sources
  • Overcome common problems while analyzing your data
  • Use the faceting mechanism to get aggregated information about your data
  • Improve your Solr instance and Solr cluster performance
  • Get to know how to configure and use SolrCloud
  • Make use of the highlighting and document grouping functionalities
  • Diagnose and resolve problems with Solr instances and clusters
  • Implement different autocomplete functionalities

Product Details

Publication date : Jan 23, 2015
Length : 356 pages
Edition : 3rd
Language : English
ISBN-13 : 9781783553150
Vendor : Apache



Table of Contents

11 Chapters
1. Apache Solr Configuration
2. Indexing Your Data
3. Analyzing Your Text Data
4. Querying Solr
5. Faceting
6. Improving Solr Performance
7. In the Cloud
8. Using Additional Functionalities
9. Dealing with Problems
10. Real-life Situations
Index

Customer reviews

Rating distribution: 3.8 out of 5 (6 ratings)

5 star: 16.7%
4 star: 50%
3 star: 33.3%
2 star: 0%
1 star: 0%

Markus Klose Mar 11, 2015
Rating: 5/5
This question is answered by Rafal Kuc in the current version of his Apache Solr cookbook. In "Solr Cookbook Third Edition" he describes typical problems, use cases, and their solutions. The book is written for developers who already have background knowledge of Apache Solr. The structure of the book and its chapters provides a fast and efficient way of reading: you can either read the book from beginning to end or select a specific chapter without encountering any trouble. There are only a few dependencies between chapters, and where they exist, the author explicitly points them out. The book is divided into ten chapters and covers important topics such as Solr configuration, performance optimization, and SolrCloud. Each chapter describes several issues and how to deal with them. The structure of each issue is uniform throughout the book and makes it easy to follow: the initial description of the problem or scenario is followed by the step-by-step solution with Apache Solr. The author does not stop here, but continues with a detailed and sophisticated description of the background. In the description of each problem, the author uses simple sample data and bases the solution on it. This allows a quite simple recreation of the problem and an understanding of the solution. The problems and solutions collected in this book range from simple configurations to more complex scenarios that are encountered again and again when building web applications with Apache Solr. The recently released version of Solr 5.0 is taken into account in this third edition. Many of the described use cases can already be found answered in one form or another in forums or mailing lists. But for me, and I consider myself an experienced Apache Solr user, there was a lot to discover. I saw some new and interesting approaches in this book, which I will try in my next projects. The book is a fine collection of everyday problems and saves you the hassle of searching for a solution on the World Wide Web. Conclusion: this book is not an introduction to Apache Solr and therefore is not suitable for beginners. However, it is a great reference book that offers practical solutions to everyday problems with Apache Solr. I recommend it to everyone who deals with Apache Solr as a supplement to the relevant Apache Solr documentation.
Amazon Verified review
Recendo Jun 18, 2015
Rating: 4/5
A strongly solution-oriented and pleasant structure runs through the whole book: (1) the task, (2) the implemented solution as a code fragment, and (3) an explanation of the solution. Anyone who finds what they are looking for in the table of contents will be well served. The solutions refer to the Solr 4.x versions. According to the author, the currently latest version, Solr 5, was at least considered as a beta version for compatibility tests, but he does not cover additional version 5 features. The content of the book is aimed mainly at admins or DevOps, less at data scientists. For example, the topic of SolrCloud is covered well, while the topic of data clustering is not touched at all. Judging purely by content, I would give the book four to five stars. However! The book's complete copy protection hinders easy and error-free work with the electronic version. I was constantly tempted to copy the code fragments (some spanning two pages) and transfer them directly into my own Solr configuration. Not possible. Period. And since the separate download of the code (from the publisher's site) does not match the book one-to-one, fast and error-free work is successfully obstructed. I would have understood if only the accompanying text were protected, but that can hardly apply to code fragments in a hands-on IT book... that a paying reader has such obstacles put in their way is simply grotesque. Unless they want to be tempted into illegal methods of DRM removal or other workarounds, readers are effectively forced to use a 19th-century technique (manual copying) in a 21st-century working environment (copy & paste). IMHO, that does not make one happy, not at all! One really ought to award only a single star to send a clearer signal against this (in my opinion) buyer-disrespecting form of copy-protection enforcement. However, it was my decision to buy the Kindle version, and with it to apply more modern expectations and standards than one would to a 16th-century product. Therefore, out of fairness and respect for the work and its author, and in the hope that other reviewers will also speak out clearly against such problematic forms of DRM enforcement, I deduct only 0.5 stars from the purchased Kindle version.
Amazon Verified review
NOTiFY Apr 06, 2016
Rating: 4/5
Bought it prior to going on the Sematext Core Solr two-day workshop in London (April 2016), at which the author, Rafal Kuc, is the trainer. I like the "Cookbook" format, as it allows you to go directly to the problem or issue you're attempting to solve or implement. The book got me started with Solr; I had my database imported and indexed and was searching it within a few hours. I found it very useful to have (skim-)read it prior to the course. I recommend the book and attending the Sematext Core Solr workshop.
Amazon Verified review
Dale Brooks Aug 02, 2015
Rating: 4/5
Very pleased with the additional details and practical experience points that I found in this book, above and beyond the standard Apache documentation.
Amazon Verified review
DJ Apr 26, 2015
Rating: 3/5
I am a big fan of the "cookbook" format and have several cookbooks on other technologies that I refer to often. The format here is a little different than the other cookbooks I am used to: instead of the "problem", "solution", "discussion", and "see also" format, this has a longer "scenario", which isn't as clear and concise as an ORA cookbook, a "getting ready" section, which often sends you out to read other material, and then a "how to do it", a "how it works", and a "see also". The problems/scenarios are not as clear as in other cookbooks. As with most cookbooks, this is not really a book you would read cover to cover, but one you flip through as you encounter issues. But since I planned to review the book, I started from the beginning and was turned off initially. The problems seemed to be more one-off, niche types of problems, and I almost stopped reading. However, as I flipped deeper into the book, I found better content that was more relevant to what I need when using Solr. In some places the content seemed to be forced into a cookbook format, such as a "recipe" called "Understanding and using the Lucene query language". So while there is some good content in the book, don't let the beginning deter you; it simply did not match the expectations I would have for a "cookbook". Make sure you read the table of contents before purchasing, and, if you can, look at some of the recipes in the later portion of the book. There are definitely some good examples that could save you some time, and some examples to get you familiar with patterns in using Solr. Disclosure: I was provided a free version of the book for review.
Amazon Verified review