Scaling Big Data with Hadoop and Solr, Second Edition: Understand, design, build, and optimize your big data search engine with Hadoop and Apache Solr

Vijay Karambelkar
3 out of 5 stars (4 Ratings)
Paperback | Apr 2015 | 166 pages | 1st Edition
eBook: AU$14.99 AU$53.99
Paperback: AU$67.99
Subscription: Free Trial, renews at AU$24.99 p/m

What do you get with a Packt Subscription?

Free for first 7 days. $24.99 p/m after that. Cancel any time!
  • Unlimited ad-free access to the largest independent learning library in tech. Access this title and thousands more!
  • 50+ new titles added per month, including many first-to-market concepts and exclusive early access to books as they are being written.
  • Innovative learning tools, including AI book assistants, code context explainers, and text-to-speech.
  • Thousands of reference materials covering every tech concept you need to stay up to date.
Subscribe now
View plans & pricing

Scaling Big Data with Hadoop and Solr, Second Edition

Chapter 2. Understanding Apache Solr

In the previous chapter, we discussed how big data has evolved to meet the needs of organizations dealing with enormous volumes of data. Working with data of different shapes brings further challenges. For example, application server log files contain semi-structured data, and Microsoft Word documents are unstructured, which makes such data difficult to store in traditional relational storage. The challenge of handling such data is not just about storage: there is also the big question of how to access the required information. Enterprise search engines are designed to address this problem.

Today, finding the required information within a specified timeframe has become more crucial than ever. Enterprises without information retrieval capabilities suffer from problems such as lost employee productivity, poor decisions based on faulty or incomplete information, duplicated effort, and so on. Given these scenarios, it is...

Setting up Apache Solr

We will go through the Apache Solr architecture in the next section; for now, let's install Apache Solr on our machines. Apache Solr is a Java servlet web application built on Apache Lucene, Tika, and other open source libraries. Apache Solr ships with a demo server on Jetty, so you can simply run it from the command line, which helps you get a Solr instance running quickly. However, you can choose to customize it and deploy it in your own environment. Apache Solr does not ship with an installer; it has to be run as part of a J2EE application.
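
For example, with a Solr 4.x binary distribution extracted, the bundled Jetty demo can typically be started as follows; the folder name and port are illustrative and depend on the version you download:

cd solr-4.x.x/example            # the folder of the extracted distribution that contains start.jar
java -jar start.jar              # starts the demo Jetty server (port 8983 by default)
# The admin console should then be reachable at http://localhost:8983/solr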

Prerequisites for setting up Apache Solr

Apache Solr requires Java 1.6 or later to run, so it is important to make sure you have the correct version of Java by calling java -version, as shown in the following screenshot:

[Screenshot: output of the java -version command]
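
As a quick sketch of the check, run the following; the exact output will differ depending on your JDK vendor and installed version:

java -version
# Illustrative output only:
# java version "1.7.0_79"
# Java(TM) SE Runtime Environment (build 1.7.0_79-b15)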

Note

With the latest versions of Apache Solr (4.0 or later), JDK 1.5 is no longer supported. Apache Solr 4.0+ runs on JDK 1.6 or later. Instead of going for the JDK pre-shipped with your...

The Apache Solr architecture

An Apache Solr instance can run as a single core or multicore; it follows a client-server model. A Solr core is nothing but a running instance of a Solr index along with its configuration. Earlier, Apache Solr supported only a single core, which limited consumers to running Solr for one application with a single schema and configuration file. Later, support for creating multiple cores was added. With this support, you can now run one Solr instance for multiple schemas and configurations with unified administration. You can run Solr in multicore mode with the following command:

java -Dsolr.solr.home=multicore -jar start.jar
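
The multicore setup is driven by a solr.xml file inside the Solr home directory (here, multicore/). A minimal sketch in the legacy Solr 4.x format might look like the following; the core names and instance directories are placeholders, and each instance directory is expected to contain its own conf/ folder:

<!-- multicore/solr.xml (legacy format used by the Solr 4.x multicore example) -->
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0" />
    <core name="core1" instanceDir="core1" />
  </cores>
</solr>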

Apache Solr is composed of multiple modules, some of them being separate projects in themselves. Let's understand the different components of the Apache Solr architecture. The following diagram depicts the Apache Solr conceptual architecture:

[Diagram: the Apache Solr conceptual architecture]

Apache Solr can run in a master-slave mode. The index replicator is responsible for distributing indexes across...
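
For master-slave replication, the distribution of indexes is usually configured through the ReplicationHandler in solrconfig.xml. The following is a minimal sketch, assuming a single master and polling slaves; the host name, poll interval, and configuration file list are placeholders:

<!-- Master solrconfig.xml: publish the index after every commit -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- Slave solrconfig.xml: poll the master for new index versions -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>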

Configuring Solr

Apache Solr allows extensive configuration to meet the needs of the consumer. Configuring the instance revolves around the following:

  • Defining a schema
  • Configuring Solr parameters

First, let's try to understand the Apache Solr directory structure, and then look at these steps to understand how Apache Solr is configured.
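
To make the first step concrete, a minimal schema.xml sketch is shown below; the field names and types are illustrative only and would normally be tailored to your documents:

<!-- conf/schema.xml (minimal sketch for a Solr 4.x core) -->
<schema name="example" version="1.5">
  <fields>
    <field name="id"    type="string"       indexed="true" stored="true" required="true" />
    <field name="title" type="text_general" indexed="true" stored="true" />
  </fields>
  <uniqueKey>id</uniqueKey>
  <types>
    <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory" />
        <filter class="solr.LowerCaseFilterFactory" />
      </analyzer>
    </fieldType>
  </types>
</schema>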

Understanding the Solr structure

The Apache Solr home folder mainly contains the configuration and index-related data. The following are the major folders in a Solr collection:

  • conf/ - Contains all the configuration files of Apache Solr and is mandatory. Among them, solrconfig.xml and schema.xml are the important configuration files.
  • data/ - Stores the index data generated by Solr. This is the default location for Solr to store this information; it can be overridden by modifying conf/solrconfig.xml (see the sketch after this list).
  • lib/ - Optional. If it exists, Solr will load any JARs found in this folder...
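
As noted for the data/ folder, the index location can be overridden in conf/solrconfig.xml; a typical one-line sketch looks like this, where the path is a placeholder:

<!-- conf/solrconfig.xml -->
<dataDir>${solr.data.dir:/var/solr/data}</dataDir>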

Loading data in Apache Solr

Once Apache Solr is configured, the next step is to load data into it and run queries. There are different ways to load data into Apache Solr. The following diagram depicts the most commonly used ones:

[Diagram: common ways of loading data into Apache Solr]

We have already seen the simple post tool earlier, while setting up Apache Solr. Now we are going to look at the extracting request handler.

Extracting request handler – Solr Cell

Solr Cell is one of the most powerful handlers for uploading any type of data. It is particularly useful if you wish to run Solr on a set of files or unstructured data in different formats, such as Office documents, PDFs, eBooks, emails, and plain text. In Apache Tika, text extraction is based purely on file type and content. So, if you have a PDF of scanned images containing text, Apache Tika won't be able to extract any of the text from it. In such cases, you need to use OCR-based software to bring this functionality to Solr. You can simply try this by downloading the curl utility and then...
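
As a hedged example, a PDF can be posted to the extracting request handler with curl as shown below; the core URL, literal document ID, and file name are placeholders:

curl "http://localhost:8983/solr/update/extract?literal.id=doc1&uprefix=attr_&commit=true" \
  -F "myfile=@sample.pdf"
# Tika extracts the body text and metadata; fields not in the schema are prefixed with attr_ because of uprefix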

Querying for information in Solr

We have already seen how Apache Solr effectively uses different request handlers to provide consumers with extensive ways of getting search results. Each request handler uses its own query parser, which extracts the parameters and their values from the query string and forms Lucene query objects. The standard query parser allows greater precision over search data; DisMaxQueryParser and Extended DisMaxQueryParser provide a Google-like search syntax. Depending upon which request handler is called, the query syntax changes. Let's look at some of the important terms:

  • q=<string> - The query string; supports wildcards (*:*); for example, title:Scaling*
  • fl=id,book-name - The field list that a search response will return
  • sort=author asc - Results/facets are sorted by author in ascending order
  • price:[* TO 100]&rows=10&start=5 - Looks for a price between 0 and 100; limits the result to...
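
Putting these parameters together, a query against the default example core might look like the following sketch; the core name (collection1) and field names are assumptions:

curl "http://localhost:8983/solr/collection1/select?q=title:Scaling*&fl=id,title&sort=author+asc&start=5&rows=10&wt=json"
# Returns the id and title fields of matching documents, sorted by author ascending,
# skipping the first 5 results and returning at most 10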

Description

This book is aimed at developers, designers, and architects who would like to build big data enterprise search solutions for their customers or organizations. No prior knowledge of Apache Hadoop and Apache Solr/Lucene technologies is required.

Who is this book for?

This book is aimed at developers, designers, and architects who would like to build big data enterprise search solutions for their customers or organizations. No prior knowledge of Apache Hadoop and Apache Solr/Lucene technologies is required.

What you will learn

  • Understand Apache Hadoop, its ecosystem, and Apache Solr
  • Explore industry-based architectures by designing a big data enterprise search, along with their applicability and benefits
  • Integrate Apache Solr with big data technologies such as Cassandra to enable better scalability and high availability for big data
  • Optimize the performance of your big data search platform with scaling data
  • Write MapReduce tasks to index your data
  • Configure your Hadoop instance to handle real-world big data problems
  • Work with Hadoop and Solr using real-world examples to benefit from their practical usage
  • Use Apache Solr as a NoSQL database

Product Details

Publication date : Apr 27, 2015
Length : 166 pages
Edition : 1st
Language : English
ISBN-13 : 9781783553396
Vendor : Apache

Packt Subscriptions

See our plans and pricing

AU$24.99 billed monthly
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Simple pricing, no contract

AU$249.99 billed annually
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or Video every month to keep
  • PLUS own as many other DRM-free eBooks or Videos as you like for just AU$5 each
  • Exclusive print discounts

AU$349.99 billed in 18 months
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or Video every month to keep
  • PLUS own as many other DRM-free eBooks or Videos as you like for just AU$5 each
  • Exclusive print discounts

Frequently bought together


  • Scaling Big Data with Hadoop and Solr, Second Edition - AU$67.99
  • Apache Solr Search Patterns - AU$75.99
  • Solr Cookbook - Third Edition - AU$75.99
Total: AU$219.97

Table of Contents

7 Chapters
1. Processing Big Data Using Hadoop and MapReduce
2. Understanding Apache Solr
3. Enabling Distributed Search using Apache Solr
4. Big Data Search Using Hadoop and Its Ecosystem
5. Scaling Search Performance
A. Use Cases for Big Data Search
Index

Customer reviews

Rating distribution
3 out of 5 stars (4 Ratings)
5 star 0%
4 star 25%
3 star 50%
2 star 25%
1 star 0%
Winston May 28, 2015
4 out of 5 stars
Great Book....Big data is all the rave these days. As technologists we are faced with ever increasing ways to make sense of our data and organize it in a way that makes best business and personal use. The author does a good job of explaining the uses of Hadoop and Solr. I just wish there was more to read but what was offered has me yearning for more in the next edition hopefully.
Amazon Verified review
David Jun 10, 2015
3 out of 5 stars
Good book but requires that you clearly understand the targeted audience. The book is clear and is a must have for administrator of Hadoop and Solr. It explains how to configure correctly and scale such infrastructure. It also address the most common issues and how to deal with them. As such, it will probably save a lot of time and effort to Hadoop/Solr administrators.It also requires the reader to already have a good knowledge of Hadoop which make sense for a booking called scaling ;). If you are new to Hadoop, you should probably start learning on the Internet or go to a book that will introduce all the concepts because at the exception of Chapter 1 which refresh your memory, you will have to know what are the elements mentioned by the author.
Amazon Verified review
PJG May 21, 2015
3 out of 5 stars
This book is a good to Solr and how it can be used to tackle distributed search scenarios. The first chapter is an introduction to the Hadoop stack and it gives a good description and overview of HDFS and fundamental MapReduce concepts.Chapter two gives an overview of the architecture of Apache Solr, and describes how you can install and configure it. The third chapter describes the problems which Solr can solve on its own and identifies the benefits of distributed search. It introduces different data processing work flows, and describes the advantages and disadvantages of each work flow. This chapter highlights one of the downsides of the book, namely that it reads like a very theoretical guide, rather than providing hands-on and practical advice.The fourth chapter describes how to integrate Hadoop, Solr, and HBase by using Lily. The chapter ends by describing how to divide the Solr index into multiple shards by using SolrCloud and ZooKeeper.Finally, the last chapter focuses upon optimising the performance of Apache Solr, and this is where the advice is very practical and applicable.Overall, the book contains good material but ideally there would be more on applying the theory covered in practice. At only 166 pages, it feels rather light on content, which is a shame as it's a good quality book overall.
Amazon Verified review
J. Depeau Nov 08, 2016
2 out of 5 stars
I bought this book as I needed to learn more about Solr for work, and this looked like a really comprehensive and pretty technical guide. While there is definitely a lot of information to be found in this book, the hard part is actually weeding through everything to get it. This book desperately needs an editor! It's extremely hard to read - the writing is poor and unclear, and it's just generally littered with errors and mistakes. It's a shame, as I believe the author knows the topic and has a lot of knowledge to pass on. But for the money I spent on this book I expect something which is clear, easy to read and understand, and which has been professionally edited.
Amazon Verified review

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online; this includes exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use for owning content.

How can I cancel my subscription?

To cancel your subscription, simply go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there, you will see the 'cancel subscription' button in the grey box containing your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle - a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy DRM-free books, the same way that you would pay for a book. Your credits can be found on the subscription homepage - subscription.packtpub.com - by clicking on the 'My Library' dropdown and selecting 'Credits'.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need to have a paid subscription or an active trial in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready.

We created Early Access as a means of giving you the information you need, as soon as it's available. As we go through the process of developing a course, 99% of it can be ready, but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.