Splunk 7.x Quick Start Guide

Introduction to Splunk

Welcome to the first chapter of the Splunk 7.x Quick Start Guide! This chapter introduces Splunk to the newcomer and progressively builds an understanding of why Splunk is so popular, covering the powerful capabilities and solutions it offers for collecting and analyzing machine data from a wide variety of devices and environments. This chapter also includes a high-level overview of how Splunk works, to serve as a foundation for digging into more details in the chapters to come.

The topics that are covered in this chapter include the following:

  • Understanding what Splunk is and what problems it solves
  • Installing a free version of Splunk Enterprise
  • Becoming familiar with the major components of a Splunk solution and their functions
  • Becoming aware of the major processing tiers of a Splunk deployment—data input, parsing, indexing, and search
  • Learning about the four key Splunk fields for every event—_time, host, source, and sourcetype—and why they're important
  • Becoming aware of the Splunk community and all the information and resources available to learn more about configuring and using Splunk

What is Splunk?

Okay—so what is Splunk, anyway? How do you explain this product to your peers, friends, and family in a way that is easy to comprehend without watering down its awesome capabilities? Here's how I explain it: I start with a simple introduction and then work up through increasingly powerful uses, until I notice their eyes starting to glaze over, at which point I stop and summarize again: "It's like Google for all kinds of machine data!"

Every company will have tens, hundreds, or maybe thousands of application and web servers, databases, and network devices such as switches, routers, and firewalls, as well as all kinds of sensors, and so on—and all of these create log files or data streams that record their activities and statuses over time. Now, imagine needing to troubleshoot a problem that might be caused by any one of several parts of a system, and having to log into each of these machines one at a time, manually dig through its log file looking for clues, then log into the next, and so on—you can see how tedious and time-consuming this can become. Or maybe you want to monitor critical processes to make sure things are running well—how do you do that for a lot of machines?

Splunk is a software platform that collects and stores all this machine data in one place. It makes it as easy to search through and investigate that data as using Google. Basically, it's Google for log files! Beyond troubleshooting, you can use this search capability to build reports and dashboards to monitor performance, reliability, or other metrics across a whole collection of related servers and devices, and even create alerts to warn you by text or email when something is going wrong. It's also used to detect security threats, and since you have all this data in one place, you can do event correlation across devices and apply machine learning to it for the purposes of anomaly detection, user behavior analytics, and even predictive analytics to identify potential problems before they happen.

Splunk has a media kit brochure that covers the spectrum of ways Splunk helps companies extract value from their machine data, which can be found at: https://www.splunk.com/en_us/newsroom/media-kit.html.

The following diagram illustrates the spectrum of data you can collect with Splunk, and captures the essence of what Splunk does:

Splunk data sources and use cases

Splunk products

There are several types and licensing models of Splunk available to suit the needs of its customer base.

Splunk Enterprise is designed for on-premise deployments; it can be scaled to support an unlimited number of users and ingestion volumes by adding the necessary number and types of Splunk functional software components (indexers and search heads) on customer-supplied servers. The cost of a Splunk Enterprise license is based on daily ingestion volume. A wide range of applications that add value to Splunk Enterprise, written by both Splunk and the user community, is available for free from the Splunkbase website. Splunk also sells several sophisticated premium solution applications, such as IT Service Intelligence, Enterprise Security, and User Behavior Analytics, on an individual license basis, and a Machine Learning Toolkit is available for free from Splunkbase if you want to roll your own ML solutions.

Splunk Cloud is a cloud-based software as a service (SaaS) version of Splunk Enterprise; it is also licensed based on daily ingestion volume, and offers all the functionality of the on-premise Splunk Enterprise product. The core Splunk infrastructure is provided and managed by Splunk, while data inputs, approved apps, reports, dashboards, alerts, and so on are managed by the customer.

Splunk Light is designed to be a small-scale solution. It allows up to 20 GB/day of ingestion volume, five users, and reports/dashboards/alerts, but it does not support distributed deployments or Splunkbase and premium solution apps (except for an app for Amazon Web Services (AWS)), and it has several other limitations.

Splunk Free is a free version of the core Splunk Enterprise product that has limits on users (one user), ingestion volume (500 MB/day), and other features; also, it cannot be used for a deployed/clustered configuration.

You can compare the available features of each product here: https://www.splunk.com/en_us/software/features-comparison-chart.html.

The history of Splunk

Splunk was designed from the beginning to allow people who were troubleshooting IT problems to search through their log files as easily as if they were using a search engine like Google. As a product, Splunk was conceived by founders Rob Das and Erik Swan between 2002 and 2004 after asking people at 60-70 companies a simple question: how do you find problems in your infrastructure today? The answer was consistently that they went through log files, and when asked what tools they used, they tended to answer that they wrote their own scripts and did everything manually. They also commented that they used Google to search for posts from other people who had solved the same problem. Rob and Erik thought: why not create a tool that allows people to search through log files as easily as they search the web using something like Google? So they built a prototype to demo at Linux World with the tagline of "Google for log files", and since they offered a free download, people tried it, liked it, and told their friends—from there, Splunk spread virally from person to person and company to company.

The name Splunk was derived from asking people, what is it like to troubleshoot in your environment? One answer was that it was like digging through caves with headlamps and helmets on, and crawling through the muck trying to find the problem, and it took forever. A common term for going through caves is spelunking, so they decided to call the product Splunk. From that beginning, Splunk has evolved into the powerful big data and business analytics tool that the founders envisioned from the start.

You can watch Rob and Erik tell their story here: https://www.splunk.com/view/SP-CAAAGBY.

Installing Splunk for free

It will be very helpful while you're learning about Splunk to have an environment where you can navigate through directories, inspect and modify configuration files, and experiment freely with Splunk menu options and features without jeopardizing a Splunk installation in your enterprise, even if you have the access rights to do so. You can install a free version of Splunk Enterprise, which provides this flexibility, on your laptop or a personal server from this link: https://www.splunk.com/en_us/download/splunk-enterprise.html.

Complete the form to create a Splunk account (or log in if you already have one), click to agree to the Splunk Software License Agreement, and click Create your account. From the downloads page, click the Windows, Linux, or macOS tabs at the top and select the version appropriate for your platform. You will need to agree to the software license again, then click Start Your Download Now, and save the file to disk. The next web page that appears provides links to installation videos and documents for your chosen environment; the installation process is very straightforward, so you shouldn't have any trouble.

If you choose to Launch browser with Splunk Enterprise when the installation finishes, you will log in with the default username of admin and password of changeme. Change the password upon first login.
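
If you installed on Linux, you can also manage the instance from the command line with the splunk CLI. The following is a minimal sketch, assuming the default installation directory of /opt/splunk and the default admin/changeme credentials mentioned previously:

/opt/splunk/bin/splunk start --accept-license                                          # start Splunk and accept the license on first run
/opt/splunk/bin/splunk status                                                          # confirm that splunkd is running
/opt/splunk/bin/splunk edit user admin -password <new-password> -auth admin:changeme   # change the default password from the CLI

The first command starts Splunk and accepts the license, the second verifies the service is running, and the third changes the default admin password without going through Splunk Web.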

And as they say at the end of the installation videos—dig in and start exploring!

Splunk components

One of the first and most important things you need to learn about Splunk in order to work with it effectively is what the functional components are and how they work together. Here is a list:

  • Universal forwarder
  • Indexer and indexer clusters
  • Search head and search head clusters
  • Deployment server
  • Deployer
  • Cluster master
  • License master
  • Heavy forwarder

Universal forwarders, indexers, and search heads constitute the majority of Splunk functionality; the other components provide supporting roles for larger clustered/distributed environments. We'll summarize each of these here and dig into more details in chapters to come.

In very small installations, you can install Splunk Enterprise on a single server, which will provide all the indexing and search functionality in one instance. In most cases, however, you will need to scale Splunk up to handle higher data volumes and more users, so the functions that we just listed will be broken out onto multiple servers to support scalability and provide redundancy for higher reliability.

The following diagram shows all the Splunk components involved in a larger Splunk deployment and how they fit together:

Splunk components in a distributed deployment

The universal forwarder (UF) is a free small-footprint version of Splunk Enterprise that is installed on each application, web, or other type of server (which may be running various flavors of Linux or Windows operating systems) to collect data from specified log files and forward this data to Splunk for indexing (storage). In a large Splunk deployment, you may have hundreds or thousands of forwarders that consume and forward data for indexing.
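
To give a sense of how this forwarding is configured, the following is a minimal sketch of an outputs.conf stanza on a forwarder; the group name, hostnames, and port are hypothetical (9997 is simply the conventional Splunk receiving port), and forwarder configuration is covered in detail in later chapters:

# outputs.conf on a universal forwarder: where to send the collected data
[tcpout:my_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997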

An indexer is the Splunk component that creates and manages indexes, which are where machine data is stored. Indexers perform two main functions: parsing data received from forwarders or other data sources and storing it in indexes, and searching the indexed data and returning it in response to search requests.

An indexing cluster is a group of indexers that have been configured to work together to handle higher volumes of both incoming data to be indexed and search requests to be serviced, as well as providing redundancy by keeping duplicate copies of indexed data spread across the cluster members. Be aware that index cluster members can be called both peer nodes and search peers in Splunk documentation, depending on context, which can be a bit confusing until you get used to the nomenclature.

A search head is an instance of Splunk Enterprise that handles search management functions. This includes providing a web-based user interface called Splunk Web, from which users issue search requests in what is called Search Processing Language (SPL). Search requests initiated by a user (or a report or dashboard) are sent to one or more indexers to locate and return the requested data; the search head then formats the returned data for presentation to the user.

A search head cluster is a group of multiple search heads configured to work together to service larger numbers of users and provide redundancy for servicing search requests. Search head cluster members are called cluster members in Splunk documentation.

The following is an example of executing a simple search in Splunk Web. The SPL specifies a search of the _internal index, which is where Splunk saves data about its internal operations, and pipes the returned results to the stats command to provide a count of the events for Today by their source and sourcetype (we'll discuss all this a bit further along in this chapter):

index=_internal | stats count by source, sourcetype

The results of this simple search look like the following screenshot:

Simple search in Splunk Web

A deployment server is a Splunk Enterprise instance that acts as a centralized configuration manager for a number of Splunk components, though in practice it is mostly used to manage UFs. For instance, all of the hundreds or thousands of UFs that may be installed on servers in an enterprise environment can periodically "phone home" to the deployment server to pick up new configuration information, making the task of managing all these UFs much easier.
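
As an illustration of the "phone home" mechanism, a forwarder is pointed at a deployment server with a small deploymentclient.conf stanza like the following sketch (the hostname is hypothetical; 8089 is the default Splunk management port):

# deploymentclient.conf on a universal forwarder: which deployment server to check in with
[target-broker:deploymentServer]
targetUri = deploy.example.com:8089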

A deployer is a Splunk Enterprise instance that is used to distribute Splunk apps and certain other configuration updates to search head cluster members.

A cluster master is a Splunk Enterprise instance that coordinates the activities of an indexing cluster. In an index cluster, data is distributed across multiple indexer instances to provide redundancy; the cluster master takes care of both controlling how that data is distributed, and helping search heads know where to direct their search requests to find specific sets of data. A cluster master is also called a master node in Splunk documentation.

A license master is a single Splunk Enterprise instance that provides a licensing service for the multiple instances of Splunk that have been deployed in a distributed environment. If you have a standalone instance of Splunk Enterprise, or a standalone indexer, you can install a license locally on that machine; otherwise, you will need a license master.

A heavy forwarder is an instance of Splunk Enterprise that can receive data from other forwarders or data sources and parse, index, and/or send data to another Splunk instance for indexing. A heavy forwarder has some features disabled so that it doesn't consume as many resources as an indexer, but retains most of an indexer's capability except that it cannot service search requests. A heavy forwarder is often used to parse incoming data and route it to various destinations for indexing depending upon the source and type of data received, and/or to speed up processing and reduce the processing load on indexers, but its use is optional.

Splunk Enterprise also has a monitoring tool function called the monitoring console, which lets you view detailed topology and performance information about your entire distributed deployment from one interface. This function can be configured to run on one of the other Splunk components such as the cluster master or deployer if resources allow.

If this seems like a lot of functionality to absorb when you're just getting started, don't worry—you only really need to focus on getting comfortable with the data input, indexing, and search functions for now; the other pieces play smaller roles in day-to-day Splunk administration activities.

Splunk processing tiers

It is also helpful to become familiar with the Splunk processing tiers and what is called the data pipeline, which consists of the transformation functions that occur as data traverses Splunk software, so that you can better understand how the various components are configured to work together.

As data that Splunk receives from forwarders or other data sources moves through the data pipeline, Splunk transforms it into searchable events that get indexed. The data pipeline has four segments, as illustrated in the following diagram:

Splunk data pipeline

In the Data Input segment, Splunk consumes the raw data stream from forwarders or other data sources, breaks it into 10K blocks, and annotates each block with metadata such as host, source, sourcetype, and the index the data is destined for per the settings in a configuration file.
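
The configuration file referred to here is typically inputs.conf. The following is a minimal sketch of a monitor stanza; the file path, index, and sourcetype values are hypothetical examples, and getting data in is covered in a later chapter:

# inputs.conf: monitor a log file and tag the data with metadata as it is consumed
[monitor:///var/log/myapp/app.log]
index = myapp                # the index this data is destined for
sourcetype = myapp:log       # tells the parsing segment how to interpret these events
# host defaults to the name of the machine the forwarder runs on;
# source defaults to the path of the monitored file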

In the Parsing segment, Splunk breaks data in these blocks into individual events, and identifies, parses, and assigns timestamps to each event. The metadata from the block of data the event came from is assigned to each event, and any required transformations to the event data or metadata are completed.
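
How Splunk breaks and timestamps events in this segment is controlled per sourcetype, typically in props.conf. The following is a minimal sketch, using a hypothetical sourcetype and a timestamp format similar to the web log events shown later in this chapter:

# props.conf: parsing rules for a hypothetical sourcetype
[myapp:log]
SHOULD_LINEMERGE = false                  # treat each line as its own event
TIME_PREFIX = \[                          # the timestamp follows an opening bracket
TIME_FORMAT = %d/%b/%Y:%H:%M:%S.%3N %z    # for example, 11/Jun/2018:21:12:35.441 -0400
MAX_TIMESTAMP_LOOKAHEAD = 40              # only scan this far into the event for the timestamp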

A Splunk event is a single piece of data in Splunk. Typically, an event represents a single line entry in a log file or data stream, but an event can contain multiple lines such as from a Java stacktrace. We'll cover events in more detail in the next section.

In the Indexing phase, Splunk writes the individual events to the specified index on disk. It writes both compressed raw data and index files; index files contain pointers to the raw data as well as some metadata to facilitate and speed up the search process.

In a distributed deployment, parsing and indexing are usually referenced together as the indexing process, as these functions are both handled on an indexer. This is okay at a high level, but if you need to inspect the processing of data more closely or determine how to scale and allocate Splunk components, you may need to consider the two segments individually.

The Search segment manages all the aspects of how a user searches, views, and uses the indexed data. It parses and interprets the SPL search commands, requests and receives data from indexers, and formats and presents the results to the user. The search function in Splunk also stores and uses user-created knowledge objects such as event types, tags, macros, field extractions, and so on.

In a distributed deployment, the search function is handled on a search head; in a single-instance Splunk Enterprise server, the input, parsing, indexing, and search functions are all accomplished on that single instance, as you might expect. We will cover all these functions, and how they're configured and interact, in much more detail in the next several chapters.

Splunk events

As we discovered in the previous section, Splunk creates events from each entry in a log file or data stream. You can search for specific types of events, within specified time frames, using SPL in Splunk Web. For example, let's say you create a search on the instance of Splunk on your laptop using the SPL command:

index=_internal sourcetype=splunk_web_access

When you press Enter, Splunk will return a number of events for Today, or any other time frame you've selected in the Time Range drop-down. These events come from Splunk's internal web server, and reflect the format and fields that are typical of a web log:

Splunk events from a simple search

We'll cover all of the features and details of using Splunk Web in Chapter 6, Searching with Splunk, so for now let's focus on some of the most important and useful fields in the events themselves.

The following is a screenshot of a typical Splunk event:

A typical Splunk event

Regardless of the data source or type, Splunk always tags each event with a number of default fields; some of these come from the metadata mentioned in the discussion about the data pipeline in the previous section, and others are added at index time. There are four of these fields that you will want to become familiar with right away, as they are used extensively as filter arguments in your SPL commands to return the events of interest. In the preceding screenshot, these key fields have been circled in red; they are as follows:

  • _time (timestamp)
  • host
  • source
  • sourcetype

The date and time reflected in the Time column is the timestamp assigned to the event, which Splunk stores in a _time field. When an event contains a timestamp, as this one does, that is, [11/Jun/2018:21:12:35.441 -0400], Splunk will parse that timestamp and save it in the _time field as an epoch value (number of seconds since 00:00:00 coordinated universal time (UTC), Thursday, 1 January 1970). If an event does not contain a timestamp, Splunk will assign the time the event was indexed to the _time field. Splunk displays this _time value in the date-time format, as seen previously, corrected for the time zone specified in the Splunk Web account settings—we'll cover this in more detail in the chapter on Splunk search.
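
If you want to see both representations side by side, you can have SPL format the underlying epoch value yourself. The following is a small sketch using the eval command's strftime function (any index will do; _internal is used here simply because it always has data):

index=_internal | head 5 | eval epoch_time=_time, readable_time=strftime(_time, "%Y-%m-%d %H:%M:%S %z") | table epoch_time readable_time

The epoch_time column shows the raw number of seconds that Splunk actually stores, while readable_time shows the same value rendered as a date and time.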

The host field is the name or IP address of the physical device from which an event originates. You can use this field to create filters to return events from a specific host. In the preceding example, the host is a Splunk server called robotdev.

The source field identifies where the event originated. In the case of data obtained from log files, the source consists of the full pathname of the file; in the case of a network-based source, this field contains the protocol and port, such as UDP: 514. In this example, the event came from a Splunk log file in the /opt/splunk/var/log/splunk directory called web_access.log.

The sourcetype field identifies the data structure of an event (what fields the event contains, where they are, and how they're formatted), and determines how Splunk parses the data into specified fields during the indexing process. Splunk Enterprise comes with a large set of predefined source types for known data source types, and will assign the correct sourcetype to your data if it recognizes the format. You can use the sourcetype field in searches to find all the data of a certain type, regardless of the source. In the preceding example, the sourcetype is a Splunk-specific type called splunk_web_access.

The other important field, which is not displayed in search results but is essential for writing SPL search commands, is the index, which, as you can see, was specified in the SPL command used to return the example events previously. Splunk has four internal indexes: _audit, _internal, _introspection, and _telemetry; you can view the data in these to get familiar with events in the short term. You will create and use custom indexes to store data from your company's host and device logs, and specify those indexes in your search strings.
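
To tie these fields together, the following is a sketch of a search that filters on all of them at once; the index, host, and source values are hypothetical, while access_combined is one of Splunk's predefined source types for web access logs:

index=web host=webserver01 sourcetype=access_combined source="/var/log/httpd/access_log" | stats count by host, sourcetype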

Splunk information resources

The Splunk website has a landing page with links to a vast array of additional information on configuring and using Splunk: https://www.splunk.com/en_us/community.html.

Some of the more useful of these sources include the following:

Splunk Documentation is, as you might expect, an extensive collection of documentation and tutorials that covers all aspects of all of Splunk's products. On the Splunk Enterprise documentation landing page, for example, you'll find tabs for Get started, Search and report, Administer, and so on, with links to manuals for related topics. The Admin Manual and the Getting Data In manual under the Administer tab are particularly useful for administering Splunk.

Splunk Answers is a collection of questions posted by Splunk users and administrators, with answers provided by experts in the field who have solved those particular problems. If you use a search engine such as Google to look for an answer to an issue while working with Splunk, Splunk Answers or Splunk Docs is usually where you'll end up.

Splunk Education offers a number of Learning Paths for users, Splunk administrators, architects, developers, and so on, which I highly recommend. While this Quick Start Guide is intended to introduce and guide you through the most important and useful features of Splunk from one source, and to provide some practical experience and examples of common configurations and challenges you'll encounter, you will want to pursue a formal education path if you wish to obtain a deeper understanding of Splunk and achieve the various levels of Splunk certification, such as Certified User, Power User, Admin, Architect, or Developer.

I also recommend that you download, print, and peruse the Splunk Quick Reference Guide: https://www.splunk.com/pdfs/solution-guides/splunk-quick-reference-guide.pdf.

This is an excellent resource both to accompany the chapters that follow and to keep by your side as you perform day-to-day Splunk administration tasks, as it covers the most significant concepts and information about Splunk as well as a great deal of reference material that is helpful for performing searches using the Search Processing Language.

Summary

That wraps up Chapter 1! We've covered what Splunk is and why it's great, installed a free copy of Splunk on your laptop so that you can experiment freely with its files and features, and, most importantly, started building a basic mental model of how Splunk works by introducing the key concepts around getting data in, parsed, and indexed. We also performed a basic search and got familiar with events and their four key fields. Finally, we covered the resources that are available to expand your knowledge of Splunk far beyond what can be covered in a Quick Start guide.

In the next chapter, we'll go into the details of designing and implementing a Splunk Enterprise environment from scratch!


Key benefits

  • Understand the various components of Splunk and how they work together to provide a powerful big data analytics solution
  • Collect and index data from a wide variety of common machine data sources
  • Design searches, reports, and dashboard visualizations to provide business data insights

Description

Splunk is a leading platform and solution for collecting, searching, and extracting value from ever-increasing amounts of big data - and big data is eating the world! This book covers all the crucial Splunk topics and gives you the information and examples to get the immediate job done. You will find enough insights to support further research and to use Splunk to suit any business environment or situation. Splunk 7.x Quick Start Guide gives you a thorough understanding of how Splunk works. You will learn about all the critical tasks for architecting, implementing, administering, and utilizing Splunk Enterprise to collect, store, retrieve, format, analyze, and visualize machine data. You will find step-by-step examples based on real-world experience and practical use cases that are applicable to all Splunk environments. There is a careful balance between adequate coverage of all the critical topics and short but relevant deep-dives into the configuration options and steps needed to carry out the day-to-day tasks that matter. By the end of the book, you will be a confident and proficient Splunk architect and administrator.

Who is this book for?

This book is intended for experienced IT personnel who are just getting started working with Splunk and want to quickly become proficient with its usage. Data analysts who need to leverage Splunk to extract critical business insights from application logs and other machine data sources will also benefit from this book.

What you will learn

  • Design and implement a complex Splunk Enterprise solution
  • Configure your Splunk environment to get machine data in and indexed
  • Build searches to get and format data for analysis and visualization
  • Build reports, dashboards, and alerts to deliver critical insights
  • Create knowledge objects to enhance the value of your data
  • Install Splunk apps to provide focused views into key technologies
  • Monitor, troubleshoot, and manage your Splunk environment
Product Details

Publication date: Nov 29, 2018
Length: 298 pages
Edition: 1st
Language: English
ISBN-13: 9781789531091


Table of Contents

11 Chapters

  1. Introduction to Splunk
  2. Architecting Splunk
  3. Installing and Configuring Splunk
  4. Getting Data into Splunk
  5. Administering Splunk Apps and Users
  6. Searching with Splunk
  7. Splunk Knowledge Objects
  8. Splunk Reports, Dashboards, and Alerts
  9. Splunk Applications
  10. Advanced Splunk
  11. Other Books You May Enjoy

