Data Analytics Using Splunk 9.x
A practical guide to implementing Splunk's features for performing data analysis at scale

Product type: Paperback
Published in: Jan 2023
Publisher: Packt
ISBN-13: 9781803249414
Length: 336 pages
Edition: 1st Edition
Author: Dr. Nadine Shillingford
Table of Contents

Preface
Part 1: Getting Started with Splunk
  Chapter 1: Introduction to Splunk and its Core Components
  Chapter 2: Setting Up the Splunk Environment
  Chapter 3: Onboarding and Normalizing Data
Part 2: Visualizing Data with Splunk
  Chapter 4: Introduction to SPL
  Chapter 5: Reporting Commands, Lookups, and Macros
  Chapter 6: Creating Tables and Charts Using SPL
  Chapter 7: Creating Dynamic Dashboards
Part 3: Advanced Topics in Splunk
  Chapter 8: Licensing, Indexing, and Buckets
  Chapter 9: Clustering and Advanced Administration
  Chapter 10: Data Models, Acceleration, and Other Ways to Improve Performance
  Chapter 11: Multisite Splunk Deployments and Federated Search
  Chapter 12: Container Management
Index
Other Books You May Enjoy

Exploring Splunk components

A Splunk deployment consists of three key components:

  • Forwarders
  • Indexers
  • Search heads

Forwarders are Splunk's data collection components. A forwarder runs on the source of the data or on an intermediate device; its configuration determines which data is collected and passed on to the indexers. There are two types of forwarders – universal and heavy forwarders. Universal forwarders merely pass the data on to the indexers, while heavy forwarders perform additional tasks, such as parsing and field extraction.

The indexer is the component responsible for indexing incoming data and searching the indexed data. Indexers should have good input/output capacity, as they do a great deal of reading from and writing to disk. Multiple indexers can be combined into clusters to improve data availability, data fidelity, data recovery, disaster recovery, and search affinity. Users access the indexed data through search heads, running search queries written in a language called Search Processing Language (SPL).

Search heads coordinate searches across the indexers. Like indexers, multiple search heads can be combined to form search head clusters. There are other roles that devices can play in a Splunk deployment. These include deployment servers, deployers, license masters, and cluster masters. The Splunk forwarders send data to the indexers. It’s a one-way transfer of data. The search head interacts with the indexers by sending search requests in the form of bundles. The indexers find the data that fits the search criteria and send the results back to the search heads. Figure 1.3 shows how the three main components interact in a Splunk deployment:

Figure 1.3 – The major Splunk components


We will discuss the different Splunk components in detail in the following sections.

Forwarders

A Splunk deployment can include tens of thousands of universal forwarders. As mentioned in the Exploring Splunk components section, there are two kinds of forwarders – lightweight universal forwarders and heavy forwarders. Both universal and heavy forwarders perform the following tasks:

  • Assign metadata to incoming data (source, sourcetype, and host)
  • Buffer and compress data
  • Run local scripted inputs
  • Break the data into 64 KB blocks

The universal forwarder is a low-footprint process used to forward raw, unparsed data to the indexer layer. However, if you need to filter the data before it arrives at the indexer layer, it is best to use a heavy forwarder. In a single-instance Splunk deployment, the forwarding, indexing, and search functions all run on the same device.
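As a sketch of the kind of filtering a heavy forwarder can perform, the following props.conf and transforms.conf stanzas route events containing the string DEBUG to Splunk's null queue so they are discarded before indexing. The sourcetype and transform names here are hypothetical, chosen for illustration:

```
# props.conf – hypothetical sourcetype; applies the transform below
[my_app_logs]
TRANSFORMS-drop_debug = drop_debug_events

# transforms.conf – events matching the regex are sent to nullQueue
# (discarded) instead of the indexing queue
[drop_debug_events]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue
```

This kind of filtering must happen at a parsing layer (a heavy forwarder or the indexer itself), which is why a universal forwarder cannot do it.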

The universal forwarder can be installed on multiple platforms, including Windows (32- and 64-bit), Linux (64-bit, ARM, s390x, and PPCLE), macOS (Intel and M1), 64-bit FreeBSD, Solaris (Sparc and 64-bit), and AIX. Heavy forwarders run on the same platforms as Splunk Enterprise. You install a universal forwarder using the universal forwarder install file, while heavy forwarders are installed using the regular Splunk Enterprise install file.

Both universal and heavy forwarders collect data using inputs. A Splunk administrator configures inputs using CLI commands, by editing a configuration file called inputs.conf, or through Splunk Web (Settings | Add Data). A Splunk forwarder can be configured to accept inputs from sources such as the following:

  • Files and directories: Monitor new data coming into files and directories. Splunk also has an upload or one-shot option for uploading single files.
  • Network events: Monitor TCP and UDP ports, syslog feeds, and SNMP events.
  • Windows sources: Monitor Windows Event Logs, Perfmon, WMI, registries, and Active Directory.
  • Other sources: Monitor First In, First Out (FIFO) queues, changes to filesystems, and receive data from APIs through scripted inputs.

Important note

The HTTP Event Collector (HEC) allows users to send data events over HTTP and HTTPS using a token-based authentication model. This does not require a Splunk forwarder.
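As a rough sketch, an HEC input is defined on the receiving Splunk instance with an http:// stanza in inputs.conf. The stanza name, token value, and index below are placeholders for illustration; in practice, Splunk generates the token when the input is created:

```
# inputs.conf – hypothetical HEC input; the token shown is a placeholder
[http://example_hec]
disabled = 0
token = 00000000-0000-0000-0000-000000000000
index = main
sourcetype = _json
```

Clients then send events to the HEC endpoint over HTTPS, presenting this token in an Authorization header.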

The following code shows a sample of the inputs.conf file from the Splunk add-on for Microsoft Windows:

###### OS Logs ######
[WinEventLog://Application]
disabled = 1

###### DHCP ######
[monitor://$WINDIR\System32\DHCP]
disabled = 1
whitelist = DhcpSrvLog*

[powershell://generate_windows_update_logs]
script = . "$SplunkHome\etc\apps\Splunk_TA_windows\bin\powershell\generate_windows_update_logs.ps1"
schedule = 0 */24 * * *

[script://.\bin\win_listening_ports.bat]
disabled = 1
## Run once per hour
interval = 3600
sourcetype = Script:ListeningPorts

Data from the forwarders are sent to the indexers. We will explore indexers in the next section.
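The destination indexers are configured on the forwarder side in outputs.conf. The following is a minimal sketch; the group name and hostnames are assumptions for illustration, and 9997 is the conventional Splunk receiving port:

```
# outputs.conf – hypothetical forwarder-to-indexer configuration
[tcpout]
defaultGroup = primary_indexers

# Data is automatically load-balanced across the listed indexers
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

Listing multiple indexers in a single output group is how a forwarder spreads its data across an indexer cluster.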

Indexers

Splunk forwarders forward data to Splunk indexers. Think of the indexer as the brain of the Splunk deployment. It is an input/output-heavy component that not only transforms and stores data but also searches that data in response to queries passed down by the search heads. Indexers transform incoming data into Splunk events, which are then stored in an index, a repository for Splunk data. There are two types of indexes – event indexes and metrics indexes.

Splunk indexes time series data by either extracting timestamps from the data or assigning the current time at indexing. A Splunk index is a collection of directories and subdirectories on the filesystem; these subdirectories are referred to as buckets. Data arriving at an indexer passes through pipelines and queues. A pipeline is a thread running on the indexer, while a queue is a memory buffer that holds data between pipelines.
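To make the directory layout concrete, a custom event index might be defined in indexes.conf roughly as follows. The index name and size limit are assumptions for illustration; $SPLUNK_DB resolves to the Splunk data directory:

```
# indexes.conf – hypothetical event index definition
# homePath holds hot/warm buckets, coldPath holds cold buckets,
# and thawedPath holds buckets restored from archive
[web_logs]
homePath = $SPLUNK_DB/web_logs/db
coldPath = $SPLUNK_DB/web_logs/colddb
thawedPath = $SPLUNK_DB/web_logs/thaweddb
maxTotalDataSizeMB = 500000
```

The bucket directories under these paths are where events move through their lifecycle, from hot to warm to cold and eventually to frozen.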

We access data indexed on the indexers using search heads. We will look at search heads in the next section.

Search heads

A search head is a Splunk instance that allows users to search events indexed on the indexers (also referred to as search peers). The average user interacts only with the search head of a Splunk deployment, accessing it through a browser interface called Splunk Web. Users access data by running search queries in the Splunk search bar or by viewing dashboards, reports, and other visualizations.

Figure 1.4 is an example of a Splunk bar graph:

Figure 1.4 – Sample Splunk bar graph


Search heads do not index data. Rather, search heads distribute searches to the indexers. The search head parses search queries and decides what accompanying files, called knowledge objects, need to be sent to the indexers. Why is this important? Some files may exist only on the search head. By combining all these files into a knowledge bundle, the search head equips the indexer with all the information (configuration files and assets) it needs to perform the search. It’s almost like the search head offloads its work to the indexers and says, “here are the files that you need to get the work done.” Sometimes, the knowledge bundle contains almost all the search head’s apps. The indexers search their indexes for the data that match the search query and send the results back to the search heads. The search heads then merge the results and present them to the user.

Search queries are written with Splunk’s SPL. Figure 1.5 shows a screenshot of an SPL query typed in the Splunk search bar:

Figure 1.5 – An SPL query

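To give a flavor of SPL, a query like the following counts failed web requests by HTTP status code. The index and sourcetype names are hypothetical, for illustration only:

```
index=web_logs sourcetype=access_combined status>=400
| stats count by status
| sort - count
```

Each pipe passes the results of one command to the next, much like a Unix shell pipeline.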

In the next section, we’ll talk about the BOTS Dataset v1, which we will use throughout this book.