Mastering Kibana 6.x: Visualize your Elastic Stack data with histograms, maps, charts, and graphs


Revising the ELK Stack

Although this book is about Kibana, Kibana makes little sense in isolation from the complete Elastic Stack (ELK Stack): Elasticsearch, Kibana, Logstash, and Beats. In this chapter, you are going to learn the basic concepts of each of these tools, how to install them, and their use cases. We cannot use Kibana to its full strength unless we know how to collect the right data, filter it, and store it in a format that Kibana can easily consume.

Elasticsearch is a search engine built on top of Apache Lucene, mainly used for storing schemaless data and searching it quickly. Logstash is a data pipeline that can take data from practically any source and send it to almost any destination; we can also filter that data as per our requirements. Beats are single-purpose data shippers that run on individual servers and send data to a Logstash server or directly to an Elasticsearch server. Finally, Kibana uses the data stored in Elasticsearch to create beautiful dashboards using different types of visualization options, such as graphs, charts, histograms, word tags, and data tables.

In this chapter, we will be covering the following topics:

  • What is ELK Stack?
  • The installation of Elasticsearch, Logstash, Kibana, and Beats
  • ELK use cases

What is ELK Stack?

ELK Stack is a stack made up of three open source tools: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search engine developed on top of Apache Lucene. Logstash is basically used for data pipelining, where we can get data from any data source as input, transform it if required, and send it to any destination as output. In general, we use Logstash to push data into Elasticsearch. Kibana is a dashboarding and visualization tool, which can be configured with Elasticsearch to generate charts, graphs, and dashboards using our data.

We can use ELK Stack for different use cases, the most common being log analysis. Other than that, we can use it for business intelligence, application security and compliance, web analytics, fraud management, and so on.

In the following subsections, we are going to be looking at ELK Stack's components.

Elasticsearch

Elasticsearch is a full-text search engine that can be used as a NoSQL database and as an analytics engine. It is easy to scale, schemaless, and near real time, and it provides a RESTful interface for different operations. Internally, it uses inverted indexes for data storage. There are different language clients available for Elasticsearch, as follows:

  • Java
  • PHP
  • Perl
  • Python
  • .NET
  • Ruby
  • JavaScript
  • Groovy

The basic components of Elasticsearch are as follows:

  • Cluster
  • Node
  • Index
  • Type
  • Document
  • Shard
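
To make the inverted index mentioned earlier concrete, here is a minimal, purely illustrative Python sketch of the idea (the document IDs and terms are invented for the example; Lucene's real implementation is far more sophisticated):

```python
from collections import defaultdict

# Toy inverted index: the core data structure Elasticsearch (via
# Apache Lucene) builds internally. Each term maps to the set of
# document IDs that contain it, which is what makes term lookups fast.
docs = {
    1: "kibana visualizes elasticsearch data",
    2: "logstash ships data to elasticsearch",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

print(sorted(index["elasticsearch"]))  # [1, 2]
print(sorted(index["data"]))           # [1, 2]
print(sorted(index["kibana"]))         # [1]
```

Looking up a term returns matching documents directly, without scanning every document, which is why searches stay fast as data grows.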

Logstash

Logstash is basically used for data pipelining, through which we can take input from different sources and send output to different destinations. Using Logstash, we can clean the data through filter options and mutate the input data before sending it to the output. Logstash has different plugins to handle different applications; for example, for MySQL or any other relational database, there is a JDBC input plugin through which we can connect to the server, run queries, and take the table data as input in Logstash. For Elasticsearch, there is an output plugin in Logstash that gives us the option to seamlessly transfer data from Logstash to Elasticsearch.
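
As a rough sketch of what such a JDBC input could look like (the connection string, credentials, driver, and query here are placeholders for illustration, not values from a real setup):

```
input {
  jdbc {
    jdbc_driver_library => "mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "password"
    statement => "SELECT * FROM orders"
  }
}
```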

To run Logstash, we need to install it and edit the configuration file, logstash.conf, which consists of input, filter, and output sections. We need to tell Logstash where it should get the input from through the input block, what it should do with the input through the filter block, and where it should send the output through the output block. In the following example, I am reading an Apache access log and sending the output to Elasticsearch:

input {
  file {
    path => "/var/log/apache2/access.log"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => "http://127.0.0.1:9200"
    index => "logs_apache"
    document_type => "logs"
  }
}

The input block uses the file plugin with the path key set to /var/log/apache2/access.log, which is Apache's log file; this tells Logstash to read its input from that file. The filter block uses the grok filter, which converts unstructured data into structured data by parsing it.

There are different patterns that we can apply for the Logstash filter. Here, we are parsing the Apache logs, but we can filter different things, such as email, IP addresses, and dates.
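
To see what grok-style parsing does, here is a rough Python analogue of the COMBINEDAPACHELOG pattern (a heavily simplified, hand-rolled regex for illustration only; Logstash's real pattern library is far more complete):

```python
import re

# Turn one unstructured Apache log line into named, structured fields,
# roughly as the grok COMBINEDAPACHELOG pattern does in Logstash.
COMBINED = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) \S+" '
    r'(?P<response>\d{3}) (?P<bytes>\d+|-)'
)

line = ('127.0.0.1 - - [10/Feb/2018:13:55:36 +0530] '
        '"GET /index.html HTTP/1.1" 200 2326')

fields = COMBINED.match(line).groupdict()
print(fields['verb'], fields['request'], fields['response'])
# GET /index.html 200
```

Once the line has been broken into named fields like this, it can be stored as a structured document and queried or visualized field by field.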

Kibana

Kibana is the open source dashboarding tool of the ELK Stack, and it is a very good tool for creating different visualizations, such as charts, maps, and histograms; by combining different visualizations, we can create dashboards. As it is part of the ELK Stack, it can read Elasticsearch data with ease, and it does not require any programming skills. Kibana has a beautiful UI for creating different types of visualizations and dashboards.

When we use Beats, Kibana provides us with different inbuilt dashboards with multiple visualizations, which we can customize into a useful dashboard, for example for CPU usage and memory usage.

Beats

Beats are basically lightweight data shippers, each designed for a single-purpose job. For example, Metricbeat is used to collect metrics such as memory usage, CPU usage, and disk space, whereas Filebeat is used to send file data such as logs. Beats can be installed as agents on different servers to send data from different sources to a central Logstash or Elasticsearch cluster. They are written in Go, run cross-platform, and are lightweight by design. Before Beats, it was very difficult to get data from different machines, as there was no single-purpose data shipper, and we had to do some tweaking to get the desired data from servers.

For example, if I am running a web application on the Apache web server and want to run it smoothly, then there are two things that need to be monitored—first, all of the errors from the application, and second, the server's performance, such as memory usage, CPU usage, and disk space. So, in order to collect this information, we need to install the following two Beats on our machine:

  • Filebeat: This is used to collect log data from the Apache web server in an incremental way. Filebeat runs on the server and periodically checks for any change in the Apache log file. When there is a change, it sends the new log entries to Logstash. Logstash receives the data, executes the filter to find the errors, and, after filtering, saves the data into Elasticsearch.
  • Metricbeat: This is used to collect server metrics, such as memory usage, CPU usage, and disk space, and save them into Elasticsearch. Metricbeat sends a predefined set of metrics, so there is no need to modify anything; that is why it sends data directly to Elasticsearch instead of sending it to Logstash first.

To visualize this data, we can use Kibana to create meaningful dashboards through which we can get complete control of our data.

Installing the ELK Stack

For a complete installation of ELK Stack, we first need to install individual components that are explained one by one in the following sections.

Elasticsearch

Elasticsearch 6.0 requires at least Java 8. Before you proceed with the installation of Elasticsearch, please check which version of Java is present on your system by executing the following commands:

java -version
echo $JAVA_HOME

Once Java is set up, we can go ahead and install Elasticsearch. You can find the binaries at www.elastic.co/downloads.

Installing Elasticsearch using a TAR file

First, we will download the Elasticsearch 6.1.3 TAR file, as shown in the following code block:

curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.1.3.tar.gz

Then, extract it as follows:

tar -xvf elasticsearch-6.1.3.tar.gz

You will then see that a bunch of files and folders have been created. We can now proceed to the bin directory, as follows:

cd elasticsearch-6.1.3/bin

We are now ready to start our node, which forms a single-node cluster:

./elasticsearch

Installing Elasticsearch with Homebrew

You can also install Elasticsearch on macOS through Homebrew, as follows:

brew install elasticsearch

Installing Elasticsearch with MSI Windows Installer

Windows users are recommended to use the MSI Installer package. This package includes a graphical user interface (GUI) that guides the users through the installation process.

First, download the Elasticsearch 6.1.3 MSI from https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.1.3.msi.

Launch the GUI by double-clicking on the downloaded file. On the first screen, select the deployment directories.

Installing Elasticsearch with the Debian package

On Debian, before you can proceed with the installation process, you may need to install the apt-transport-https package first:

sudo apt-get install apt-transport-https

Save the repository definition to /etc/apt/sources.list.d/elastic-6.x.list:

echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

You can install the elasticsearch Debian package with the following code:

sudo apt-get update && sudo apt-get install elasticsearch

Installing Elasticsearch with the RPM package

Download and install the public signing key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create a file named elasticsearch.repo in the /etc/yum.repos.d/ directory for Red Hat-based distributions or in the /etc/zypp/repos.d/ directory for openSUSE-based distributions, containing the following code:

[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Your repository is now ready for use. You can now install Elasticsearch with one of the following commands:

  • You can use yum on CentOS and older Red Hat-based distributions:
sudo yum install elasticsearch
  • You can use dnf on Fedora and other newer Red Hat distributions:
sudo dnf install elasticsearch
  • You can use zypper on openSUSE-based distributions:
sudo zypper install elasticsearch

Elasticsearch can be started and stopped using the service command:

sudo -i service elasticsearch start
sudo -i service elasticsearch stop

Logstash

Logstash requires at least Java 8. Before you go ahead with the installation of Logstash, please check the version of Java on your system by running the following commands:

java -version
echo $JAVA_HOME

Using apt package repositories

Download and install the public signing key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

You may need to install the apt-transport-https package on Debian before proceeding, as follows:

sudo apt-get install apt-transport-https

Save the repository definition to /etc/apt/sources.list.d/elastic-6.x.list, as follows:

echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

Update your package lists and the repository will be ready for use. You can install Logstash using the following code:

sudo apt-get update && sudo apt-get install logstash

Using yum package repositories

Download and install the public signing key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the following in your /etc/yum.repos.d/ directory in a file with a .repo suffix (for example, logstash.repo):

[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Your repository is now ready for use. You can install it using the following code:

sudo yum install logstash

Kibana

Starting with version 6.0.0, Kibana only supports 64-bit operating systems.

Installing Kibana using .tar.gz

The Linux archive for Kibana v6.1.3 can be downloaded and installed as follows:

wget https://artifacts.elastic.co/downloads/kibana/kibana-6.1.3-linux-x86_64.tar.gz

Compare the SHA produced by sha1sum or shasum with the published SHA:

sha1sum kibana-6.1.3-linux-x86_64.tar.gz

Then, extract the archive:

tar -xzf kibana-6.1.3-linux-x86_64.tar.gz

The resulting directory is known as $KIBANA_HOME:

cd kibana-6.1.3-linux-x86_64/
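
Before starting Kibana from $KIBANA_HOME, you will typically point it at your Elasticsearch instance in config/kibana.yml. A minimal sketch, assuming Elasticsearch is running locally on its default port (these are the 6.x setting names; the values are placeholders to adapt):

```yaml
# config/kibana.yml (minimal sketch for a local 6.x setup)
server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"
```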

Installing Kibana using the Debian package

Download and install the public signing key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

You may need to install the apt-transport-https package on Debian before proceeding:

sudo apt-get install apt-transport-https

Save the repository definition to /etc/apt/sources.list.d/elastic-6.x.list:

echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

You can install the Kibana Debian package with the following:

sudo apt-get update && sudo apt-get install kibana

Installing Kibana using rpm

Download and install the public signing key, as follows:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create a file named kibana.repo in the /etc/yum.repos.d/ directory for Red Hat-based distributions, or in the /etc/zypp/repos.d/ directory for openSUSE-based distributions, containing the following code:

[kibana-6.x]
name=Kibana repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Your repository is now ready for use. You can now install Kibana with one of the following commands:

  • You can use yum on CentOS and older Red Hat-based distributions:
sudo yum install kibana
  • You can use dnf on Fedora and other newer Red Hat distributions:
sudo dnf install kibana
  • You can use zypper on openSUSE-based distributions:
sudo zypper install kibana

Installing Kibana on Windows

Download the .zip Windows archive for Kibana v6.1.3 from https://artifacts.elastic.co/downloads/kibana/kibana-6.1.3-windows-x86_64.zip.

Unzipping it will create a folder named kibana-6.1.3-windows-x86_64, which we will refer to as $KIBANA_HOME. In your Terminal, CD to the $KIBANA_HOME directory; for instance:

CD c:\kibana-6.1.3-windows-x86_64

Kibana can be started from the command line as follows:

.\bin\kibana

Beats

After installing and configuring the ELK Stack, you need to install and configure your Beats.

Each Beat is a separately installable product. To get up and running quickly with a Beat, see the getting started information for your Beat:

  • Packetbeat
  • Metricbeat
  • Filebeat
  • Winlogbeat
  • Heartbeat

Packetbeat

The value of a network packet analytics system such as Packetbeat can be best understood by trying it on your traffic.

To download and install Packetbeat, use the commands that work with your system (deb for Debian/Ubuntu, rpm for Red Hat/CentOS/Fedora, macOS for OS X, Docker for any Docker platform, and win for Windows):

  • Ubuntu:
sudo apt-get install libpcap0.8
curl -L -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-6.2.1-amd64.deb
sudo dpkg -i packetbeat-6.2.1-amd64.deb
  • Red Hat:
sudo yum install libpcap
curl -L -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-6.2.1-x86_64.rpm
sudo rpm -vi packetbeat-6.2.1-x86_64.rpm
  • macOS:
curl -L -O https://artifacts.elastic.co/downloads/beats/packetbeat/packetbeat-6.2.1-darwin-x86_64.tar.gz
tar xzvf packetbeat-6.2.1-darwin-x86_64.tar.gz
  • Windows:
    1. Download and install WinPcap from this page. WinPcap is a library that uses a driver to enable packet capturing.
    2. Download the Packetbeat Windows ZIP file from the downloads page.
    3. Extract the contents of the ZIP file into C:\Program Files.
    4. Rename the packetbeat-<version>-windows directory to Packetbeat.
    5. Open a PowerShell prompt as an administrator (right-click the PowerShell icon and select Run as administrator). If you are running Windows XP, you may need to download and install PowerShell.
    6. From the PowerShell prompt, run the following commands to install Packetbeat as a Windows service:
PS > cd 'C:\Program Files\Packetbeat'
PS C:\Program Files\Packetbeat> .\install-service-packetbeat.ps1

Before starting Packetbeat, you should look at the configuration options in the configuration file; for example, C:\Program Files\Packetbeat\packetbeat.yml or /etc/packetbeat/packetbeat.yml.
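
As a minimal packetbeat.yml sketch, assuming you want to sniff HTTP traffic on all interfaces and ship it straight to a local Elasticsearch (the interface, ports, and host are placeholders to adapt):

```yaml
# packetbeat.yml (minimal sketch, placeholder values)
packetbeat.interfaces.device: any

packetbeat.protocols:
- type: http
  ports: [80, 8080]

output.elasticsearch:
  hosts: ["localhost:9200"]
```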

Metricbeat

Metricbeat should be installed as close as possible to the service that needs to be monitored. For example, if there are four servers running MySQL, it's strongly recommended that you run Metricbeat on each of those servers. This gives Metricbeat access to your service from localhost, which avoids generating additional network traffic and allows Metricbeat to keep collecting metrics even when there are network problems. Metrics from multiple Metricbeat instances are combined on the Elasticsearch server.

To download and install Metricbeat, use the commands that work with your system (deb for Debian/Ubuntu, rpm for Red Hat/CentOS/Fedora, macOS for OS X, Docker for any Docker platform, and win for Windows), as follows:

  • Ubuntu:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.2.1-amd64.deb
sudo dpkg -i metricbeat-6.2.1-amd64.deb
  • Red Hat:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.2.1-x86_64.rpm
sudo rpm -vi metricbeat-6.2.1-x86_64.rpm
  • macOS:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-6.2.1-darwin-x86_64.tar.gz
tar xzvf metricbeat-6.2.1-darwin-x86_64.tar.gz
  • Windows:
    1. Download the Metricbeat Windows ZIP file from the downloads page.
    2. Extract the contents of the ZIP file into C:\Program Files.
    3. Rename the metricbeat-<version>-windows directory to Metricbeat.
    4. Open a PowerShell prompt as an administrator (right-click the PowerShell icon and select Run as administrator). If you are running Windows XP, you may need to download and install PowerShell.
    5. From the PowerShell prompt, run the following commands to install Metricbeat as a Windows service:
PS > cd 'C:\Program Files\Metricbeat'
PS C:\Program Files\Metricbeat> .\install-service-metricbeat.ps1

Before starting Metricbeat, you should look at the configuration options in the configuration file; for example, C:\Program Files\Metricbeat\metricbeat.yml.
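
As a minimal metricbeat.yml sketch along these lines, assuming the system module and a local Elasticsearch (the metricsets, period, and host are placeholders to adapt):

```yaml
# metricbeat.yml (minimal sketch, placeholder values)
metricbeat.modules:
- module: system
  metricsets: ["cpu", "memory", "filesystem"]
  period: 10s

output.elasticsearch:
  hosts: ["localhost:9200"]
```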

Filebeat

To download and install Filebeat, use the commands that work with your system (deb for Debian/Ubuntu, rpm for Red Hat/CentOS/Fedora, macOS for OS X, Docker for any Docker platform, and win for Windows), as follows:

  • Ubuntu:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.1-amd64.deb
sudo dpkg -i filebeat-6.2.1-amd64.deb
  • Red Hat:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.1-x86_64.rpm
sudo rpm -vi filebeat-6.2.1-x86_64.rpm
  • macOS:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.1-darwin-x86_64.tar.gz
tar xzvf filebeat-6.2.1-darwin-x86_64.tar.gz
  • Windows:
    1. Download the Filebeat Windows ZIP file from the downloads page.
    2. Extract the contents of the ZIP file into C:\Program Files.
    3. Rename the filebeat-<version>-windows directory to Filebeat.
    4. Open a PowerShell prompt as an administrator (right-click the PowerShell icon and select Run as administrator). If you are running Windows XP, you may need to download and install PowerShell.
    5. From the PowerShell prompt, run the following commands to install Filebeat as a Windows service:
PS > cd 'C:\Program Files\Filebeat'
PS C:\Program Files\Filebeat> .\install-service-filebeat.ps1
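
Tying this back to the Apache example earlier, a minimal 6.x filebeat.yml sketch that tails the Apache logs and ships them to Logstash might look as follows (the paths and host are placeholders; 6.x uses filebeat.prospectors, which later versions rename to filebeat.inputs):

```yaml
# filebeat.yml (minimal 6.x sketch, placeholder values)
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/apache2/*.log

output.logstash:
  hosts: ["localhost:5044"]
```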

Winlogbeat

In order to install Winlogbeat, we need to follow these steps:

  1. Download the Winlogbeat ZIP file from the downloads page.
  2. Extract the contents into C:\Program Files.
  3. Rename the winlogbeat-<version> directory to Winlogbeat.
  4. Open a PowerShell prompt as an administrator (right-click on the PowerShell icon and select Run as administrator). If you are running Windows XP, you may need to download and install PowerShell.
  5. From the PowerShell prompt, run the following commands to install the service:
PS C:\Users\Administrator> cd 'C:\Program Files\Winlogbeat'
PS C:\Program Files\Winlogbeat> .\install-service-winlogbeat.ps1
Security warning: Only run scripts that you trust. Although scripts from the internet can be useful, they can potentially harm your computer. If you trust the script, use Unblock-File  to allow the script to run without this warning message:
Do you want to run
C:\Program Files\Winlogbeat\install-service-winlogbeat.ps1?
[D] Do not run [R] Run once [S] Suspend [?] Help (default is "D"): R

Status Name DisplayName
------ ---- -----------
Stopped winlogbeat winlogbeat

Before starting Winlogbeat, you should look at the configuration options in the configuration file; for example, C:\Program Files\Winlogbeat\winlogbeat.yml. There's also a full example configuration file named winlogbeat.reference.yml.
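
As a minimal winlogbeat.yml sketch, assuming you want the standard Windows event logs shipped to a local Elasticsearch (the log names and host are placeholders to adapt):

```yaml
# winlogbeat.yml (minimal sketch, placeholder values)
winlogbeat.event_logs:
  - name: Application
  - name: System
  - name: Security

output.elasticsearch:
  hosts: ["localhost:9200"]
```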

Heartbeat

Unlike most Beats, which we install on edge nodes, we typically install Heartbeat as part of a monitoring service that runs on a separate machine and possibly even outside of the network where the services that you want to monitor are running.

To download and install Heartbeat, use the commands that work with your system (deb for Debian/Ubuntu, rpm for Red Hat/CentOS/Fedora, macOS for OS X, Docker for any Docker platform, and win for Windows):

  • Ubuntu:
curl -L -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-6.2.1-amd64.deb
sudo dpkg -i heartbeat-6.2.1-amd64.deb
  • Red Hat:
curl -L -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-6.2.1-x86_64.rpm
sudo rpm -vi heartbeat-6.2.1-x86_64.rpm
  • macOS:
curl -L -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-6.2.1-darwin-x86_64.tar.gz
tar xzvf heartbeat-6.2.1-darwin-x86_64.tar.gz
  • Windows:
    1. Download the Heartbeat Windows ZIP file from the downloads page.
    2. Extract the contents of the ZIP file into C:\Program Files.
    3. Rename the heartbeat-<version>-windows directory to Heartbeat.
    4. Open a PowerShell prompt as an administrator (right-click the PowerShell icon and select Run as administrator). If you are running Windows XP, you may need to download and install PowerShell.
    5. From the PowerShell prompt, run the following commands to install Heartbeat as a Windows service:
PS > cd 'C:\Program Files\Heartbeat'
PS C:\Program Files\Heartbeat> .\install-service-heartbeat.ps1

 

Before starting Heartbeat, you should look at the configuration options in the configuration file; for example, C:\Program Files\Heartbeat\heartbeat.yml or /etc/heartbeat/heartbeat.yml.
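
As a minimal heartbeat.yml sketch, assuming you want to check a local Elasticsearch over HTTP every ten seconds (the URL and schedule are placeholders to adapt):

```yaml
# heartbeat.yml (minimal sketch, placeholder values)
heartbeat.monitors:
- type: http
  urls: ["http://localhost:9200"]
  schedule: '@every 10s'

output.elasticsearch:
  hosts: ["localhost:9200"]
```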

ELK use cases

ELK Stack has many different use cases, but here we are only going to discuss some of them.

Log management

In any large organization, there will be different servers running different sets of applications. In this case, we need different teams for different applications, whose task is to explore the log files to debug any issue. However, this is not an easy task, as log formats are rarely user friendly. Here, I am talking about a single application, but what happens if we ask the teams to monitor many different applications, built using different technologies, each with a log format very different from the others? The answer is simple: the teams have to dig through all the logs from the different servers, and they will spend days and nights finding the issue.

ELK Stack is very useful for these situations, and we can solve this problem easily. First of all, we need to set up a central Elasticsearch cluster for collecting all different logs. Now, we need to configure Logstash as per the application log so that we can transform different log formats that we are getting from different application servers. Logstash will output this data into Elasticsearch for storage so that we can explore, search, and update the data. Finally, Kibana can be used to display graphical dashboards on top of Elasticsearch.

Using this setup, anyone can get complete control of all logs coming from different sources. We can use Kibana to alert us to any issues in the log files so that users can spot an issue without doing any data drill-downs.

Many organizations use ELK for their log management, as it is open source software that can easily be set up to monitor different types of logs on a single screen. Not only can we monitor all of our logs on a single screen, but we can also get alerts if something goes wrong in the logs.

Security monitoring and alerting

Security monitoring and alerting is a very important use case of ELK Stack, as application security is vital and any security breach in an application is costly; security breaches are becoming more common and, most importantly, more targeted. Although enterprises regularly try to improve their security measures, hackers still succeed in penetrating the security layers. It is therefore essential for any enterprise to detect the presence of security attacks on their servers, and not only to detect them but also to raise alerts so that they can take immediate action to mitigate their losses. Using ELK Stack, we can monitor various things, such as unusual server requests and any suspicious traffic, and gather security-related log information that security teams can monitor and act on.

This way, security teams can protect the enterprise from attackers who would otherwise go unnoticed for a long time. ELK Stack provides a way for us to gain insight and make an attacker's life more difficult. These logs can also be very useful for post-attack analysis; for example, for finding out the time of the attack and the method used. We can understand the activities the attacker performed, and this information gives us a way to close that loophole easily. In this way, ELK Stack is useful both for prevention before an attack and for analysis and hardening after one.

Web scraping

In ELK Stack, we have different tools to grab data from remote servers. In a traditional Relational Database Management System (RDBMS), it is quite difficult to save this type of data because it is not structured, so we either have to clean the data manually or discard part of it in order to fit it into the table schema. In the case of Elasticsearch, its schemaless behavior gives us the freedom to push any data from any source. It not only holds that data, but also provides features to search it and play with it. An example of web scraping using ELK Stack is a Twitter-to-Elasticsearch connector, which allows us to set up hashtags and grab all the tweets that used those hashtags. After grabbing those tweets, we can search, visualize, and analyze them in Kibana.

E-commerce search solutions

Many of the top e-commerce websites, such as eBay, use Elasticsearch for their product search pages. The main reasons behind this are Elasticsearch's abilities in full-text searching, building filters, facets, and aggregations, its fast response times, and the ease with which it collects analytics information. Users can easily drill down to a product set, from where they can easily select the product they want. This is just one side of the picture, through which we improve the user's experience. On the other side, we can use the same data and, using Kibana, monitor trends, analyze the data, and much more.

There is big competition going on among e-commerce companies to attract more and more customers. Being able to understand the shopping behavior of their customers is very important, as it allows e-commerce companies to target users with products that they have liked or are likely to like. This is business intelligence, and using ELK Stack, they can achieve it.

Full text search

ELK Stack's core competency is its full text search feature. It is powerful and flexible, and it provides various features such as fuzzy search, conditional searching, and natural language searching. So, as per our requirements, we can decide which type of searching is required. We can use ELK Stack's full text search capabilities for product searching, autocomplete features, searching text in emails, and so on.

Visualizing data

Kibana is an easy-to-use visualization tool that provides us with a rich feature set to create beautiful charts (such as pie charts, bar charts, and stack charts), histograms, geo maps, word tags, data tables, and so on. Visualizing data is always beneficial for any organization as it helps top management to make decisions with ease. We can also easily track any unusual trends and find any outliers in data without digging into the data. We can create dashboards for any existing web-based application as well by simply pushing the application data into Elasticsearch and then use Kibana to create beautiful dashboards. This way, we can plug in an additional dimension into the application and start monitoring it without putting any additional load on the application.

Summary

In this chapter, we covered the basics of the ELK Stack components and their characteristics. We explained how we can use Beats to send log data, file data, and system metrics to Logstash or Elasticsearch, and how Logstash can be configured as a pipeline to modify the data format and then send the output to Elasticsearch. Elasticsearch is a search engine built on top of Lucene; it can store data and provides the functionality to do full-text searching on that data. Kibana can be configured to read Elasticsearch data and create visualizations and dashboards. We can embed these dashboards in existing web pages, which can then be used for decision-making.

Then, we discussed different use cases of ELK Stack. The first one we mentioned was log management, which is the primary use case of ELK Stack and which made it famous. In log management, we can capture logs from different servers/sources and dump them in a central Elasticsearch cluster after modifying it through Logstash. Kibana is used to create meaningful graphical visualization and dashboards by reading the Elasticsearch data. Finally, we discussed security monitoring and alerting, where ELK Stack can be quite helpful. Security is a very important aspect of any software, and often it is the most neglected part of development and monitoring. Using ELK Stack, we can observe any security threat.


Key benefits

  • Explore visualizations and perform histogram, stats, and map analytics
  • Unleash X-Pack and Timelion, and learn alerting, monitoring, and reporting features
  • Manage dashboards with Beats and create machine learning jobs for faster analytics

Description

Kibana is one of the most popular tools among data enthusiasts for slicing and dicing large datasets and uncovering Business Intelligence (BI) with the help of its rich and powerful visualizations. To begin with, Mastering Kibana 6.x quickly introduces you to the features of Kibana 6.x, before teaching you how to create smart dashboards in no time. You will explore metric analytics and graph exploration, followed by understanding how to quickly customize Kibana dashboards. In addition to this, you will learn advanced analytics such as maps, hits, and list analytics. All this will help you enhance your skills in running and comparing multiple queries and filters, sharpening your data visualization skills at scale. With Kibana's Timelion feature, you can analyze time series data with histograms and stats analytics. By the end of this book, you will have created a speedy machine learning job using X-Pack capabilities.

Who is this book for?

Mastering Kibana 6.x is for you if you are a big data engineer, DevOps engineer, or data scientist aspiring to go beyond data visualization at scale and gain maximum insights from your large datasets. Basic knowledge of the Elastic Stack will be an added advantage, although it is not mandatory.

What you will learn

  • Create unique dashboards with various intuitive data visualizations
  • Visualize Timelion expressions with added histograms and stats analytics
  • Integrate X-Pack with your Elastic Stack in simple steps
  • Extract data from Elasticsearch for advanced analysis and anomaly detection using dashboards
  • Build dashboards from web applications for application logs
  • Create monitoring and alerting dashboards using Beats

Product Details

Publication date : Jul 31, 2018
Length : 376 pages
Edition : 1st
Language : English
ISBN-13 : 9781788831031
Vendor : Apache





Table of Contents

15 Chapters
Revising the ELK Stack
Setting Up and Customizing the Kibana Dashboard
Exploring Your Data
Visualizing the Data
Dashboarding to Showcase Key Performance Indicators
Handling Time Series Data with Timelion
Interact with Your Data Using Dev Tools
Tweaking Your Configuration with Kibana Management
Understanding X-Pack Features
Machine Learning with Kibana
Create Super Cool Dashboard from a Web Application
Different Use Cases of Kibana
Creating Monitoring Dashboards Using Beats
Best Practices
Other Books You May Enjoy

Customer reviews

Rating distribution
5.0 (1 rating)
5 star 100%
4 star 0%
3 star 0%
2 star 0%
1 star 0%
suneet srivastava Sep 07, 2018
Rated 5 out of 5
Excellent work by the author....this book is extremely useful for data management and presentation tool ....the best part I found in this book is the easy language by the Indian author and he is explained the concept in very structured manner
Amazon Verified review

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online, including exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use for owning content.

How can I cancel my subscription?

To cancel your subscription, simply go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there, you will see the 'cancel subscription' button in the grey box with your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle - a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy books DRM-free, the same way that you would pay for a book. Your credits can be found on the subscription homepage - subscription.packtpub.com - by clicking on the 'my library' dropdown and selecting 'credits'.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need a paid or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, and new techniques appear all the time. This feature gives you a head start on our content as it's being created. With Early Access, you'll receive each chapter as it's written, get regular updates throughout the product's development, and receive the final course as soon as it's ready. We created Early Access as a means of giving you the information you need as soon as it's available. As we go through the process of developing a course, 99% of it can be ready, but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day and discounts on new and popular titles.