Kibana 7 Quick Start Guide

Visualize your Elasticsearch data with ease

Product type: Paperback
Published: January 2019
Publisher: Packt
ISBN-13: 9781789804034
Length: 172 pages
Edition: 1st
Author: Anurag Srivastava

What this book covers

Chapter 1, Introducing Kibana, introduces the Elastic Stack and explains its different components: Elasticsearch, Logstash, Kibana, and the various Beats. The introduction is followed by an explanation of the different use cases of the Elastic Stack: System Performance Monitoring, where we monitor system performance; Log Management, where we collect different logs and monitor them from a central location; Application Performance Monitoring, where we monitor our application by connecting it to a central APM server; Application Data Analysis, where we analyze application data; Security Monitoring and Alerting, where we secure our stack using X-Pack, monitor it regularly, and configure alerts to keep an eye on any change that may impact system performance; and, finally, Data Visualization, where we use Kibana to create different types of visualizations from the available data.

Chapter 2, Getting Data into Kibana, covers different ways to get data into Elasticsearch. We examine how Beats, which are lightweight data shippers, can be installed on a server to send data. Under Beats, we cover Filebeat, which reads file data, including Apache logs, system logs, and application logs, and sends these logs to Elasticsearch either directly or via Logstash. We configure Metricbeat to read system metrics, such as CPU usage, memory usage, and MySQL metrics, and Packetbeat, by means of which we can read network packet data and glean insights from it. After that, we cover how Logstash can be used to collect the data and apply filters before sending it to Elasticsearch.
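
By way of illustration (not taken from the book), the following sketch uses the official elasticsearch Python client to index a single sample log event, standing in for the documents that Filebeat, Metricbeat, or Logstash would ship; the index name, field layout, and local cluster address are assumptions:

```python
# A minimal sketch: indexing one sample log event with the elasticsearch
# Python client, in place of what a Beat or Logstash pipeline would ship.
from datetime import datetime
from elasticsearch import Elasticsearch

# Assumes a local single-node cluster on the default port.
es = Elasticsearch(["http://localhost:9200"])

log_event = {
    "@timestamp": datetime.utcnow().isoformat(),
    "message": "GET /index.html 200",
    "host": {"name": "web-01"},                               # illustrative host name
    "log": {"file": {"path": "/var/log/apache2/access.log"}}, # illustrative source file
}

# With the 7.x client the payload is passed as `body`.
resp = es.index(index="weblogs-000001", body=log_event)
print(resp["result"])  # expected: "created"
```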

In the first section, we cover how to fetch CSV data using Logstash, where we pass a CSV file as input and specify its columns before sending the data to Elasticsearch. After that, we explain how to configure the JDBC plugin to fetch MySQL data by running a SQL statement and applying a tracking column, by means of which incremental data can be fetched into Logstash. After reading the MySQL data, it is pushed to Elasticsearch for analysis. Using Beats and Logstash, we can push data into Elasticsearch, but in order to analyze and visualize that data in Kibana we have to create index patterns. Once an index pattern is created, we can see the data under the Discover option in Kibana, where we can apply filters, run queries, and select fields to display.
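
As a rough illustration of the index pattern step (endpoint usage and names assumed, not from the book), the sketch below creates an index pattern through Kibana's saved objects API, which produces the same object the Management UI creates:

```python
# A minimal sketch: creating a Kibana 7 index pattern programmatically via
# the saved objects API. The pattern title and time field are assumptions.
import requests

KIBANA_URL = "http://localhost:5601"  # assumes a local Kibana instance

resp = requests.post(
    f"{KIBANA_URL}/api/saved_objects/index-pattern",
    headers={"kbn-xsrf": "true"},  # header required by the Kibana API
    json={"attributes": {"title": "logstash-*", "timeFieldName": "@timestamp"}},
)
resp.raise_for_status()
print(resp.json()["id"])  # id of the newly created index pattern
```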

Chapter 3, Exploring Data, describes Kibana Discover and how we can explore data using it. In the beginning, we cover how to discover your data by means of the different options provided in Kibana Discover, including how to limit the number of fields displayed so as to focus on the most relevant ones. Then, we cover how to expand a document display to check all available fields, along with the options to view surrounding documents and a single document. From this screen, we can also apply a filter to any field. Then, we cover different ways to dissect our data, including filtering it with the time-based filter, filtering it on different document fields, and applying queries to the dataset. We then explore how to save a search so that the search, along with its filter options, is available to us whenever we want to use it again. After saving the search, we can also export it from Kibana into a file that can later be imported back into Kibana.
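
The following sketch (with assumed index and field names) shows roughly the kind of request Discover issues on our behalf: a time-range filter, a filter on a document field, and a restricted field list, using the Python elasticsearch client:

```python
# A minimal sketch of a Discover-style search: time range + field filter,
# limited source fields, newest documents first.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

resp = es.search(
    index="weblogs-*",
    body={
        "_source": ["@timestamp", "message", "host.name"],  # limit displayed fields
        "query": {
            "bool": {
                "filter": [
                    {"range": {"@timestamp": {"gte": "now-24h", "lte": "now"}}},
                    {"match": {"message": "error"}},
                ]
            }
        },
        "sort": [{"@timestamp": {"order": "desc"}}],
        "size": 50,
    },
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"].get("@timestamp"), hit["_source"].get("message"))
```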

Chapter 4, Visualizing Data, explains how to visualize data once it is available in Kibana after creating the index pattern. We begin with basic charts, where we cover the creation of a number of charts, including the area chart, heat map, and pie chart. We also explain how we can transform one type of chart into another, taking the area chart, line chart, and bar chart as examples, in the same way that we can change a pie chart into a donut chart, or vice versa. After that, we delve into data tables, by means of which we can generate tabular visualizations of data and add additional metric columns alongside the actual data columns. We then cover metric-type visualizations, where we can display metric values, and tag clouds, which can be used to display word clouds with a link that filters the data accordingly.
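
As a hedged illustration of what such a visualization asks Elasticsearch for (index and field names assumed), the sketch below runs a terms aggregation with a nested average metric, which is the shape of query behind a pie chart or data table:

```python
# A minimal sketch: bucket documents by a field and compute a metric per
# bucket, the aggregation a pie chart or data table visualization relies on.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

resp = es.search(
    index="weblogs-*",
    body={
        "size": 0,  # only the aggregation results are needed
        "aggs": {
            "per_status": {
                "terms": {"field": "response.keyword", "size": 5},  # pie slices / table rows
                "aggs": {"avg_bytes": {"avg": {"field": "bytes"}}},  # extra metric column
            }
        },
    },
)
for bucket in resp["aggregations"]["per_status"]["buckets"]:
    print(bucket["key"], bucket["doc_count"], bucket["avg_bytes"]["value"])
```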

Chapter 5, X-Pack with Machine Learning, explains how X-Pack adds features to the existing Elastic Stack setup. We begin with an introduction to X-Pack, followed by the X-Pack installation process. We then delve into the different features of X-Pack, such as security, by means of which we can secure our Elastic Stack. Under security, we cover user and role management by creating users and roles, and then assigning roles to users. Following on from security, we cover monitoring, from the perspective of both an overview and a detailed view, where we can see the search and indexing rates. We then cover alerting, where we configure a watch to send alert notifications by email. Following on from alerting, we cover reporting, by means of which we can generate CSV or PDF reports and download them. Finally, we cover machine learning, by means of which we create single- and multi-metric jobs and analyze the data by finding anomalies and predicting future trends.
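
A minimal sketch of the security APIs that sit behind Kibana's user and role management screens is shown below; it assumes security is enabled and uses placeholder credentials, role, and index names:

```python
# A minimal sketch: create a read-only role for one index pattern, then a
# user that holds that role, via the X-Pack security REST APIs.
import requests

ES_URL = "http://localhost:9200"
auth = ("elastic", "changeme")  # placeholder superuser credentials

# Role that can only read the web logs indices
requests.put(
    f"{ES_URL}/_security/role/weblogs_reader",
    auth=auth,
    json={"indices": [{"names": ["weblogs-*"], "privileges": ["read"]}]},
).raise_for_status()

# User assigned to that role
requests.put(
    f"{ES_URL}/_security/user/analyst",
    auth=auth,
    json={
        "password": "s3cret-pass",        # placeholder password
        "roles": ["weblogs_reader"],
        "full_name": "Log Analyst",
    },
).raise_for_status()
```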

Chapter 6, Monitoring Applications with APM, covers Elastic APM and explains how we can monitor an application. We begin with the APM components, which are APM Agents, the APM Server, Elasticsearch, and Kibana, and then delve into each of them in detail. APM Agents are open source libraries that can be configured in any of the supported languages and frameworks; currently, there is support for the Django and Flask frameworks for Python, as well as Java, Go, Node.js, Rails, Rack, and RUM (JS). We can configure them to send application metrics and errors to the APM Server. We then cover the APM Server, which is also open source software written in Go; its principal task is to receive data from the different APM Agents and send it to the Elasticsearch cluster. Elasticsearch stores the APM data, which can then be viewed, searched, and analyzed. Once the data is pushed into Elasticsearch, we can display it in Kibana using the dedicated APM UI or through a Kibana dashboard.
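
As an illustrative sketch of the agent side (service name, ports, and route are assumptions), this is roughly how the Elastic APM Python agent is wired into a Flask application so that request metrics and errors flow to the APM Server:

```python
# A minimal sketch: instrumenting a Flask app with the Elastic APM Python
# agent; the agent reports transactions and errors to the APM Server, which
# forwards them to Elasticsearch for the Kibana APM UI.
from flask import Flask
from elasticapm.contrib.flask import ElasticAPM

app = Flask(__name__)
app.config["ELASTIC_APM"] = {
    "SERVICE_NAME": "demo-flask-app",        # illustrative service name
    "SERVER_URL": "http://localhost:8200",   # default APM Server address
}
apm = ElasticAPM(app)

@app.route("/")
def index():
    return "instrumented by Elastic APM"

if __name__ == "__main__":
    app.run(port=5000)
```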

Chapter 7, Kibana Advanced Tools, describes Timelion and Dev Tools, which are quite useful tools in Kibana. We begin with an introduction to Timelion, and then the different functions that are available in it, such as the .es() function to set the Elasticsearch data source, and its different parameters, such as index, metric, split, offset, fit, and timefield. We then cover other functions, such as .static(), to draw static lines on the chart, the .points() function to convert the graph into a point display, the .color() function to change the color of the plot, the .derivative() function to plot the difference in value over time, the .label() function to set the label for a data series, the .range() function to limit the graph display between particular min and max values, and finally the .holt() function to forecast the future trend or to ascertain anomalies in the data. For a complete reference of functions, we can refer to the help section in Timelion. We then cover the use cases of Timelion. After Timelion, we describe Dev Tools, by means of which we can do multiple things. After the introduction to Dev Tools, we cover its different options, including Console, by means of which we can execute Elasticsearch queries and get the response on the same page. We then examine the Search Profiler, through which we can profile any Elasticsearch query and get the details of its query components. Finally, we look at the Grok Debugger, where we can create a Grok pattern to parse sample data, thereby converting unstructured sample data into structured data. This structured data can then be used for data analysis, visualization, and so on.
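
As a small illustration of the Search Profiler idea (index and field names assumed), the sketch below runs a query with "profile": true via the Python client and prints the per-component timings that Elasticsearch returns:

```python
# A minimal sketch: profiling a query the way the Search Profiler in Dev
# Tools does, by adding "profile": true and reading the timing breakdown.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

resp = es.search(
    index="weblogs-*",
    body={
        "profile": True,
        "query": {"match": {"message": "error"}},
    },
)

# Each shard reports its query components and the time spent in each.
shard = resp["profile"]["shards"][0]
for component in shard["searches"][0]["query"]:
    print(component["type"], component["time_in_nanos"])
```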
