Preparing the Elastic Stack pipeline
In the preceding section, we covered a scenario for which we want to perform data and log analysis. To do this analysis, we will use the Elastic Stack components. These components will be installed on different nodes and will submit data to one central Elasticsearch node or to an Elasticsearch cluster. In order to set this up, we need to update our architecture to include the Elastic Stack components, such as Logstash, Beats, Elasticsearch, and Kibana.
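Before going further, it helps to see how these components connect. The following is a minimal Logstash pipeline sketch, assuming the Beats agents on each node ship their data to Logstash on port 5044, which in turn forwards it to the central Elasticsearch node; the hostname `elasticsearch.example.com` is a placeholder for your own setup:

```
input {
  # Receive events from the Filebeat, Metricbeat, and Packetbeat agents
  beats {
    port => 5044
  }
}

output {
  # Forward everything to the central Elasticsearch node (placeholder host)
  elasticsearch {
    hosts => ["http://elasticsearch.example.com:9200"]
  }
}
```

Shipping through Logstash is optional; each Beat can also write directly to Elasticsearch, as the sketches later in this section show.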
What to capture?
First, before we start updating our architecture, we need to understand what we want to capture and how it is going to help us. The following are the things we want to capture for our requirements:
- Logs generated by the following (see the Filebeat sketch after this list):
    - Liferay
    - MySQL
    - Nginx
    - OpenDJ
    - The Elasticsearch node used by Liferay
- System statistics for each node (see the Metricbeat sketch after this list):
    - All nodes for Liferay, MySQL, Nginx, OpenDJ, and Elasticsearch
- Network traffic for each node (see the Packetbeat sketch after this list):
    - This includes the HTTP and MySQL protocols, and so on
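For the logs, Filebeat is the natural choice. The following is a minimal `filebeat.yml` sketch; the log paths and the Elasticsearch host are assumptions and must be adjusted to the actual install locations in your environment:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /opt/liferay/tomcat/logs/*.log      # assumed Liferay (Tomcat) log path
    tags: ["liferay"]
  - type: log
    paths:
      - /var/log/mysql/*.log                # assumed MySQL log path
    tags: ["mysql"]
  - type: log
    paths:
      - /var/log/nginx/*.log                # default Nginx log directory
    tags: ["nginx"]
  - type: log
    paths:
      - /opt/opendj/logs/*                  # assumed OpenDJ log path
    tags: ["opendj"]

output.elasticsearch:
  hosts: ["http://elasticsearch.example.com:9200"]   # placeholder central node
```

Note that Filebeat also ships with ready-made modules for Nginx, MySQL, and Elasticsearch logs that handle parsing out of the box; we could enable those instead of the raw inputs shown here.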
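For system statistics, Metricbeat's system module covers CPU, memory, disk, network, and process metrics. The following is a minimal `metricbeat.yml` sketch to run on every node; again, the Elasticsearch host is a placeholder:

```yaml
metricbeat.modules:
  - module: system
    metricsets:
      - cpu          # CPU usage
      - memory       # memory usage
      - network      # per-interface network counters
      - filesystem   # disk usage
      - process      # per-process statistics
    period: 10s      # collect every 10 seconds

output.elasticsearch:
  hosts: ["http://elasticsearch.example.com:9200"]   # placeholder central node
```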
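For network traffic, Packetbeat sniffs packets on each node and decodes application-level protocols such as HTTP and MySQL. The following is a minimal `packetbeat.yml` sketch, assuming the default ports; adjust them to where Nginx, Liferay, and MySQL actually listen:

```yaml
packetbeat.interfaces.device: any    # sniff all network interfaces

packetbeat.protocols:
  - type: http
    ports: [80, 8080]                # assumed Nginx and Liferay HTTP ports
  - type: mysql
    ports: [3306]                    # default MySQL port

output.elasticsearch:
  hosts: ["http://elasticsearch.example.com:9200"]   # placeholder central node
```

Packetbeat has no LDAP analyzer, so OpenDJ traffic cannot be decoded at the protocol level; it can still be observed at the connection level through Packetbeat's flows feature.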