Hadoop best practices
In this section, we will cover some common best practices for a Hadoop cluster in terms of log management and troubleshooting tools. These are not tuning recommendations; rather, they make the cluster easier to troubleshoot and diagnose.
Things to keep in mind:
- Always enable logging for every daemon that runs in the Hadoop cluster. Keep the logging level at INFO and raise it to DEBUG only when needed; once troubleshooting is done, revert to INFO.
- Implement log rotation and retention policies to manage the logs.
- Use tools such as Nagios to alert on errors in the cluster before they become an issue.
- Use log aggregation and analysis tools such as Splunk to parse logs.
- Never co-locate the log disk with the data disks in the cluster.
- Use central configuration management systems such as Puppet or Chef to maintain consistent configuration across the cluster.
- Schedule a benchmarking job to run every day on the cluster and proactively detect any bottlenecks. This can be...
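The logging-level and rotation practices above are typically applied through Hadoop's log4j configuration. The following is a minimal sketch of the relevant properties, assuming the stock `log4j.properties` layout shipped with Hadoop (file paths, sizes, and counts are illustrative, not recommendations for any particular cluster):

```properties
# log4j.properties (under $HADOOP_CONF_DIR) -- a sketch, not a drop-in file.
# Default level for all daemons: INFO, written via the rolling file appender.
hadoop.root.logger=INFO,RFA

# Rolling file appender implements rotation and retention:
# roll at 256 MB and keep at most 20 old files per daemon.
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=20
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

# While troubleshooting a specific daemon, raise only its package to DEBUG,
# then comment this line out again (reverting to INFO) once done:
#log4j.logger.org.apache.hadoop.hdfs.server.datanode=DEBUG
```

Note that a level change in `log4j.properties` requires a daemon restart; the `hadoop daemonlog -setlevel` command can change the level of a running daemon temporarily, which suits the "raise to DEBUG, then revert" workflow without downtime.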