Responsibilities of a Hadoop administrator
With the growing interest in deriving insights from big data, organizations are now aggressively planning and building their big data teams. To start working with their data, they need a solid infrastructure in place. Once that is set up, they need several controls and system policies to maintain, manage, and troubleshoot their clusters.
There is an ever-increasing demand for Hadoop administrators in the market, as their work of setting up and maintaining Hadoop clusters is what makes analysis possible in the first place.
The Hadoop administrator needs to be very good at system operations, networking, operating systems, and storage. They need strong knowledge of computer hardware and how it operates within a complex network.
Apache Hadoop runs mainly on Linux, so good Linux skills in areas such as monitoring, troubleshooting, configuration, and security are a must.
Setting up nodes for clusters involves a lot of repetitive tasks, and the Hadoop administrator should use quick and efficient ways to bring up these servers, using configuration management tools such as Puppet, Chef, and CFEngine. Apart from these tools, the administrator should also have good capacity planning skills to design and plan clusters.
Several nodes in a cluster require data to be duplicated; for example, the fsimage file of the namenode daemon can be configured to be written to two different disks on the same node, or to a disk on a different node. An understanding of NFS mount points and how to set them up within a cluster is required. The administrator may also be asked to set up RAID for disks on specific nodes.
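As a rough illustration, the following Python sketch writes an hdfs-site.xml fragment that points the namenode metadata (the fsimage and edits files) at two directories at once, one on a local disk and one on an NFS mount. The property name dfs.namenode.name.dir is the Hadoop 2.x and later name (older releases call it dfs.name.dir), and the directory paths are assumptions chosen only for illustration.

```python
# Sketch: generate an hdfs-site.xml fragment that keeps two copies of the
# namenode metadata (fsimage/edits) -- one on a local disk, one on an NFS mount.
# dfs.namenode.name.dir is the Hadoop 2.x+ property name; the paths below
# are illustrative assumptions, not defaults.
import xml.etree.ElementTree as ET

metadata_dirs = [
    "/data/1/dfs/nn",          # local disk (assumed mount point)
    "/mnt/nfs/backup/dfs/nn",  # NFS-mounted directory served by another host (assumed)
]

config = ET.Element("configuration")
prop = ET.SubElement(config, "property")
ET.SubElement(prop, "name").text = "dfs.namenode.name.dir"
ET.SubElement(prop, "value").text = ",".join("file://" + d for d in metadata_dirs)

ET.ElementTree(config).write("hdfs-site-fragment.xml", encoding="utf-8")
```

The namenode writes its metadata to every directory in the list, so the loss of a single disk does not lose the fsimage.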
As all Hadoop services and daemons are built on Java, a basic knowledge of the JVM, along with the ability to read and understand Java exceptions, is very useful and helps administrators identify issues quickly.
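As a quick example of the kind of scripted log triage this enables, the sketch below pulls lines that look like Java exception traces out of a daemon log; the log path is an assumed example and varies by distribution.

```python
# Sketch: scan a Hadoop daemon log for lines that look like Java exceptions.
# The log path is an assumed example; actual file names vary by distribution.
import re

LOG = "/var/log/hadoop/hadoop-hdfs-namenode.log"   # assumed location
pattern = re.compile(r"\b\w+(Exception|Error)\b")

with open(LOG, errors="replace") as log_file:
    for lineno, line in enumerate(log_file, 1):
        if pattern.search(line):
            print(f"{lineno}: {line.rstrip()}")
```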
The Hadoop administrator should possess the skills to benchmark the cluster and test its performance under high-traffic scenarios.
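Hadoop ships with benchmark jobs such as TestDFSIO and TeraSort for exactly this purpose; the sketch below drives a TestDFSIO write run from Python. The jar location, file count, and file size are assumptions and vary by distribution and version.

```python
# Sketch: run Hadoop's TestDFSIO write benchmark from Python.
# The jar path, file count, and file size are assumed values; adjust them
# for your distribution and workload.
import glob
import subprocess

# The MapReduce jobclient "tests" jar bundles TestDFSIO; its path varies.
jars = glob.glob(
    "/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar"
)
if not jars:
    raise SystemExit("TestDFSIO jar not found; adjust the glob for your install")

subprocess.run(
    [
        "hadoop", "jar", jars[0], "TestDFSIO",
        "-write",            # write phase; rerun with -read for the read phase
        "-nrFiles", "10",    # number of files (assumed workload size)
        "-fileSize", "1000", # size of each file in MB (assumed)
    ],
    check=True,
)
```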
Clusters are prone to failures, as they are up all the time and regularly process large amounts of data. To monitor the health of the cluster, the administrator should deploy monitoring tools such as Nagios and Ganglia, and should configure alerts and checks for critical nodes so that potential issues are spotted before they turn into failures.
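Alongside Nagios and Ganglia, the namenode exposes its metrics over HTTP through the JMX servlet, which makes simple scripted checks easy to add. The sketch below assumes the classic web UI port 50070, a hypothetical host name, and an arbitrary threshold for the minimum number of live datanodes.

```python
# Sketch: basic health check against the namenode's JMX servlet.
# The host name, port (50070 was the long-standing default web UI port),
# and threshold are assumptions for illustration.
import json
import urllib.request

NAMENODE = "http://namenode.example.com:50070"  # assumed host
MIN_LIVE_DATANODES = 3                          # assumed threshold

url = NAMENODE + "/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState"
with urllib.request.urlopen(url, timeout=10) as resp:
    state = json.load(resp)["beans"][0]

live = state["NumLiveDataNodes"]
dead = state["NumDeadDataNodes"]
print(f"live datanodes: {live}, dead datanodes: {dead}")

if live < MIN_LIVE_DATANODES or dead > 0:
    # In practice this would feed a Nagios check or page the on-call admin.
    raise SystemExit(f"ALERT: cluster degraded (live={live}, dead={dead})")
```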
Knowledge of a scripting language such as Python, Ruby, or shell greatly helps the administrator's day-to-day work. Administrators are often asked to set up some kind of scheduled file staging from an external source to HDFS, and scripting skills let them handle such requests by building and automating scripts.
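As a minimal sketch of such a request, the script below moves new files from a local landing directory into HDFS using the standard hdfs dfs commands and archives them locally once uploaded; the directory paths are assumptions, and in practice the script would be scheduled with cron or a workflow tool.

```python
# Sketch: stage files from a local landing directory into HDFS, then move
# them to a local archive once uploaded. Paths are assumed examples; run
# the script periodically from cron or a workflow scheduler.
import os
import shutil
import subprocess

LANDING = "/data/landing"          # where the external source drops files (assumed)
ARCHIVE = "/data/landing/archive"  # local archive for uploaded files (assumed)
HDFS_DIR = "/staging/incoming"     # HDFS target directory (assumed)

os.makedirs(ARCHIVE, exist_ok=True)
subprocess.run(["hdfs", "dfs", "-mkdir", "-p", HDFS_DIR], check=True)

for name in sorted(os.listdir(LANDING)):
    path = os.path.join(LANDING, name)
    if not os.path.isfile(path):
        continue
    # -put fails if the target already exists, which avoids silent overwrites.
    subprocess.run(["hdfs", "dfs", "-put", path, HDFS_DIR + "/" + name], check=True)
    shutil.move(path, os.path.join(ARCHIVE, name))
```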
Above all, the Hadoop administrator should have a very good understanding of the Apache Hadoop architecture and its inner workings.
The following are some of the key Hadoop-related operations that the Hadoop administrator should know:
- Planning the cluster and deciding on the number of nodes based on the estimated amount of data the cluster is going to serve.
- Installing and upgrading Apache Hadoop on a cluster.
- Configuring and tuning Hadoop using the various configuration files available within Hadoop.
- Understanding all the Hadoop daemons, along with their roles and responsibilities in the cluster.
- Reading and interpreting Hadoop logs.
- Adding and removing nodes in the cluster.
- Rebalancing nodes in the cluster (a short scripted sketch of node refresh and rebalancing follows this list).
- Employing security using an authentication and authorization system such as Kerberos.
- Performing backups and recovery. Almost all organizations have a policy of backing up their data, and it is the administrator's responsibility to carry out this activity, so an administrator should be well versed in server backup and recovery operations.
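As referenced in the list above, node changes and rebalancing are normally driven from the command line; the sketch below simply wraps the standard dfsadmin and balancer commands. The threshold value is an assumed example, and older releases invoke these as hadoop dfsadmin and hadoop balancer rather than through the hdfs entry point.

```python
# Sketch: refresh the namenode's include/exclude node lists after adding or
# decommissioning datanodes, then rebalance block placement across the cluster.
# The threshold is an assumed example; older releases use "hadoop dfsadmin"
# and "hadoop balancer" instead of the "hdfs" command.
import subprocess

def refresh_nodes():
    # Re-reads the files referenced by dfs.hosts / dfs.hosts.exclude.
    subprocess.run(["hdfs", "dfsadmin", "-refreshNodes"], check=True)

def rebalance(threshold_pct=10):
    # Moves blocks until each datanode's utilization is within threshold_pct
    # of the cluster average.
    subprocess.run(["hdfs", "balancer", "-threshold", str(threshold_pct)], check=True)

if __name__ == "__main__":
    refresh_nodes()
    rebalance(10)
```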