Introduction
HBase excels at giving real-time platforms random read/write access to data on disk using commodity hardware. There are several ways to get data into HBase, including the following:
- Put API
- BulkLoad tool
- MapReduce jobs
The Put API is the most straightforward way to write data into HBase. It is suitable only for small volumes of data, which makes it a good fit for site-facing applications and other real-time use cases.
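To make this concrete, here is a minimal sketch of a single Put, assuming the HBase 1.x+ Java client API; the table name users, the column family info, and the sample values are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("users"))) {
            // Row key "user1", column family "info", qualifier "email"
            // (all hypothetical names, for illustration only)
            Put put = new Put(Bytes.toBytes("user1"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("email"),
                          Bytes.toBytes("user1@example.com"));
            table.put(put);
        }
    }
}
```

Each table.put() call travels through the RegionServer write path, which is why this approach works well for individual real-time writes but not for loading large datasets.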
The BulkLoad tool runs a MapReduce job behind the scenes to load data into HBase tables. It generates files in HBase's internal file format (HFile) and then imports them directly into a live HBase cluster.
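As a hedged sketch of the final import step, the following uses the HBase 1.x Java client to hand a directory of already-generated HFiles to a live cluster; the table name users and the directory /tmp/hfiles are assumptions for illustration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName tableName = TableName.valueOf("users"); // hypothetical table
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin();
             Table table = connection.getTable(tableName);
             RegionLocator locator = connection.getRegionLocator(tableName)) {
            // Moves the HFiles under /tmp/hfiles (an assumed path) into the
            // regions of the target table without going through the write path.
            LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
            loader.doBulkLoad(new Path("/tmp/hfiles"), admin, table, locator);
        }
    }
}
```

Because the HFiles are adopted by the regions directly, this step bypasses the normal write path entirely, which is what makes bulk loading so much faster than issuing Puts.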
Note
For huge datasets or very write-intensive jobs, it's advisable to use the ImportTsv tool. Writing your own MapReduce job in conjunction with HFileOutputFormat is acceptable, but as the data grows it tends to lose the performance, scalability, and maintainability that any successful system needs...
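As a sketch of the MapReduce-plus-HFileOutputFormat approach mentioned above, the following job emits Puts from a hypothetical tab-separated input (row key, then email) and lets HFileOutputFormat2 wire in the sorting and partitioning needed to produce region-aligned HFiles; the paths, table name, and column names are assumptions:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class HFileGeneratorJob {

    // Parses one TSV line (rowkey<TAB>email) into a Put keyed by row.
    static class TsvToPutMapper
            extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] fields = line.toString().split("\t");
            byte[] row = Bytes.toBytes(fields[0]);
            Put put = new Put(row);
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("email"),
                          Bytes.toBytes(fields[1]));
            ctx.write(new ImmutableBytesWritable(row), put);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "hfile-generator");
        job.setJarByClass(HFileGeneratorJob.class);
        job.setMapperClass(TsvToPutMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Put.class);
        FileInputFormat.addInputPath(job, new Path("/input/users.tsv"));
        FileOutputFormat.setOutputPath(job, new Path("/tmp/hfiles"));

        TableName tableName = TableName.valueOf("users");
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(tableName);
             RegionLocator locator = connection.getRegionLocator(tableName)) {
            // Sets the reducer and TotalOrderPartitioner so the emitted
            // HFiles match the table's current region boundaries.
            HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
        }
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Once this job completes, the HFiles under the output directory can be handed to the bulk-load step shown earlier to bring the data into the live cluster.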