Getting data out of Hadoop
We said that the data flow between Hadoop and a relational database is rarely a linear, single-direction process. Indeed, the situation where data is processed within Hadoop and the results are then inserted into a relational database is arguably the more common case. We will explore this now.
Writing data from within the reducer
When thinking about how to copy the output of a MapReduce job into a relational database, we encounter considerations similar to those we met when looking at data import into Hadoop.
The obvious approach is to modify a reducer to generate the output for each key and its associated values and then insert it directly into a database via JDBC. We no longer have to worry about partitioning the source columns, as we did in the import case, but we still need to think about how much load we are placing on the database and whether long-running tasks risk connection timeouts. In addition, just as in the mapper case, this approach tends to perform many individual insert statements, one per record, rather than more efficient batched writes.
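To make the batching concern concrete, here is a minimal, illustrative sketch of a reducer-side writer that buffers records and flushes them in batches instead of issuing one insert per record. The function name, table name, and batch size are hypothetical, and sqlite3 stands in for the JDBC connection a real reducer would hold; the same buffer-and-flush pattern applies to a JDBC `PreparedStatement` batch.

```python
import sqlite3

BATCH_SIZE = 3  # tiny for illustration; hundreds or more in practice


def write_reduced_output(records, conn, batch_size=BATCH_SIZE):
    """Insert (key, value) pairs, flushing in batches so we avoid
    one database round trip per record."""
    cur = conn.cursor()
    buffer = []
    for key, value in records:
        buffer.append((key, value))
        if len(buffer) >= batch_size:
            cur.executemany("INSERT INTO totals (k, v) VALUES (?, ?)", buffer)
            conn.commit()  # periodic commits also limit lock-holding time
            buffer.clear()
    if buffer:  # flush any final partial batch
        cur.executemany("INSERT INTO totals (k, v) VALUES (?, ?)", buffer)
        conn.commit()


# Stand-in for the reducer's output: four key/value results.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE totals (k TEXT, v INTEGER)")
write_reduced_output([("a", 1), ("b", 2), ("c", 3), ("d", 4)], conn)
count = conn.execute("SELECT COUNT(*) FROM totals").fetchone()[0]
print(count)  # 4
```

Grouping inserts this way trades a little memory in the reducer for far fewer round trips, which directly addresses the load and timeout concerns described above.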