Summary
We have covered a lot of ground in this chapter, and we now have the foundation to explore MapReduce in more detail. Specifically, we learned how the key/value pair is a broadly applicable data model that is well suited to MapReduce processing. We also learned how to write mapper and reducer implementations using the 0.20 and above versions of the Java API.
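As a reminder of that key/value contract, here is a minimal plain-Java sketch (deliberately not using the Hadoop API, so it stands alone) of the word-count logic: map emits (word, 1) pairs, the framework groups values by key, and reduce sums each group. The class and method names are illustrative only.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of the MapReduce key/value flow: map emits
// (word, 1) pairs, values are grouped by key, and reduce sums each
// group -- the same contract a Hadoop Mapper and Reducer fulfil
// through their map() and reduce() methods.
public class KeyValueSketch {

    // "map": one input line in, a list of (word, 1) pairs out
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String word : line.split("\\s+")) {
            if (!word.isEmpty()) {
                out.add(Map.entry(word, 1));
            }
        }
        return out;
    }

    // "reduce": all values for one key in, a single total out
    static int reduce(String key, List<Integer> values) {
        return values.stream().mapToInt(Integer::intValue).sum();
    }

    // Drives the grouping (shuffle) step the framework normally performs
    static Map<String, Integer> run(List<String> lines) {
        Map<String, List<Integer>> grouped = new HashMap<>();
        for (String line : lines) {
            for (Map.Entry<String, Integer> pair : map(line)) {
                grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>())
                       .add(pair.getValue());
            }
        }
        Map<String, Integer> result = new HashMap<>();
        grouped.forEach((k, v) -> result.put(k, reduce(k, v)));
        return result;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("the cat sat", "the mat")));
    }
}
```

In a real job the grouping in `run` is performed by Hadoop's shuffle phase between the map and reduce tasks; only the two small functions are user code.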
We then moved on and saw how a MapReduce job is processed, and how the map and reduce methods are tied together by significant coordination and task-scheduling machinery. We also saw how certain MapReduce jobs require specialization in the form of a custom partitioner or combiner.
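The partitioning decision itself is simple to sketch. The following plain-Java fragment (again, not the Hadoop API) shows the hash-based rule Hadoop's default HashPartitioner applies, alongside a hypothetical custom rule that routes keys by their first character so that related keys land on the same reducer:

```java
// Plain-Java sketch of the decision a partitioner makes: each map
// output key is assigned to one of numReducers partitions.
public class PartitionerSketch {

    // Mirrors the default hash-based rule: mask off the sign bit
    // of the key's hash, then take it modulo the reducer count.
    static int hashPartition(String key, int numReducers) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReducers;
    }

    // Hypothetical custom rule: route by the key's first character,
    // so all keys sharing a leading letter reach the same reducer.
    static int prefixPartition(String key, int numReducers) {
        int first = key.isEmpty() ? 0 : key.charAt(0);
        return first % numReducers;
    }

    public static void main(String[] args) {
        System.out.println(hashPartition("apple", 4));
        System.out.println(prefixPartition("apple", 4));
        // "apricot" shares a first letter, so it gets the same
        // partition as "apple" under the custom rule
        System.out.println(prefixPartition("apricot", 4));
    }
}
```

A custom partitioner in Hadoop replaces exactly this one decision; everything else about the shuffle stays the same.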
We also learned how Hadoop reads data from and writes data to the filesystem. It uses the concepts of InputFormat and OutputFormat to handle the file as a whole, and RecordReader and RecordWriter to translate the format to and from key/value pairs.
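To make that translation concrete, here is a plain-Java sketch of what the line-oriented record reading behind TextInputFormat amounts to: the raw file content becomes (byte offset, line) key/value pairs, which is the shape of input each mapper then receives. The class and method names are illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Plain-Java sketch of line-oriented record reading: raw file
// content is translated into (byte offset, line text) pairs,
// the key/value shape handed to each map invocation.
public class RecordReaderSketch {

    static Map<Long, String> readRecords(String fileContent) {
        Map<Long, String> records = new LinkedHashMap<>();
        long offset = 0;
        for (String line : fileContent.split("\n", -1)) {
            records.put(offset, line);
            offset += line.getBytes().length + 1; // +1 for the newline
        }
        return records;
    }

    public static void main(String[] args) {
        System.out.println(readRecords("first line\nsecond line"));
    }
}
```

A RecordWriter performs the inverse translation on the output side, turning the reducer's key/value pairs back into the bytes of the result file.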
With this knowledge, we will now move on to a case study in the next chapter, which demonstrates the ongoing development and...