Modern operating systems can run many concurrent processes and fully utilize all processor cores. In distributed computing, a larger task is typically broken down into smaller ones so that multiple processes can execute them concurrently. The results of these individual executions may then need to be combined, or aggregated, for final delivery. This combining step is called reduction.
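The split-execute-combine pattern above can be sketched with a short Python example (illustrative only; the function names `partial_sum` and `parallel_sum` are made up for this sketch). Each worker process computes a partial result, and a final reduction combines them:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each worker executes its smaller piece of the larger task.
    return sum(chunk)

def parallel_sum(data, n_workers=4):
    # Break the larger task down into smaller chunks, one per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(partial_sum, chunks)
    # Reduction: combine the individual results for final delivery.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum(list(range(1, 101))))  # 5050
```

The reduction here is a simple sum, but any associative combining operation (concatenation, max, merge) fits the same shape.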
This concept reappears in various forms. In functional programming, for example, it is common to implement data pipelines with map and reduce: the map step applies a function to each element of a list, and the reduce step combines the results into a single value. In big data processing, Hadoop uses a similar map-reduce model, except that it runs across multiple machines in a cluster. The DataFrames package contains functions that perform...
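The functional map and reduce steps described above can be illustrated with a minimal Python sketch using the built-in `map` and `functools.reduce`:

```python
from functools import reduce

words = ["map", "reduce", "combine"]

# Map: apply a function (here, len) to each element of the list.
lengths = list(map(len, words))

# Reduce: combine the mapped results into a single value.
total = reduce(lambda acc, n: acc + n, lengths, 0)

print(lengths)  # [3, 6, 7]
print(total)    # 16
```

The same pattern scales from a single list on one machine to a cluster: only the execution strategy changes, not the map-then-reduce structure.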