When Hadoop first appeared, MapReduce was its processing engine, and Java was the primary language for writing MapReduce jobs. Since Hadoop was mostly used as an analytics processing framework, a large share of its use cases involved data mining on legacy data warehouses, and these data warehouse applications were migrated to Hadoop. Most users of legacy data warehouses knew SQL; that was their core expertise, and learning a new programming language was time-consuming. A framework was therefore needed that would let SQL-skilled people write MapReduce jobs in an SQL-like language. Apache Pig was invented for this purpose. It also removed the complexity of writing multi-stage MapReduce pipelines, where the output of one job becomes the input to the next, as the short example below illustrates.
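To make the contrast concrete, here is a minimal Pig Latin sketch; the file path, field names, and schema are hypothetical, chosen only to show the shape of such a script:

    -- Load a tab-delimited log file (path and schema are hypothetical).
    logs = LOAD 'input/access_logs' AS (user:chararray, url:chararray, time:long);
    -- Count hits per user; GROUP plus COUNT replaces a hand-written map/reduce pair.
    by_user = GROUP logs BY user;
    counts = FOREACH by_user GENERATE group AS user, COUNT(logs) AS hits;
    -- Rank the users; ORDER BY becomes another MapReduce job that Pig
    -- chains onto the previous one automatically.
    ranked = ORDER counts BY hits DESC;
    STORE ranked INTO 'output/user_hits';

Pig's planner turns these few statements into a chain of MapReduce jobs and manages the intermediate data between them itself; expressing the same logic directly in Java MapReduce would require several job classes plus the glue code to feed the output of one job into the next.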
Pig
Apache Pig is a distributed processing tool that provides an abstraction over MapReduce and is used...