Using Sqoop
Sqoop provides an excellent way to import data in parallel from an existing RDBMS into HDFS. The imported data preserves the source table structure, and because the import runs in parallel, it is split across multiple files. These files can contain text delimited by ',', '|', and so on. After the imported records have been manipulated using MapReduce or Hive, the resulting output can be exported back to the RDBMS. The import can be run on demand or as a scheduled batch process (using a cron job).
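As a minimal sketch of this import/export cycle (not one of the recipe steps below), the following commands assume placeholder names such as yourMySqlHostName, yourDatabase, sqoopuser, the employees table, and the HDFS paths shown:

# Import a MySQL table into HDFS in parallel as comma-delimited text files
sqoop import --connect jdbc:mysql://yourMySqlHostName/yourDatabase \
  --username sqoopuser -P \
  --table employees \
  --target-dir /user/hbase/employees \
  --fields-terminated-by ',' \
  --num-mappers 4

# Export the processed result set from HDFS back to a MySQL table
sqoop export --connect jdbc:mysql://yourMySqlHostName/yourDatabase \
  --username sqoopuser -P \
  --table employees_summary \
  --export-dir /user/hbase/employees_summary \
  --input-fields-terminated-by ','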
Getting ready
Prerequisites:
The HBase and Hadoop clusters must be up and running.
Download Sqoop by running wget against http://mirrors.gigenet.com/apache/sqoop/1.4.6/sqoop-1.4.6.tar.gz
Untar it to /u/HbaseB using:
tar -zxvf sqoop-1.4.6.tar.gz
This will create a /u/HbaseB/sqoop-1.4.6 folder.
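The download and extract steps can be run together as shown in the following sketch, using the mirror URL and the /u/HbaseB directory from above; the SQOOP_HOME/PATH export is an assumed convenience step, not a required part of this recipe:

cd /u/HbaseB
wget http://mirrors.gigenet.com/apache/sqoop/1.4.6/sqoop-1.4.6.tar.gz
tar -zxvf sqoop-1.4.6.tar.gz
# Optional (assumed): make the sqoop command available on the PATH
export SQOOP_HOME=/u/HbaseB/sqoop-1.4.6
export PATH=$PATH:$SQOOP_HOME/bin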
A Sqoop user should be created in the target database with read/write access, and the DBAs should not place strict CPU, memory (RAM), or storage limits on this user.
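For MySQL, creating such a user might look like the following sketch; the user name sqoopuser, its password, and the database name yourDatabase are placeholder assumptions, and the exact grants should follow your DBA's policy:

CREATE USER 'sqoopuser'@'%' IDENTIFIED BY 'yourPassword';
-- Read access for imports and write access for exports (assumed minimal grants)
GRANT SELECT, INSERT, UPDATE ON yourDatabase.* TO 'sqoopuser'@'%';
FLUSH PRIVILEGES;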
How to do it…
- Log in to MySQL by executing the following command:
mysql -h yourMySqlHostName...
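The command above is truncated; a typical complete invocation might look like the following, where the user name sqoopuser and the database name yourDatabase are placeholder assumptions:

# Connect to the remote MySQL server; -p prompts for the password
mysql -h yourMySqlHostName -u sqoopuser -p yourDatabase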