How to import/export hbase data via hdfs (hadoop commands) - hadoop

I have saved my crawled data from Nutch in HBase, whose file system is HDFS. Then I copied my data (one HBase table) from HDFS directly to a local directory with the command
hadoop fs -copyToLocal /hbase/input ~/Documents/output
After that, I copied that data back into another HBase instance (on another system) with the following command
hadoop fs -copyFromLocal ~/Documents/input /hbase/mydata
It is saved in HDFS, and when I use the list command in the HBase shell it shows 'mydata' as another table, but when I run the scan command it says there is no table named 'mydata'.
What is the problem with the above procedure?
In simple words:
I want to copy an HBase table to my local file system using a hadoop command
Then I want to save it directly into HDFS on another system using a hadoop command
Finally, I want the table to appear in HBase and display its data like the original table

If you want to export a table from one HBase cluster and import it into another, use either of the following methods:
Using Hadoop
Export
$ bin/hadoop jar <path/to/hbase-{version}.jar> export \
<tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
NOTE: Copy the output directory in HDFS from the source cluster to the destination cluster
Import
$ bin/hadoop jar <path/to/hbase-{version}.jar> import <tablename> <inputdir>
Note: Both outputdir and inputdir are in hdfs.
Using Hbase
Export
$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export \
<tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
Copy the output directory in HDFS from the source cluster to the destination cluster
Import
$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>
Reference: Hbase tool to export and import
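The steps above can be strung together into one script. The table name, backup path, jar path, and destination namenode below are placeholders for illustration, and the script defaults to a dry run that only prints the commands:

```shell
#!/bin/sh
# Sketch of the export -> copy -> import round trip. RUN defaults to echo so
# the commands are printed rather than executed; set RUN= on a real cluster.
RUN="${RUN:-echo}"

TABLE="mytable"                        # hypothetical table name
EXPORT_DIR="/backup/${TABLE}"          # HDFS path on the source cluster
DEST_NN="hdfs://dest-namenode:8020"    # hypothetical destination namenode
HBASE_JAR="path/to/hbase-VERSION.jar"  # placeholder for the real jar path

# 1. Export the table to HDFS on the source cluster.
$RUN bin/hadoop jar "$HBASE_JAR" export "$TABLE" "$EXPORT_DIR"
# 2. Copy the exported files between clusters (no local hop needed).
$RUN hadoop distcp "$EXPORT_DIR" "${DEST_NN}${EXPORT_DIR}"
# 3. Run the import on the destination cluster; the table must already
#    exist there with the same column families.
$RUN bin/hadoop jar "$HBASE_JAR" import "$TABLE" "$EXPORT_DIR"
```

Note that Import does not create the table, which is likely why the raw copyToLocal/copyFromLocal approach in the question failed: the table's metadata never reached the destination cluster.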

If you prefer an HBase command to back up HBase tables, you can use the HBase ExportSnapshot tool, which copies the HFiles, logs, and snapshot metadata to another file system (local/HDFS/S3) using a MapReduce job.
Take snapshot of the table
$ ./bin/hbase shell
hbase> snapshot 'myTable', 'myTableSnapshot-122112'
Export to the required file system
$ ./bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot 'myTableSnapshot-122112' -copy-to fs://path_to_your_directory
You can export it back from the local file system to hdfs://srv2:8082/hbase and run the restore command from the HBase shell to recover the table from the snapshot.
$ ./bin/hbase shell
hbase> disable 'myTable'
hbase> restore_snapshot 'myTableSnapshot-122112'
Reference: HBase Snapshots
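The export step can be scripted around the shell commands above; the destination URI is an assumption and the script below defaults to a dry run that only prints the command:

```shell
#!/bin/sh
# Dry-run sketch of the snapshot export step. RUN defaults to echo so the
# command is printed rather than executed; set RUN= on a real cluster.
RUN="${RUN:-echo}"

SNAPSHOT="myTableSnapshot-122112"   # snapshot name from the example above
DEST="hdfs://srv2:8082/hbase"       # hypothetical destination file system

# snapshot 'myTable', 'myTableSnapshot-122112'  <- run first in the hbase shell
$RUN bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
    -snapshot "$SNAPSHOT" -copy-to "$DEST"
# disable 'myTable' and restore_snapshot 'myTableSnapshot-122112'
# <- then run those in the hbase shell on the destination cluster
```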

Related

How can I solve the error "file:/user/hive/warehouse/records is not a directory or unable to create one"?

hive> CREATE TABLE records (year STRING, temperature INT, quality INT)
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '\t';
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:file:/user/hive/warehouse/records is not a directory or unable to create one)
How can I solve the error?
Where is /user/hive/warehouse/ located? On my local ext4 filesystem under Ubuntu, there is no such path as /user/hive/warehouse/.
How can I get information about, i.e. examine, /user/hive/warehouse/?
You should create the /user/hive/warehouse folder in the HDFS file system before running Hive commands.
Hive internally uses the Hadoop HDFS file system to store database data. You can check the HDFS directory path in the hive-default.xml and/or hive-site.xml configuration files, or in the Hive terminal using the command below
hive> set hive.metastore.warehouse.dir;
As mentioned, Hive uses Hadoop, so:
Hadoop must be installed and running
The HADOOP_HOME environment variable must be set
export HADOOP_HOME=hadoop-install-dir
export PATH=$PATH:$HADOOP_HOME/bin
Directories must be created in the HDFS file system and Hive must be given access to them
hadoop fs -mkdir -p /tmp
hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -chmod g+w /tmp
hadoop fs -chmod g+w /user/hive/warehouse
To list directories in the HDFS file system:
hadoop fs -ls /user
hadoop fs -ls /
hadoop fs -ls /user/hive/
Hive Wiki page
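The directory setup above can be collected into one small script; it defaults to a dry run that prints the commands (set RUN= to execute against a real HDFS):

```shell
#!/bin/sh
# Create the directories Hive's default warehouse location expects and make
# them group-writable. RUN=echo turns this into a dry run.
RUN="${RUN:-echo}"

for dir in /tmp /user/hive/warehouse; do
  $RUN hadoop fs -mkdir -p "$dir"
  $RUN hadoop fs -chmod g+w "$dir"
done
```

If hive.metastore.warehouse.dir points somewhere other than the default, substitute that path for /user/hive/warehouse.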

Hive move & restore some partitions

I have a 50 TB managed Hive table, partitioned by date, from which I want to move some old partitions to an external HDD so that I can restore them later if required.
The scripts are as follows:
Move out:
$ hdfs dfs -get ${HIVE_WAREHOUSE_TABLE_PATH}/ingest_date=2016-01-01 ${LOCAL_TABLE_PATH}/2016-01-01
$ hdfs dfs -rm -r -skipTrash ${HIVE_WAREHOUSE_TABLE_PATH}/ingest_date=2016-01-01
$ hive -e "ALTER TABLE ${TABLE} DROP IF EXISTS PARTITION (ingest_date='2016-01-01') PURGE;"
Restore:
$ hdfs dfs -put ${LOCAL_TABLE_PATH}/2016-01-01 ${HIVE_WAREHOUSE_TABLE_PATH}/ingest_date=2016-01-01
$ hive -e "ALTER TABLE ${TABLE} ADD PARTITION (ingest_date='2016-01-01') LOCATION '${HIVE_WAREHOUSE_TABLE_PATH}/ingest_date=2016-01-01';"
Am I missing something in the above strategy?
I have tried:
$ hive --hivevar local_path=${LOCAL_TABLE_PATH} -e "EXPORT TABLE myDatabase.theTable PARTITION (ingest_date='2016-01-01') to '${local_path}/2016-01-01';"
but this takes too long to copy a year's worth of partitions, which I am trying to avoid.
Thank you,
Gee
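For reference, the move-out and restore steps in the question can be folded into one parameterized script. The table name and paths below are placeholders, and the script defaults to a dry run that only prints the commands:

```shell
#!/bin/sh
# Sketch of the move-out / restore pair from the question; set RUN= to
# execute for real. Table name and paths are hypothetical.
RUN="${RUN:-echo}"
TABLE="myDatabase.theTable"
WAREHOUSE="/user/hive/warehouse/thetable"   # managed table's HDFS path
LOCAL="/mnt/external/thetable"              # mount point of the external HDD
DAY="2016-01-01"

move_out() {
  $RUN hdfs dfs -get "$WAREHOUSE/ingest_date=$DAY" "$LOCAL/$DAY"
  $RUN hdfs dfs -rm -r -skipTrash "$WAREHOUSE/ingest_date=$DAY"
  $RUN hive -e "ALTER TABLE $TABLE DROP IF EXISTS PARTITION (ingest_date='$DAY') PURGE;"
}

restore() {
  $RUN hdfs dfs -put "$LOCAL/$DAY" "$WAREHOUSE/ingest_date=$DAY"
  $RUN hive -e "ALTER TABLE $TABLE ADD PARTITION (ingest_date='$DAY') LOCATION '$WAREHOUSE/ingest_date=$DAY';"
}

move_out
```

Since the restore re-adds the partition at its original location, MSCK REPAIR is not needed, but the data passes through the local disk in both directions.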

Oozie iterative workflow

I am building an application to ingest data from a MySQL DB into Hive tables. The app will be scheduled to execute every day.
The very first action is to read a Hive table to load the import-table info, e.g. name, type, etc., and create a list of tables to import in a file. Next, a Sqoop action transfers the data for each table in sequence.
Is it possible to create a shell-script Oozie action that iterates through the table list and launches an Oozie sub-workflow Sqoop action for each table in sequence? Could you provide some reference? Any suggestion of a better approach is also welcome!
I have come up with the following shell script containing the Sqoop action. It works fine with some environment-variable tweaking.
hdfs_path='hdfs://quickstart.cloudera:8020/user/cloudera/workflow/table_metadata'
table_temp_path='hdfs://quickstart.cloudera:8020/user/cloudera/workflow/hive_temp'
if hadoop fs -test -e "$hdfs_path"
then
  # each metadata file holds the name of one table to import
  for file in $(hadoop fs -ls "$hdfs_path" | grep -o -e "$hdfs_path/*.*"); do
    echo "${file}"
    TABLENAME=$(hadoop fs -cat "${file}")
    echo "$TABLENAME"
    sqoop import --connect jdbc:mysql://quickstart.cloudera:3306/retail_db \
      --table "$TABLENAME" --username=retail_dba --password=cloudera \
      --direct -m 1 --delete-target-dir --target-dir "$table_temp_path/$TABLENAME"
  done
fi
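The iteration itself can be checked without a cluster by pointing the loop at an ordinary local directory instead of the HDFS metadata path; this is a rough local simulation of what the script above does:

```shell
#!/bin/sh
# Simulate the table-metadata directory with local files so the per-file
# loop can be exercised without hadoop. Each file holds one table name.
meta=$(mktemp -d)
printf 'departments\n' > "$meta/t1"
printf 'orders\n' > "$meta/t2"

tables=""
for file in "$meta"/*; do
  TABLENAME=$(cat "$file")      # stands in for: hadoop fs -cat $file
  tables="$tables $TABLENAME"   # the real script launches sqoop import here
done
echo "imported:$tables"
rm -rf "$meta"
```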

Pig command to copy to HDFS from local FS of master node

I have this pig command executed through oozie:
fs -put -f /home/test/finalreports/accountReport.csv /user/hue/intermediateBingReports
/home/test/finalreports/accountReport.csv is created on the local filesystem of only one of the HDFS nodes. I recently added a new HDFS node, and this command fails on that node since /home/test/finalreports/accountReport.csv doesn't exist there.
What is the way to go for this?
I came across this, but it doesn't seem to work for me. I tried the following command:
hadoop fs -fs masternode:8020 -put /home/test/finalreports/accountReport.csv hadoopFolderName/
I get:
put: `/home/test/finalreports/accountReport.csv': No such file or directory
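One way to make the action tolerant of nodes that lack the file is to guard the put on local existence, so it becomes a no-op elsewhere. This is only a sketch (the paths come from the question, and the put is a dry run by default); whether it fits depends on which node Oozie schedules the action on:

```shell
#!/bin/sh
# Only attempt the upload when the source file exists on this node.
SRC="/home/test/finalreports/accountReport.csv"   # path from the question
DEST="/user/hue/intermediateBingReports"
RUN="${RUN:-echo}"                                # set RUN= to really upload

if [ -f "$SRC" ]; then
  $RUN hadoop fs -put -f "$SRC" "$DEST"
else
  echo "skip: $SRC not present on this node"
fi
```

A more robust fix is to write the report into HDFS in the first place, so every node sees the same path.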

Getting data in and out of Elastic MapReduce HDFS

I've written a Hadoop program which requires a certain layout within HDFS, and afterwards I need to get the files out of HDFS. It works on my single-node Hadoop setup, and I'm eager to get it working on tens of nodes within Elastic MapReduce.
What I've been doing is something like this:
./elastic-mapreduce --create --alive
JOBID="j-XXX" # output from creation
./elastic-mapreduce -j $JOBID --ssh "hadoop fs -cp s3://bucket-id/XXX /XXX"
./elastic-mapreduce -j $JOBID --jar s3://bucket-id/jars/hdeploy.jar --main-class com.ranjan.HadoopMain --arg /XXX
This is asynchronous, but when the job's completed, I can do this
./elastic-mapreduce -j $JOBID --ssh "hadoop fs -cp /XXX s3://bucket-id/XXX-output"
./elastic-mapreduce -j $JOBID --terminate
While this sort of works, it's clunky and not what I'd like. Is there a cleaner way to do this?
Thanks!
You can use distcp, which will copy the files as a MapReduce job:
# download from s3
$ hadoop distcp s3://bucket/path/on/s3/ /target/path/on/hdfs/
# upload to s3
$ hadoop distcp /source/path/on/hdfs/ s3://bucket/path/on/s3/
This makes use of your entire cluster to copy in parallel from S3.
(Note: the trailing slashes on each path are important when copying from directory to directory.)
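The trailing-slash requirement is easy to enforce when the paths come from variables, by normalizing both before building the command. The bucket and paths below are placeholders, and the command is echoed rather than run:

```shell
#!/bin/sh
# Normalize both paths to end in exactly one "/" before handing them to
# distcp, since directory-to-directory copies need the trailing slash.
SRC="s3://my-bucket/path/on/s3"   # hypothetical bucket
DST="/target/path/on/hdfs"

SRC="${SRC%/}/"                   # strip any trailing slash, then add one
DST="${DST%/}/"
echo hadoop distcp "$SRC" "$DST"
```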
@mat-kelcey, does distcp expect the files in S3 to have a minimum permission level? For some reason I have to set the permission levels of the files to "Open/Download" and "View Permissions" for "Everyone" for the files to be accessible from within the bootstrap or step scripts.
