How to use DistCp to directly convert data into tables in Hive? - hadoop

I am using DistCp to copy data from cluster 1 to cluster 2. I was able to copy the table data from cluster 1 into cluster 2 successfully. However, the data landed in HDFS as plain files (visible in the file browser), not as Hive tables.
Is there any direct way to convert this HDFS data into Hive tables (including data types, delimiters, etc.) using DistCp command(s)? I can certainly query HDFS to gather the data, but then I would have to convert the tables one by one. I'm looking for a more efficient way to do this. Thanks!
Example:
hadoop distcp hdfs://nn1:8020/source/a hdfs://nn1:8020/source/b hdfs://nn2:8020/destination

I haven't found any documentation showing that DistCp can copy Hive tables directly. However, the following approach worked for me, in case anyone runs into a similar situation:
--hive
export table <<<table_name>>> to '<<<hdfs path>>>';
#bash/shell
hadoop distcp source destination
--hive
import table <<<table_name>>> from '<<<hdfs path>>>';
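For anyone who wants a concrete example, here is a minimal sketch of the same flow; the table name sales, the export path, and the namenode addresses are hypothetical and must be replaced with your own.
--hive (cluster 1)
export table sales to '/tmp/export/sales';
#bash/shell
hadoop distcp hdfs://nn1:8020/tmp/export/sales hdfs://nn2:8020/tmp/export/sales
--hive (cluster 2)
import table sales from '/tmp/export/sales';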

Related

Can we use Sqoop to move any structured data file apart from moving data from RDBMS?

This question was asked to me in a recent interview.
As per my knowledge, we can use Sqoop to transfer data between RDBMS and Hadoop ecosystem components (HDFS, Hive, Pig, HBase).
Can someone please help me in finding answer?
As per my understanding, Sqoop can't move an arbitrary structured data file (like a CSV) to HDFS or other Hadoop ecosystem components like Hive, HBase, etc.
Why would you use Sqoop for this?
You can simply put any data file directly into HDFS using its REST, Web, or Java APIs.
Sqoop is not meant for this type of use case.
The main purpose of Sqoop import is to fetch data from an RDBMS in parallel.
Apart from that, Sqoop has the Sqoop import-mainframe tool.
The import-mainframe tool imports all sequential datasets in a partitioned dataset (PDS) on a mainframe to HDFS. A PDS is akin to a directory on open systems. The records in a dataset can contain only character data, and each record is stored as a single text field.
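To illustrate the distinction, here is a minimal sketch with hypothetical paths, JDBC URL, and credentials: a plain file goes straight into HDFS, while Sqoop is reserved for parallel imports from an RDBMS.
#bash/shell
# a structured file (e.g. CSV) is simply put into HDFS -- no Sqoop needed
hdfs dfs -put /local/data/customers.csv /data/raw/customers/
# Sqoop is used when the source is an RDBMS table
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user -P \
  --table customers \
  --target-dir /data/sqoop/customers \
  --num-mappers 4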

Save and access table-like data structure in hadoop

I want to save and access a table-like data structure (DS) in HDFS with MapReduce programming. Part of this DS is shown in the following picture. It has tens of thousands of columns and hundreds of rows, and all nodes should have access to it.
My question is: how can I save this DS in HDFS and access it with MapReduce programming? Should I use arrays? (Or Hive tables? Or HBase?)
Thank you.
HDFS is a distributed file system that stores your big files across distributed servers.
You can copy your files from the local file system to HDFS using the command
hadoop fs -copyFromLocal /source/local/path destination/hdfs/path
Once the copy is complete, an external Hive table can be created on destination/hdfs/path.
This table can then be queried using the Hive shell.
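For illustration, a minimal external-table sketch; the column names, types, and delimiter are assumptions and must match your actual files.
--hive
CREATE EXTERNAL TABLE my_table (
  id    INT,
  name  STRING,
  value DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/destination/hdfs/path';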
Do consider Hive for this scenario. If you want to do table-type processing like a SAS dataset, an R data.frame/data.table, or a pandas DataFrame, an equivalent is almost always possible in SQL. Hive provides a powerful SQL abstraction over the MapReduce and Tez engines. If you want to graduate to Spark at some point, you can read Hive tables into DataFrames. As @sumit pointed out, you just need to transfer your data from local to HDFS (using the HDFS copyFromLocal or put command) and define an external Hive table on top of it.
If you want to write custom MapReduce on this data, access the underlying Hive table data (most likely under /user/hive/warehouse). Read the data from stdin, parse it in the mapper (the field separator can be found using describe extended <hive_table>), and emit it in key-value pair format.
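As an example, a minimal Hadoop Streaming sketch (the jar location and warehouse path are assumptions); the mapper simply rewrites Hive's default Ctrl-A (\001) field delimiter to a tab, so the first column becomes the key and the rest the value.
#bash/shell
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
  -input /user/hive/warehouse/my_table \
  -output /tmp/my_table_kv \
  -mapper "tr '\001' '\t'" \
  -reducer cat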

HDFS and HBase: how do they work?

Hi everybody,
I'm quite new to big data. I have installed an HDFS + HBase test database and I use Talend Big Data (an ETL tool) for my tests.
I would like to know: if I put a file directly into HDFS, without going through HBase, can I never query that data? I mean, do I have to read the entire file if I want to filter for the data I want, is that right?
Thanks a lot for any help!
HDFS is just a distributed file system; you cannot query your files without going through an intermediate component.
HBase is a NoSQL database that persists your data on HDFS; use it when you need random access to your data.
If you want to store your files on HDFS as they are and query them, you can create an external table on top of them using Hive.
The best option is to use Hive on top of the files that are on HDFS. You can use bucketing and partitioning in Hive for performance improvement, as in the sketch below.
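A minimal sketch of a partitioned and bucketed external table; the columns, paths, and date partition are hypothetical.
--hive
CREATE EXTERNAL TABLE events (
  user_id BIGINT,
  action  STRING
)
PARTITIONED BY (event_date STRING)
CLUSTERED BY (user_id) INTO 16 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/events';
-- register an existing HDFS directory as one partition
ALTER TABLE events ADD PARTITION (event_date='2016-01-01')
  LOCATION '/data/events/2016-01-01';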

Moving hive data from one Hadoop cluster to another without using distcp command?

How can I move Hive data from one Hadoop cluster to another Hadoop cluster without using the distcp command, since we cannot use it? Do we have another option, such as Sqoop or Flume?
distcp is the efficient way to move huge amounts of data from one Hadoop cluster to another.
Sqoop and Flume cannot be used to transfer data from one Hadoop cluster to another. Sqoop is predominantly used to move data between Hadoop and relational databases, whereas Flume is used to ingest streaming data into Hadoop.
Your other options would be to use:
a high-throughput message queue like Kafka, but this would become more complicated than using distcp;
traditional hadoop fs shell commands like cp, or get followed by put (examples below).
FYI, when you are talking about Hive data, you should also consider keeping the Hive metadata (metastore) in sync between the clusters.
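For example, with hypothetical namenode addresses and paths:
#bash/shell
# pull to the local file system, then push -- works when the clusters cannot reach each other
hadoop fs -get hdfs://nn1:8020/user/hive/warehouse/sales /tmp/sales
hadoop fs -put /tmp/sales hdfs://nn2:8020/user/hive/warehouse/sales
# or a direct copy when both namenodes are reachable from one client
hadoop fs -cp hdfs://nn1:8020/user/hive/warehouse/sales hdfs://nn2:8020/user/hive/warehouse/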

Moving data from RDBMS to Hadoop, using SQOOP and FLUME

I am in the process of learning Hadoop and am stuck on a few concepts about moving data from a relational database to Hadoop and vice versa.
I have transferred files from MySQL to HDFS using SQOOP import queries. The files I transferred were structured datasets, not server log data. I recently read that we usually use Flume for moving log files into Hadoop.
My questions are:
1. Can we use SQOOP as well for moving log files?
2. If yes, which of SQOOP or FLUME is preferred for log files, and why?
1) Sqoop can be used to transfer data between any RDBMS and HDFS. To use Sqoop, the data has to be structured, usually specified by the schema of the database from which the data is being imported or exported. Log files are not always structured (depending on the source and type of log), so Sqoop is not used for moving log files.
2) Flume can collect and aggregate data from many different kinds of customizable data sources. It gives more flexibility in controlling which specific events to capture and use in a user-defined workflow before storing them in, say, HDFS.
I hope this clarifies the difference between Sqoop and Flume.
SQOOP is designed to transfer data from an RDBMS to HDFS, whereas FLUME is for moving large amounts of log data.
Both are different and specialized for different purposes.
For example, you can use SQOOP to import data via JDBC (which you cannot do with FLUME), and you can use FLUME to say something like "I want to tail 200 lines of the log file from this server".
Read more about FLUME here
http://flume.apache.org/
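For example, a minimal Flume agent configuration sketch (the agent, file, and path names are assumptions) that tails a log file with an exec source and writes it to HDFS:
# flume.conf -- names below are hypothetical
agent1.sources  = tail-src
agent1.channels = mem-ch
agent1.sinks    = hdfs-sink
agent1.sources.tail-src.type = exec
agent1.sources.tail-src.command = tail -F /var/log/app/server.log
agent1.sources.tail-src.channels = mem-ch
agent1.channels.mem-ch.type = memory
agent1.sinks.hdfs-sink.type = hdfs
agent1.sinks.hdfs-sink.hdfs.path = /flume/logs/app
agent1.sinks.hdfs-sink.hdfs.fileType = DataStream
agent1.sinks.hdfs-sink.channel = mem-ch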
SQOOP not only transfers data from RDBMS but also from NoSQL databases like MongoDB. You can directly transfer data to HDFS or Hive.
When transferring data to Hive, you do not have to create the table beforehand; Sqoop takes the schema from the source database itself.
Flume is used to fetch log data or streaming data.
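As an illustration of the direct-to-Hive path, a sketch with hypothetical connection details and names; --hive-import creates the Hive table from the source schema, so no CREATE TABLE is needed beforehand.
#bash/shell
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user -P \
  --table customers \
  --hive-import \
  --hive-table customers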
