Hive load data into HDFS - hadoop

I have a data set with 100+ columns per row. The question is: how can I load only selected columns into HDFS using Hive?
For example: col1, col2, col3 ... col50, col51 ... col99, col100. I need to load only the selected columns col1, col2, col34 and col99.
Approach 1:
1. Load all the columns.
2. Create a view based on the selected columns.
Approach 1 - cons: I would need to load all the columns unnecessarily, which consumes more storage space in HDFS, and I would have to write a big query to specify the columns. Is there any other, better approach?

Hive provides a tabular view on top of HDFS data. If your data is already in HDFS, you can create an external table on it to reference the existing data; you only need to put a schema over the data. This is a one-time effort, after which you can use all the features of Hive to explore and analyze the dataset. Hive also supports views.
Illustration
Sample data file: data.csv
1,col_1a,col1b
2,col_2a,col2b
3,col_3a,col3b
4,col_4a,col4b
5,col_5a,col5b
6,col_6a,col6b
7,col_7a,col7b
Load and verify data in HDFS
hadoop fs -mkdir /hive-data/mydata
hadoop fs -put data.csv /hive-data/mydata
hadoop fs -cat /hive-data/mydata/*
1,col_1a,col1b
2,col_2a,col2b
3,col_3a,col3b
4,col_4a,col4b
5,col_5a,col5b
6,col_6a,col6b
7,col_7a,col7b
Create a Hive table on top of the HDFS data in the default database
CREATE EXTERNAL TABLE default.mydata
(
id int,
data_col1 string,
data_col2 string
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION 'hdfs:///hive-data/mydata';
Query the Hive table
select * from default.mydata;
mydata.id mydata.data_col1 mydata.data_col2
1 col_1a col1b
2 col_2a col2b
3 col_3a col3b
4 col_4a col4b
5 col_5a col5b
6 col_6a col6b
7 col_7a col7b
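To come back to the original question, a view can then expose just the columns you need. The selection below is a minimal sketch using the illustration's schema (swap in your own col1, col2, col34, col99):
CREATE VIEW default.mydata_selected AS
SELECT id, data_col1
FROM default.mydata;

SELECT * FROM default.mydata_selected;
The underlying files stay where they are and the view only stores the query definition, so no extra HDFS space is consumed.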

Related

Confusion with the external tables in hive

I have created a Hive external table using the command below:
use hive2;
create external table depTable (depId int comment 'This is the unique id for each dep', depName string,location string) comment 'department table' row format delimited fields terminated by ","
stored as textfile location '/dataDir/';
Now, when I look at HDFS I can see the database, but there is no depTable inside the warehouse.
[cloudera@quickstart ~]$ hadoop fs -ls /user/hive/warehouse/hive2.db
[cloudera@quickstart ~]$
Above you can see that no table directory was created in this DB. As far as I know, external tables are not stored in the Hive warehouse. Am I correct? If yes, where is it stored?
But if I create the external table first (without LOCATION) and then load the data, I am able to see the file inside hive2.db.
hive> create external table depTable (depId int comment 'This is the unique id for each dep', depName string,location string) comment 'department table' row format delimited fields terminated by "," stored as textfile;
OK
Time taken: 0.056 seconds
hive> load data inpath '/dataDir/department_data.txt' into table depTable;
Loading data to table default.deptable
Table default.deptable stats: [numFiles=1, totalSize=90]
OK
Time taken: 0.28 seconds
hive> select * from deptable;
OK
1001 FINANCE SYDNEY
2001 AUDIT MELBOURNE
3001 MARKETING PERTH
4001 PRODUCTION BRISBANE
Now, if I run the hadoop fs command I can see this table under the database as below:
[cloudera@quickstart ~]$ hadoop fs -ls /user/hive/warehouse/hive2.db
Found 1 items
drwxrwxrwx - cloudera supergroup 0 2019-01-17 09:07 /user/hive/warehouse/hive2.db/deptable
If I drop the table, I can still see the table directory in HDFS as below:
[cloudera@quickstart ~]$ hadoop fs -ls /user/hive/warehouse/hive2.db
Found 1 items
drwxrwxrwx - cloudera supergroup 0 2019-01-17 09:11 /user/hive/warehouse/hive2.db/deptable
So, what is the exact behavior of external tables? When I create one using the LOCATION keyword, where does it get stored? And when I create one without LOCATION and load data into it, why does it end up in the warehouse in HDFS, and why isn't it deleted after I drop the table?
The main difference between EXTERNAL and MANAGED tables is in DROP TABLE/PARTITION behavior.
When you drop a MANAGED table/partition, the location with its data files is also removed.
When you drop an EXTERNAL table, the location with its data files remains as is.
UPDATE: starting with release 4.0.0 (HIVE-19981), setting TBLPROPERTIES ("external.table.purge"="true") on an external table makes a drop delete the data as well.
Both EXTERNAL and MANAGED tables are stored in the location specified in the DDL. You can create a table on top of an existing location that already contains data files, and it will work for EXTERNAL and MANAGED tables alike.
You can even create both EXTERNAL and MANAGED tables on top of the same location; see this answer for more details and tests: https://stackoverflow.com/a/54038932/2700344
If you specify a location, the data will be stored in that location for both types of tables. If you do not specify a location, the data will be in the default location, /user/hive/warehouse/database_name.db/table_name, for both managed and external tables.
Update: there can also be restrictions on the location depending on the platform/vendor (see https://stackoverflow.com/a/67073849/2700344); you may not be allowed to create managed/external tables outside their default allowed root location.
See also the official Hive docs on Managed vs External Tables.
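As a small sketch of the purge behavior mentioned in the update above (the table name is hypothetical), the property can be set on an existing external table, after which the next drop removes the files too:
ALTER TABLE my_ext_table SET TBLPROPERTIES ('external.table.purge'='true');
DROP TABLE my_ext_table;   -- with the property set, the data files are deleted as well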

how to validate a data transfer from an external database (oracle) to hdfs

I have a job that transfers data from Oracle to HDFS. I need an efficient way to validate this transfer, to make sure that all the rows are properly transferred.
A simple way is to take the row count from the source Oracle table:
select count(*) from tablename;
This gives you the number of rows in the Oracle table.
From the HDFS point of view, count the total number of lines (rows) in the HDFS files:
hadoop fs -cat /yourdestinationhdfsfiles/* | wc -l
Data validation strategy
Create a (temporary) Hive table with a structure similar to the Oracle table.
Take a few records from the target HDFS files, load them into the Hive table, and validate that the records and structure match (a manual validation process).
Note: this can also be done for the full data set, provided you have enough storage space and processing capacity.
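A rough sketch of that strategy, assuming the transfer lands comma-delimited files under /yourdestinationhdfsfiles and using hypothetical column names:
CREATE EXTERNAL TABLE default.oracle_copy_check
(
id int,
name string
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION 'hdfs:///yourdestinationhdfsfiles';

-- compare this with select count(*) from tablename; on the Oracle side
SELECT COUNT(*) FROM default.oracle_copy_check;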
Hope this helps!

how to preprocess the data and load into hive

I completed my Hadoop course and now I want to work with Hadoop. I want to understand the workflow from data ingestion through to visualizing the data.
I am aware of how the ecosystem components work, and I have built a Hadoop cluster with 8 datanodes and 1 namenode:
1 namenode -- ResourceManager, NameNode, SecondaryNameNode, Hive
8 datanodes -- DataNode, NodeManager
I want to know the following things:
I got the data as structured .tar files whose first 4 lines contain a description; I am a little confused about how to process this type of data.
1.a Can I directly process the data, given that these are tar files? If yes, how do I remove the data in the first four lines, or do I need to untar the files and strip the first 4 lines manually?
1.b I want to process this data using Hive.
Please suggest how to do that.
Thanks in advance.
Can I directly process the data as these are tar files?
Yes, see the solution below.
If yes, how do I remove the data in the first four lines?
Starting with Hive v0.13.0, there is a table property, tblproperties ("skip.header.line.count"="1"), set while creating a table, that tells Hive how many leading rows to ignore. To ignore the first four lines, use tblproperties ("skip.header.line.count"="4"), applied to the table the raw file is loaded into:
CREATE TABLE raw (line STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
tblproperties("skip.header.line.count"="4");
CREATE TABLE raw_sequence (line STRING)
STORED AS SEQUENCEFILE;
LOAD DATA LOCAL INPATH '/tmp/test.tar' INTO TABLE raw;
SET hive.exec.compress.output=true;
SET io.seqfile.compression.type=BLOCK; -- NONE/RECORD/BLOCK (see below)
INSERT OVERWRITE TABLE raw_sequence SELECT * FROM raw;
To view the data:
select * from raw_sequence
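As a quick check that the four description lines were actually skipped (assuming the tables above and a single input file), compare the row counts:
SELECT COUNT(*) FROM raw;            -- should be 4 fewer than the line count of the input file
SELECT COUNT(*) FROM raw_sequence;   -- should match the count from raw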
Reference: Compressed Data Storage
Follow the below steps to achieve your goal:
Copy the data (i.e. the .tar file) to the client system where Hadoop is installed.
Untar the file, manually remove the description lines, and save the result locally (a shell sketch for this is given after these steps).
Create the metadata (i.e. the table) in Hive based on the description.
E.g.: if the description contains emp_id, emp_no, etc., create the table in Hive using this information. Also note the field separator used in the data file and use the corresponding field separator in the CREATE TABLE query. Assuming the file contains two columns separated by a comma, below is the syntax to create the table in Hive.
Create table tablename (emp_id int, emp_no int)
Row Format Delimited
Fields Terminated by ','
Since the data is in a structured format, you can load it into the Hive table using the command below.
LOAD DATA LOCAL INPATH '/LOCALFILEPATH' INTO TABLE TABLENAME;
Now the local data will be copied to HDFS and loaded into the Hive table.
Finally, you can query the hive table using SELECT * FROM TABLENAME;
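A minimal shell sketch for steps 1 and 2 above; the archive and file names are hypothetical, so adjust them to your data:
# unpack the archive into the current directory
tar -xf data.tar

# drop the first 4 description lines and keep the rest
tail -n +5 employees_raw.txt > employees.txt

# employees.txt is now the local file to reference in LOAD DATA LOCAL INPATH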

Data in HDFS files not seen under hive table

I have to create a Hive table from data present in Oracle tables.
I'm running a Sqoop import, thereby converting the Oracle data into HDFS files, and then creating a Hive table on top of those HDFS files.
The Sqoop job completes successfully and the files get generated in the HDFS target directory.
Then I run the CREATE TABLE script in Hive. The table gets created, but it is empty; no data is seen in the Hive table.
Has anyone faced a similar problem?
Hive's default field delimiter is Ctrl-A; if you don't specify any delimiter, it will use that default. Add the line below to your Hive script, adjusted to whatever delimiter your Sqoop import actually used:
row format delimited fields terminated by '\t'
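Put together, a sketch of such a DDL pointing at the Sqoop target directory could look like the following (table name, columns, delimiter and path are hypothetical and must match your import):
CREATE EXTERNAL TABLE default.oracle_import
(
id int,
name string
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
LOCATION 'hdfs:///sqoop/target/dir';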
Your Hive script and your expectation are wrong. You are trying to create a partitioned table directly on top of the data you have already imported; partitions don't work that way. If your table had no partitions, you would be able to see the data.
Basically, if you want a partitioned table, you can't create it directly on the underlying data as you tried above. To get Hive partitions, load the data from an intermediate (staging) table, or from that Sqoop directory, into your partitioned table.
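A rough sketch of that staging-to-partitioned load, with hypothetical names and a hypothetical partition column:
-- staging table on top of the Sqoop output directory
CREATE EXTERNAL TABLE staging_emp (id int, name string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 'hdfs:///sqoop/target/dir';

-- partitioned target table
CREATE TABLE emp_part (id int, name string)
PARTITIONED BY (load_date string);

-- copy the staged rows into one partition
INSERT OVERWRITE TABLE emp_part PARTITION (load_date='2019-01-17')
SELECT id, name FROM staging_emp;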

How to Load Data into Hive from HDFS

I am trying to load data into Hive from HDFS, but I observed that the data is being moved: after loading it into Hive, when I look at HDFS, the data I loaded is no longer present in its original location. Can you please explain this behaviour with an example?
If you would like to create a table in Hive from data in HDFS without moving the data into /user/hive/warehouse/, you should use the optional EXTERNAL and LOCATION keywords. For example, the following CREATE TABLE statement:
hive> CREATE EXTERNAL TABLE userline(line STRING) ROW FORMAT
DELIMITED FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/home/admin/userdata';
Without those keywords, Hive will move your data from its HDFS location into /user/hive/warehouse (and if the table is dropped, the data is also deleted).
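A short sketch of the move behavior the question describes, with hypothetical paths and table name; with a managed table, LOAD DATA INPATH moves the file out of its original HDFS directory into the warehouse:
hadoop fs -ls /data/input                        # source file is visible here before the load
hive -e "LOAD DATA INPATH '/data/input/file.csv' INTO TABLE default.mytable;"
hadoop fs -ls /data/input                        # the file is gone from the source directory
hadoop fs -ls /user/hive/warehouse/mytable       # it now lives under the table's warehouse directory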
