How to force CTAS to generate a single file? - hadoop

I'm using HDP 2.5 with the Hive service. I create a Hive table using the query below:
create table Sample_table
row format delimited
fields terminated by '|'
stored as textfile
AS
select *
from sample_table_unique
where state='AL';
Alternatively, I can create an external table with a specific location.
My question: whenever I create the table (or an external table), the stored data is split into multiple files, like this:
/apps/hive/warehouse/sampledb/sample_table:
00000_0,
00001_0,
00002_0,
00003_0,
I don't want these split files; I want a single merged file such as 00000_0. I don't understand why this happens. How do I resolve this?

The SELECT statement runs a map-only or MapReduce job (depending on the query) to write data into the target table Sample_table from the source table sample_table_unique.
Based on the number of tasks, the number of files generated may vary.
To merge them into one, you can set these properties either for the session or permanently in hive-site.xml:
hive> SET hive.merge.mapfiles=true;
hive> SET hive.merge.mapredfiles=true;
hive> SET hive.merge.smallfiles.avgsize=16000000;
hive> SET hive.merge.size.per.task=256000000;
If you are using the Tez execution engine, use
hive> SET hive.merge.tezfiles=true;
instead of hive.merge.mapfiles and hive.merge.mapredfiles.
When the average output file size of a job is less than the hive.merge.smallfiles.avgsize value, Hive starts an additional map-reduce job to merge the output files into bigger files.
The values shown for hive.merge.smallfiles.avgsize and hive.merge.size.per.task are the defaults; adjust them according to your input size.
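Putting it together for the CTAS from the question, a minimal sketch (Tez engine assumed, as is typical on HDP 2.5; table and column names are taken from the question):
-- enable the extra merge stage for this session
SET hive.merge.tezfiles=true;
SET hive.merge.smallfiles.avgsize=16000000;
SET hive.merge.size.per.task=256000000;
-- the CTAS from the question; small task outputs should now be
-- combined into one (or a few) files, depending on total output size
CREATE TABLE Sample_table
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
STORED AS TEXTFILE
AS
SELECT * FROM sample_table_unique WHERE state = 'AL';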

Related

How to merge small files created by hive while inserting data into buckets?

I have a Hive table which contains call data records (CDRs). The table is partitioned on the phone number and bucketed on call_date. When I insert data into Hive, back-dated call_date values create small files in my buckets, which increases NameNode metadata and slows down performance.
Is there a way to merge these small files into one?
One way to control the size of files when inserting into a table using Hive is to set the parameters below:
set hive.merge.tezfiles=true;
set hive.merge.mapfiles=true;
set hive.merge.mapredfiles=true;
set hive.merge.size.per.task=128000000;
set hive.merge.smallfiles.avgsize=128000000;
This works for both the M/R and Tez engines and ensures that the files created are at or below 128 MB in size (you can alter that size according to your use case). Additional reading here: https://community.cloudera.com/t5/Community-Articles/ORC-Creation-Best-Practices/ta-p/248963
The easiest way to merge the files of an existing table is to remake it after running the above Hive commands in the session:
CREATE TABLE new_table LIKE old_table;
INSERT INTO new_table select * from old_table;
In your case, for ORC tables you can concatenate the files after creation:
ALTER TABLE table_name [PARTITION (partition_key = 'partition_value')] CONCATENATE;
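As a concrete example of the syntax above (the table and partition names here are hypothetical), concatenating a single partition of an ORC table would look like:
ALTER TABLE cdr_orc PARTITION (dt = '2020-01-01') CONCATENATE;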

Is there a way to prevent a Hive table from being overwritten if the SELECT query of the INSERT OVERWRITE does not return any results

I am developing a batch job that loads data into Hive tables from HDFS files. The flow of data is as follows:
Read the file received in HDFS using an external Hive table
INSERT OVERWRITE the final hive table from the external Hive table applying certain transformations
Move the received file to Archive
This flow works fine if there is a file in the input directory for the external table to read during step 1.
If there is no file, the external table will be empty and as a result executing step 2 will empty the final table. If the external table is empty, I would like to keep the existing data in the final table (the data loaded during the previous execution).
Is there a hive property that I can set so that the final table is overwritten only if we are overwriting it with some data?
I know that I can check if the input file exists using an HDFS command and conditionally launch the Hive requests. But I am wondering if I can achieve the same behavior directly in Hive, which would help me avoid this extra verification.
Try adding a dummy partition to your table, say LOAD_TAG, and use a dynamic partition load:
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE your_table PARTITION(LOAD_TAG)
select
col1,
...
colN,
'dummy_value' as LOAD_TAG
from source_table;
The partition value should always be the same in your case. Because a dynamic-partition INSERT OVERWRITE only replaces partitions for which the SELECT actually produces rows, an empty source table leaves the existing data untouched.
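A minimal sketch of the table definition this relies on (the column names are illustrative, not from the question):
CREATE TABLE your_table (
  col1 STRING,
  colN STRING
)
PARTITIONED BY (LOAD_TAG STRING);
If the SELECT in step 2 returns no rows, no partition is written, and the data already sitting under LOAD_TAG='dummy_value' from the previous run survives.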

how to preprocess the data and load into hive

I completed my Hadoop course and now I want to work on Hadoop. I want to know the workflow from data ingestion to data visualization.
I am aware of how the ecosystem components work, and I have built a Hadoop cluster with 8 datanodes and 1 namenode:
1 namenode --Resourcemanager,Namenode,secondarynamenode,hive
8 datanodes--datanode,Nodemanager
I want to know the following things:
I got the data as .tar files containing structured data, and the first 4 lines are a description. I am a little confused about how to process this type of data.
1.a Can I directly process the data even though these are tar files? If yes, how do I remove the data in the first four lines? Do I need to untar the files and remove the first 4 lines manually?
1.b I want to process this data using Hive.
Please suggest me how to do that.
Thanks in advance.
Can I directly process the data even though these are tar files?
Yes, see the solution below.
If yes, how do I remove the data in the first four lines?
Starting with Hive v0.13.0, there is a table property, tblproperties("skip.header.line.count"="1"), that tells Hive how many leading rows to ignore when reading a table's files. To ignore the first four lines, use tblproperties("skip.header.line.count"="4").
-- skip.header.line.count goes on raw, so the 4 description lines are ignored when the loaded file is read
CREATE TABLE raw (line STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
tblproperties("skip.header.line.count"="4");
CREATE TABLE raw_sequence (line STRING)
STORED AS SEQUENCEFILE;
LOAD DATA LOCAL INPATH '/tmp/test.tar' INTO TABLE raw;
SET hive.exec.compress.output=true;
SET io.seqfile.compression.type=BLOCK; -- NONE/RECORD/BLOCK
INSERT OVERWRITE TABLE raw_sequence SELECT * FROM raw;
To view the data:
select * from raw_sequence
Reference: Compressed Data Storage
Follow the below steps to achieve your goal:
1. Copy the data (i.e. the tar file) to the client system where Hadoop is installed.
2. Untar the file, manually remove the description lines, and save the result locally.
3. Create the metadata (i.e. the table) in Hive based on the description.
E.g. if the description contains emp_id, emp_no, etc., create the table in Hive using this information. Also make note of the field separator used in the data file and use the corresponding separator in the create table query. Assuming the file contains two columns separated by a comma, the syntax to create the table in Hive is:
Create table tablename (emp_id int, emp_no int)
Row Format Delimited
Fields Terminated by ','
Since the data is in a structured format, you can load it into the Hive table using the command below:
LOAD DATA LOCAL INPATH '/LOCALFILEPATH' INTO TABLE TABLENAME;
Now the local data will be copied to HDFS and loaded into the Hive table.
Finally, you can query the Hive table using SELECT * FROM TABLENAME;

Hive loading in partitioned table

I have a log file in HDFS with comma-delimited values. For example:
2012-10-11 12:00,opened_browser,userid111,deviceid222
Now I want to load this file into a Hive table which has the columns "timestamp" and "action" and is partitioned by "userid" and "deviceid". How can I ask Hive to take the last 2 columns in the log file as the partition columns for the table? All examples, e.g. "hive> LOAD DATA INPATH '/user/myname/kv2.txt' OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-15');", require the partition values to be defined in the script, but I want the partitions to be set up automatically from the HDFS file.
One solution is to create an intermediate non-partitioned table with all 4 columns, populate it from the file, and then run INSERT into first_table PARTITION (userid,deviceid) select timestamp,action,userid,deviceid from intermediate_table; but that is an additional task and we would have 2 very similar tables. Or we could create an external table as the intermediate.
Ning Zhang has a great response on the topic at http://grokbase.com/t/hive/user/114frbfg0y/can-i-use-hive-dynamic-partition-while-loading-data-into-tables.
The quick context is that:
LOAD DATA simply copies files; it doesn't read the data, so it cannot figure out what to partition on.
He suggests loading the data into an intermediate table first (or using an external table pointing to all the files) and then letting a dynamic partition insert kick in to load it into the partitioned table.
As mentioned in Denny Lee's answer, we need to involve a staging table (invites_stg), managed or external, and then INSERT from the staging table into the partitioned table (invites in this case).
Make sure these two properties are set:
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
And finally insert into invites:
INSERT OVERWRITE TABLE invites PARTITION (userid, deviceid) SELECT `timestamp`, action, userid, deviceid FROM invites_stg;
Refer to this link for help: http://www.edupristine.com/blog/hive-partitions-example
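A minimal sketch of the whole flow for the tables in the question (invites_stg and invites as staging and final tables; the STRING column types are assumptions):
-- staging table matching the raw comma-delimited log layout
CREATE TABLE invites_stg (
  `timestamp` STRING,
  action STRING,
  userid STRING,
  deviceid STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

LOAD DATA INPATH '/user/myname/kv2.txt' INTO TABLE invites_stg;

-- final table partitioned by the last two columns
CREATE TABLE invites (
  `timestamp` STRING,
  action STRING
)
PARTITIONED BY (userid STRING, deviceid STRING);

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- partition columns must come last in the SELECT
INSERT OVERWRITE TABLE invites PARTITION (userid, deviceid)
SELECT `timestamp`, action, userid, deviceid FROM invites_stg;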
I worked on this very same scenario, but instead what we did was create separate HDFS data files for each partition that needs to be loaded.
Since our data comes from a MapReduce job, we used MultipleOutputs in our Reducer class to multiplex the data into their corresponding partition files. Afterwards, it is just a matter of building the load script using the partition value taken from the HDFS file name.
How about
LOAD DATA INPATH '/path/to/HDFS/dir/file.csv' OVERWRITE INTO TABLE DB.EXAMPLE_TABLE PARTITION (PARTITION_COL_NAME='PARTITION_VALUE');
CREATE TABLE India (
OFFICE_NAME STRING,
OFFICE_STATUS STRING,
PINCODE INT,
TELEPHONE BIGINT,
TALUK STRING,
DISTRICT STRING,
POSTAL_DIVISION STRING,
POSTAL_REGION STRING,
POSTAL_CIRCLE STRING
)
PARTITIONED BY (STATE STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
Instruct Hive to dynamically load partitions:
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
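To complete that example, the dynamic-partition insert from a staging table (india_stg here is an assumed name for a staging table holding the same columns plus STATE) would be:
INSERT OVERWRITE TABLE India PARTITION (STATE)
SELECT OFFICE_NAME, OFFICE_STATUS, PINCODE, TELEPHONE, TALUK,
       DISTRICT, POSTAL_DIVISION, POSTAL_REGION, POSTAL_CIRCLE, STATE
FROM india_stg;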

Sequence File with Block Compression

I need to enable block compression for SequenceFile data. Below is the table, which will be stored as a SequenceFile.
create table lip_data_quality
( buyer_id bigint,
total_chkout bigint,
total_errpds bigint
)
partitioned by (dt string)
row format delimited fields terminated by '\t'
stored as sequencefile
location '/apps/hdmi-technology/b_apdpds/lip-data-quality'
;
With the above table, I get the data in compressed form by setting these properties:
set mapred.output.compress=true;
set mapred.output.compression.type=BLOCK;
set mapred.output.compression.codec=org.apache.hadoop.io.compress.LzoCodec;
So my question is: is that all I need to do to enable BLOCK compression with SequenceFile, or is there anything else? I was following this article: Hadoop.
Any suggestions would be appreciated.
Update:
I am loading the data into the above table by putting everything in an .hql file and running that file from the shell command prompt, changing the partition date each time I run it:
set mapred.output.compress=true;
set mapred.output.compression.type=BLOCK;
set mapred.output.compression.codec=org.apache.hadoop.io.compress.LzoCodec;
insert overwrite table lip_data_quality partition (dt='20120712')
SELECT query here which will give the output for the above table.
That should be fine then. You can also verify it by looking at the files on HDFS. After your load there should be a partition directory under the table's location, e.g. /apps/hdmi-technology/b_apdpds/lip-data-quality/dt=20120712 (or under /user/hive/warehouse for a default-location table). If you run
hadoop fs -cat
on one of the files in that folder, you should be able to see the SequenceFile header, which gives you basic info on the file, including the key/value classes and the compression codec used.
Set the properties below before submitting the job (when writing the output directly from a MapReduce job):
Configuration conf = job.getConfiguration();
conf.set("mapred.output.compress", "true");
conf.set("mapred.output.compression.type", "BLOCK");
conf.set("mapred.output.compression.codec", "org.apache.hadoop.io.compress.DefaultCodec");
Instead of DefaultCodec, one can use org.apache.hadoop.io.compress.LzoCodec.
