load text to Orc file - hadoop

How to load a text file into a Hive ORC external table?
create table MyDB.TEST (
Col1 String,
Col2 String,
Col3 String,
Col4 String)
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
I have already created the above table as ORC, but while fetching data from the table it shows the error below:
Failed with exception
java.io.IOException:org.apache.orc.FileFormatException: Malformed ORC file hdfs://localhost:9000/Ext/sqooporc/part-m-00000. Invalid
postscript.

There are multiple steps to that. The details follow.
First, create a Hive table that can read the plain text file. Assuming your file is comma delimited and lives in an HDFS directory, say /user/data/ (the LOCATION clause expects a directory, not a single file), the syntax will be:
create table MyDB.TEST (
Col1 String,
Col2 String,
Col3 String,
Col4 String
)
row format delimited
fields terminated by ','
location '/user/data/';
Now you have a schema that is in sync with the format of the data you possess.
Create another table with ORC schema
Now create the ORC table as you were trying to earlier. Here is a simpler syntax for creating that table.
create table MyDB.TEST_ORC (
Col1 String,
Col2 String,
Col3 String,
Col4 String)
STORED AS ORC;
Your TEST_ORC table is empty at this point. You can populate it with the data from the TEST table using the following command.
INSERT OVERWRITE TABLE TEST_ORC SELECT * FROM TEST;
This statement will select all the records from the TEST table and write them to the TEST_ORC table. Since TEST_ORC is an ORC table, the data will be converted to ORC format on the fly as it is written.
You can even check the storage location of TEST_ORC table for ORC files.
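For instance, a quick way to verify that (the warehouse path below is only an illustrative guess at what DESCRIBE FORMATTED will report, not your actual location) is:
DESCRIBE FORMATTED MyDB.TEST_ORC;
-- then, from a shell, dump one of the files under the reported Location, e.g.:
-- hive --orcfiledump /user/hive/warehouse/mydb.db/test_orc/000000_0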
Now your data is in ORC format and your table TEST_ORC has the required schema to parse it. You may drop your TEST table now, if not needed.
Hope that helps!

Related

Insert overwrite to Hive table saves fewer records than the actual record number

I have a partitioned table tab and I want to create some tmp table test1 from it. Here is how I created the tmp table:
CREATE TABLE IF NOT EXISTS test1
(
COL1 string,
COL2 string,
COL3 string
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
STORED AS TEXTFILE;
Write to this table with:
INSERT OVERWRITE TABLE test1
SELECT TAB.COL1 as COL1,
TAB.COL2 as COL2,
TAB.COL3 as COL3
FROM TAB
WHERE PT='2019-05-01';
Then I count the records in test1: it has 94493486 records, while the following SQL returns a count of 149248486:
SELECT COUNT(*) FROM
(SELECT TAB.COL1 as COL1,
TAB.COL2 as COL2,
TAB.COL3 as COL3
FROM TAB
WHERE PT='2019-05-01') AS TMP;
Also, when I save the selected partition (PT is the partition column) to HDFS, the record count is correct:
INSERT OVERWRITE directory '/user/me/wtfhive' row format delimited fields terminated by '|'
SELECT TAB.COL1 as COL1,
TAB.COL2 as COL2,
TAB.COL3 as COL3
FROM TAB
WHERE PT='2019-05-01';
My Hive version is 3.1.0, which comes with Ambari 2.7.1.0. Does anyone have any idea what may cause this issue? Thanks.
=================== UPDATE =================
I found something that might be related to this issue.
The table tab uses ORC as its storage format. Its data was imported from the ORC data files of another table, in another Hive cluster, with the following script:
LOAD DATA INPATH '/data/hive_dw/db/tablename/pt=2019-04-16' INTO TABLE tab PARTITION(pt='2019-04-16');
As the two tables share the same format, the loading procedure basically just moves the data files from the HDFS source directory to the Hive directory.
With the following procedure, however, I can load without any issue (a HiveQL sketch follows the list):
export data from ORC table tab to HDFS text file
load from the text file to a Hive temp table
load data back to tab from the temp table
now I can select/export from tab to other tables without any record missing
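A minimal HiveQL sketch of that workaround, assuming tab has exactly the three columns above plus the pt partition, and using made-up names for the export directory and the temp table:
-- 1. export the ORC partition to plain text on HDFS
INSERT OVERWRITE DIRECTORY '/tmp/tab_export'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
SELECT COL1, COL2, COL3 FROM TAB WHERE PT='2019-05-01';
-- 2. load the text files into a temporary text table
CREATE TABLE tab_txt_tmp (COL1 string, COL2 string, COL3 string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
STORED AS TEXTFILE;
LOAD DATA INPATH '/tmp/tab_export' INTO TABLE tab_txt_tmp;
-- 3. load the data back into tab from the temp table
INSERT OVERWRITE TABLE TAB PARTITION (PT='2019-05-01')
SELECT COL1, COL2, COL3 FROM tab_txt_tmp;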
I suspect the issue is in the ORC format. I just don't understand why exporting to an HDFS text file works without any problem, but exporting to another table (no matter what storage format the other table uses) loses data.
Use the SerDe properties below when creating the text table (with the separator matching the '|' delimiter used above):
CREATE TABLE IF NOT EXISTS test1
(
COL1 string,
COL2 string,
COL3 string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
"separatorChar" = "|",
"quoteChar" = "\""
)
STORED AS TEXTFILE;

Can we use TEXT FILE format for Hive Table with Snappy compression?

I have a Hive external table in HDFS and I am trying to create a Hive managed table on top of it. I am using the TEXTFILE format with Snappy compression, but I want to know how that helps the table.
CREATE TABLE standard_cd
(
last_update_dttm TIMESTAMP,
last_operation_type CHAR (1) ,
source_commit_dttm TIMESTAMP,
transaction_dttm TIMESTAMP ,
transaction_type CHAR (1)
)
PARTITIONED BY (process_dt DATE)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
STORED AS TEXTFILE
TBLPROPERTIES ("orc.compress" = "SNAPPY");
Let me know if there are any issues with creating it in this format.
As such, there is no issue while creating it, but there is a difference in the table properties:
Table created and stored as TEXTFILE:
Table created and stored as ORC:
although the size of both tables was the same after loading some data.
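Note that "orc.compress" is an ORC-specific property, so on a TEXTFILE table it is simply ignored. If the goal is Snappy-compressed text output, a rough sketch (standard Hive/Hadoop session settings; the source table name is a placeholder) is to enable output compression before inserting:
SET hive.exec.compress.output=true;
SET mapreduce.output.fileoutputformat.compress=true;
SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
INSERT OVERWRITE TABLE standard_cd PARTITION (process_dt='2019-05-01')
SELECT last_update_dttm, last_operation_type, source_commit_dttm, transaction_dttm, transaction_type
FROM source_cd_staging;  -- placeholder source table
Keep in mind that Snappy-compressed plain text files are not splittable, which is one reason columnar formats such as ORC or Parquet with Snappy are usually preferred for large tables.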
Also check the documentation on the ORC file format.

hive: external partitioned table without location

Is it possible to create an external partitioned table without a location? I want to add all the locations later, together with the partitions.
I tried:
CREATE EXTERNAL TABLE IF NOT EXISTS a.b
(line STRING)
COMMENT 'abc'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\n'
STORED AS TEXTFILE
PARTITIONED BY day;
but I got ParseException: missing EOF at 'PARTITIONED' near 'TEXTFILE'
I don't think so, judging by the documentation on ALTER TABLE ... SET LOCATION.
But anyway, I think your query has some errors, and the correct script would be:
CREATE EXTERNAL TABLE IF NOT EXISTS a.b
(line STRING)
COMMENT 'abc'
PARTITIONED BY (day String)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\n'
STORED AS TEXTFILE
;
I think the issue is that you have not specified a data type for your partition column "day". Also, you can create a Hive external table without a location and use the ALTER TABLE options later to change the location.
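For example, after creating the table with the corrected DDL above, the partitions and their locations can be attached later (the paths here are made-up placeholders):
ALTER TABLE a.b ADD PARTITION (day='2019-05-01') LOCATION '/data/logs/2019-05-01';
ALTER TABLE a.b ADD PARTITION (day='2019-05-02') LOCATION '/data/logs/2019-05-02';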

Impala minimum DDL

I know that we can create an Impala table like
CREATE EXTERNAL TABLE SCHEMA.TableName LIKE PARQUET
'/rootDir/SecondLevelDir/RawFileThatKnowsDataTypes.parquet'
But I am not sure if Impala can create a table from a file (preferably a text file) that has no known formatting. So, in other words, if I just dump a random file into Hadoop with a put command, can I wrap an Impala DDL around it and have a table created? Can anyone tell me?
If your file is newline separated, I believe it should work if you provide the column delimiter with the ROW FORMAT clause, since TEXTFILE is the default format. Just get rid of your LIKE clause, point LOCATION at the directory containing the file, and choose names and data types for your columns, something like this:
CREATE EXTERNAL TABLE SCHEMA.TableName (col1 STRING, col2 INT, col3 FLOAT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/rootDir/SecondLevelDir';
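One caveat worth noting (general Impala behaviour, not something from the question): if more files are dropped into that directory later with hdfs dfs -put, Impala will not see them until the table metadata is refreshed:
REFRESH SCHEMA.TableName;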

How to use Parquet in my current architecture?

My current system is architected in this way.
A log parser parses the raw logs every 5 minutes, in TSV format, and writes the output to HDFS. I created a Hive table out of the TSV files on HDFS.
From some benchmarks, I found that Parquet can save up to 30-40% of the space usage. I also found that I can create a Hive table out of Parquet files starting with Hive 0.13. I would like to know if I can convert the TSV files to Parquet.
Any suggestion is appreciated.
Yes, in Hive you can easily convert from one format to another by inserting from one table to the other.
For example, if you have a TSV table defined as:
CREATE TABLE data_tsv
(col1 STRING, col2 INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';
And a Parquet table defined as:
CREATE TABLE data_parquet
(col1 STRING, col2 INT)
STORED AS PARQUET;
You can convert the data with:
INSERT OVERWRITE TABLE data_parquet SELECT * FROM data_tsv;
Or you can skip the separate Parquet table DDL by using a CREATE TABLE ... AS SELECT:
CREATE TABLE data_parquet STORED AS PARQUET AS SELECT * FROM data_tsv;
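Since new TSV files land every 5 minutes, one possible pattern (a sketch only, reusing the table names above and assuming the TSV staging directory is cleared or rotated between runs) is to append each batch to the Parquet table:
INSERT INTO TABLE data_parquet SELECT * FROM data_tsv;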
