How to handle new line characters in hive? - hadoop

I am exporting a table from Teradata to Hive. The table in Teradata has an address field that contains newline characters (\n). Initially I export the table from Teradata to a mounted filesystem path, and then I load the file into Hive. The record counts mismatch between the Teradata table and the Hive table, since the newline characters split records in Hive.
NOTE: I don't want to handle this through Sqoop. I want to handle the newline characters while loading into Hive from the local path.

I got this to work by creating an external table with the following options:
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001'
ESCAPED BY '\\'
STORED AS TEXTFILE;
Then I created a partition pointing to the directory that contains the data files (my table uses partitions),
i.e.
ALTER TABLE STG_HOLD_CR_LINE_FEED ADD PARTITION (part_key='part_week53') LOCATION '/ifs/test/schema.table/staging/';
NOTE: Be sure that when creating your data file you use '\' as the escape character.
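Pulled together, a minimal sketch of the full DDL; the column names here are hypothetical, while the delimiters and the partition mirror the statements above:
CREATE EXTERNAL TABLE STG_HOLD_CR_LINE_FEED (
  cust_id STRING,
  address STRING  -- hypothetical columns; address is the field with embedded newlines
)
PARTITIONED BY (part_key STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001'
ESCAPED BY '\\'
STORED AS TEXTFILE;

ALTER TABLE STG_HOLD_CR_LINE_FEED ADD PARTITION (part_key='part_week53')
LOCATION '/ifs/test/schema.table/staging/';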

The LOAD DATA command in Hive only copies the data file directly into the HDFS table location.
The only reason Hive would split on a newline is if the table is stored as TEXTFILE, which by default uses newlines as record separators, not field separators.
To redefine the table you need something like
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',' ESCAPED BY 'x'
LINES TERMINATED BY 'y'
where x is the escape character for fields that contain newlines and y is the record delimiter.
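As a concrete sketch (table and column names hypothetical): note that most Hive versions only accept '\n' in LINES TERMINATED BY, so in practice the escaping does the real work, and newlines inside a field must already be written as the escaped two-character sequence \n in the data file:
CREATE TABLE addresses_esc (
  id INT,
  address STRING  -- may contain escaped newlines, preserved thanks to ESCAPED BY
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',' ESCAPED BY '\\'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;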

Related

How to add file to Hive

I have a file where all the column delimiters show up in Notepad++ as control characters: EOT, SOH, ETX, ACK, BEL, BS, ENQ.
I know the schema of the table, but I am totally new to these technologies and I cannot load the file into the table. Can I do it through a UI, like a CSV file, and if yes, with what delimiter?
Thank you in advance for your help.
It is pretty easy. Let's create a simple table with one column:
CREATE TABLE test1 (col1 STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',';
Note the clause FIELDS TERMINATED BY ',': it declares that the fields are separated by ','. If the columns are separated by a tab instead, change it to '\t'.
Once the table is created we can load the file using the commands below.
If the file is on the local file system:
LOAD DATA LOCAL INPATH '<complete_local_file_path>' INTO TABLE test1;
If the file is in HDFS:
LOAD DATA INPATH '<complete_HDFS_file_path>' INTO TABLE test1;
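Since the question's file actually uses control characters (SOH and friends) as delimiters, the same pattern applies; a hedged sketch with hypothetical table and column names, using the octal escape '\001', which is SOH (0x01):
CREATE TABLE test2 (col1 STRING, col2 STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\001';
LOAD DATA LOCAL INPATH '<complete_local_file_path>' INTO TABLE test2;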
Hive is just an abstraction layer over HDFS, so you would add the file to HDFS in some folder, then build an EXTERNAL TABLE over top of it:
CREATE EXTERNAL TABLE name(...)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/path/to/folder/';
Can I do it through a UI, like a CSV file?
If you install HUE, then you could.

Table count is more than File record count in Hive

I'm using a SQL Server export file as the input for my Hive table (which has 40 columns). There are around 6 million rows in the data file, but when I load the file into the Hive table, I find the record count is higher than the row count in the file. The table has 15 records more than the input text file.
I suspect the presence of newline characters (\n) in the data, but due to the huge volume of data I'm unable to manually check and remove these characters from the data file.
Is there any way I can make my table count exactly equal to the file count? Can I make my load query treat those newline characters as data instead of record delimiters? Or is there some other issue?
If you are sqooping the input into HDFS/Hive, you can use the --hive-drop-import-delims or --hive-delims-replacement options of Sqoop.
Hive will have problems using Sqoop-imported data if your database's rows contain string fields that have Hive's default row delimiters (\n and \r characters) or column delimiters (\01 characters) present in them.
You can use the --hive-drop-import-delims option to drop those characters on import to give Hive-compatible text data. Alternatively, you can use the --hive-delims-replacement option to replace those characters with a user-defined string on import to give Hive-compatible text data.
These options should only be used if you use Hive's default delimiters and should not be used if different delimiters are specified.
Sqoop User Guide
Alternatively, if you are copying files onto HDFS using some other method, just run a replace script/command over the files.
It was as simple as running one Unix command to clean the source data:
sed -i 's/\r//g' <data_file>
After applying this command to the dataset to remove carriage returns, I was able to load the Hive table with the expected record count.

Delete row in hive external table

I loaded a text file into a Hive external table. The text file uses / as the column delimiter. Additionally, one of the columns contains newline characters. Because of that, there is a mismatch in the data stored in the external table. In my case the unique key is row_id, which contains values like 1_234; row_id is numeric. But because of the newline characters in the text file, some rows have text in row_id.
Is there any way to delete those rows in Hive, or how can I remove the newline characters from the text file in HDFS?
You will have to write a Hadoop job (streaming is an option) to clean your data before loading it into Hive.
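If the damage is confined to the malformed row_id values, one hedged alternative (table name hypothetical) is to rewrite the table from itself in Hive, keeping only the rows whose row_id matches the expected pattern:
INSERT OVERWRITE TABLE my_table
SELECT *
FROM my_table
WHERE row_id RLIKE '^[0-9]+(_[0-9]+)*$';  -- keep only ids shaped like 1_234
This drops the fragment rows but cannot stitch the split records back together, which is why cleaning the file first, as suggested above, is the more complete fix.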

How do I ignore brackets when loading an external table in Hive

I'm trying to load the output of a Pig script as an external table in Hive. Pig enclosed each row in brackets () (tuples?), like this:
(1,2,3,a)
(2,4,5,b)
(4,2,6,c)
and I can't find a way to tell Hive to ignore those brackets, which results in NULL values for the first column, as it is actually an integer.
Any thoughts on how to proceed?
I know I can use a FLATTEN command in Pig, but I would also like to learn how to deal with these files directly from Hive.
There is no way to do this in one step. You'd have to have another step, be it the use of flatten in Pig or an extra Hive INSERT INTO.
In Hive you could use split(string field, string pattern) several times to read from your external table and create the columns you want, then load that into a new table. However, I'd always lean towards having Pig output the format you want, unless something else that reads this file expects the data in that format; it will save an expensive re-read of all your data.
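A hedged sketch of that split() approach, assuming a hypothetical table pig_out with the four columns from the sample data; split() takes a Java regex, so the brackets have to be escaped:
SELECT CAST(split(first, '\\(')[1] AS INT) AS first,  -- '(1' -> '1'
       second,
       third,
       split(fourth, '\\)')[0] AS fourth              -- 'a)' -> 'a'
FROM pig_out;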
As Ben said, there is no way to do it in one step, but you can do it by creating one more temp table in Hive.
Not sure if I am making it more complicated with one more table, but it worked for me.
create external table A_TEMP (first string,second int,third int,fourth string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION '/user/hdfs/Adata';
Place your data under 'Adata' folder
create external table A (first int,second int,third int,fourth string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION '/user/hdfs/Afinaldata';
Now let's insert the data:
insert into table A
select cast(substr(first, 2, length(first) - 1) as int), second, third, substr(fourth, 1, length(fourth) - 1) from A_TEMP;
I know the type casting will hurt performance, but for the given scenario this is the best I could come up with.

handling newline character in hive

I have created a table in Hive as:
CREATE TABLE mytable (id INT, description STRING);
My data looks something like this:
1|This will return corrupt data since there is a ',' in the first string.
some text
Change the data
2|There is prob in reading data
sometext
After the data is loaded into Hive, since the default line terminator is \n, the description column cannot be read by Hive, hence it displays a NULL value. Can anyone suggest how to handle newlines before loading into Hive?
I know this question is old, but you have a couple of options. You can't control this with FIELDS TERMINATED BY, because that only controls what terminates the fields, not the records. Records in Hive are hard-coded to be terminated by the newline character (even though there is a LINES TERMINATED BY clause, it is not implemented).
1. Write a custom InputFormat that uses a RecordReader that understands non-newline-delimited records. Look at the code for LineReader/LineRecordReader and TextInputFormat.
2. Use a format other than text/ASCII, like Parquet (see the sketch after this list). I would recommend this regardless, as text is probably the worst format you can store data in anyway.
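For the second option, a minimal sketch of the Parquet route (table name hypothetical); Parquet stores each string with its length, so embedded newlines survive intact:
CREATE TABLE descriptions_parquet (
  id INT,
  description STRING  -- newlines inside this value are preserved
)
STORED AS PARQUET;
The catch is that the data has to arrive as Parquet or be inserted via INSERT ... SELECT from a source that already parses the records correctly; you cannot LOAD the broken text file into it directly.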
Try adding the property below in hive-site.xml, or set it just for the current Hive session:
hive.query.result.fileformat=SequenceFile
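For the session-level variant, run it before your query:
SET hive.query.result.fileformat=SequenceFile;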
By default Hive uses the newline character ('\n') as the record delimiter.
You can change the field delimiter using:
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
