handling newline character in hive - hadoop

I have created a table in hive as
Create table(id int, Description String)
My data looks something like this:
1|This will return corrupt data since there is a ',' in the first string.
some text
Change the data
2|There is prob in reading data
sometext
After the data is loaded into Hive, since the default line terminator is \n, the description column cannot be read correctly by Hive, and hence it displays a NULL value. Can anyone suggest how to handle the newlines before loading into Hive?

I know this question is old, but you have a couple of options. You can't control this with FIELDS TERMINATED BY, because that only controls what terminates the fields, not the records. Records in Hive are hard-coded to be terminated by the newline character (even though there is a LINES TERMINATED BY clause, it is not implemented).
1. Write a custom InputFormat that uses a RecordReader that understands non-newline-delimited records. Look at the code for LineReader/LineRecordReader and TextInputFormat.
2. Use a format other than text/ASCII, like Parquet (see the sketch below). I would recommend this regardless, as text is probably the worst format you can store data in anyway.
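As a rough sketch of the second option (the table and column names here are placeholders, and it assumes the data reaches Hive through an INSERT or an upstream job rather than through a raw newline-delimited text file):

-- Parquet is a binary columnar format, so an embedded newline inside a
-- STRING value is just data, not a record separator.
CREATE TABLE descriptions_parquet (
  id          INT,
  description STRING
)
STORED AS PARQUET;

-- Hive 0.14+: values written this way keep their newlines intact
INSERT INTO TABLE descriptions_parquet
VALUES (1, 'This will return corrupt data\nsome text\nChange the data');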

Try adding the below property in hive-site.xml, or just set it for the current Hive session:
hive.query.result.fileformat=SequenceFile
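At the session level, the same thing can be done with a SET command:

-- per-session equivalent of the hive-site.xml property above
SET hive.query.result.fileformat=SequenceFile;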

By default Hive takes the newline character ('\n') as the delimiter.
You can change the delimiter using:
ROW FORMAT DELIMITED FIELDS TERMINATED BY ",";
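In context, that clause is part of the table definition, e.g. (table and column names are placeholders; note the caveat in the first answer that this changes the field delimiter, not the record delimiter):

CREATE TABLE sample_table (
  id          INT,
  description STRING
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
STORED AS TEXTFILE;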

Related

Loaded the data to Hive with wrong delimiter

I have 10 TB of data locally. I loaded the data using a single delimiter (~), but some fields in the data contain that delimiter within their text, so the data spills over into the next column. How can I alter that table to make it correct?

How to handle new line characters in hive?

I am exporting a table from Teradata to Hive. The table in Teradata has an address field which contains newline characters (\n). Initially I export the table to a mounted filesystem path from Teradata, and then I load the table into Hive. Record counts are mismatched between the Teradata table and the Hive table, since the newline characters are present in the data loaded into Hive.
NOTE: I don't want to handle this through Sqoop to bring in the data; I want to handle the newline characters while loading into Hive from the local path.
I got this to work by creating an external table with the following options:
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001'
ESCAPED BY '\\'
STORED AS TEXTFILE;
Then I created a partition pointing to the directory that contains the data files (my table uses partitions).
i.e.
ALTER TABLE STG_HOLD_CR_LINE_FEED ADD PARTITION (part_key='part_week53') LOCATION '/ifs/test/schema.table/staging/';
NOTE: Be sure that when creating your data file you use '\' as the escape character.
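Putting those pieces together, the full definition would look roughly like the following sketch; the column list is made up for illustration, while the table name, partition key, and storage options come from the statements above:

CREATE EXTERNAL TABLE STG_HOLD_CR_LINE_FEED (
  id      INT,
  address STRING
)
PARTITIONED BY (part_key STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\001'
  ESCAPED BY '\\'
STORED AS TEXTFILE;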
The LOAD DATA command in Hive only copies the data directly into the HDFS table location.
The only reason Hive would split on a newline is if you defined the table as stored as text, which by default uses newlines as record separators, not field separators.
To redefine the table you need something like
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',' ESCAPED BY 'x'
LINES TERMINATED BY 'y'
where x and y are, respectively, the escape character for fields containing newlines and the record delimiter.
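A concrete sketch with placeholder names, keeping the default newline as the record delimiter (as noted in the first answer, other LINES TERMINATED BY values are not implemented) and a backslash as the escape character:

CREATE TABLE address_data (
  id      INT,
  address STRING
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  ESCAPED BY '\\'
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;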

Table count is more than File record count in Hive

I'm using a file exported from SQL Server as the input for my Hive table (which has 40 columns). There are around 6 million rows in the data file, but when I load that file into the Hive table, I find the record count is higher than the row count in the file. The table has 15 more records than the input text file.
I suspect the presence of new line characters \n in the data, but due to the huge volume of data I'm unable to manually check and remove these characters from the data file.
Is there any way I can keep my table count exactly equal to the file count? Can I make my load query treat those newline characters as data instead of record delimiters? Or is there some other issue?
If you are sqooping the input into HDFS/Hive, then you can use the --hive-drop-import-delims or --hive-delims-replacement options of Sqoop.
Hive will have problems using Sqoop-imported data if your database's rows contain string fields that have Hive's default row delimiters (\n and \r characters) or column delimiters (\01 characters) present in them.
You can use the --hive-drop-import-delims option to drop those characters on import to give Hive-compatible text data. Alternatively, you can use the --hive-delims-replacement option to replace those characters with a user-defined string on import to give Hive-compatible text data.
These options should only be used if you use Hive's default delimiters and should not be used if different delimiters are specified.
Sqoop User Guide
Alternatively, if you are copying files onto HDFS using some other method, then just run a replace script/command over the files.
It was as simple as running a simple Unix command to clean the source data.
sed -i 's/\r//g'
After applying this command on the dataset to remove the carriage returns, I was able to load the Hive table with the expected record count.

How do I ignore brackets when loading external table in HIVE

I'm trying to load the output of a Pig script as an external table in HIVE. Pig enclosed each row in brackets () (tuples?) like this:
(1,2,3,a)
(2,4,5,b)
(4,2,6,c)
and I can't find a way to tell HIVE to ignore those brackets, which results in null values for the first column since it is actually an integer.
Any thoughts on how to proceed?
I know I can use a FLATTEN command in PIG but I would also like to learn how to deal with these files directly from HIVE.
There is no way to do this in one step. You'd have to have another step, be it the use of flatten in Pig or an extra Hive INSERT INTO.
In Hive you could use split(string field, string pattern) several times to read from your external table and create the columns you want and then load that into a new table. However I'd always lean towards having Pig output into the format you want, unless something else is reading from this file that expects the data in that format. It will save an expensive re-read of all your data.
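A rough sketch of that split()-based route (the staging table pig_raw and its column are hypothetical; table A is the target table defined in the next answer) could read each raw Pig line into a single STRING column and strip the brackets while inserting:

-- one raw line per row, e.g. "(1,2,3,a)"; the default field delimiter
-- (\001) never appears in the data, so the whole line lands in one column
CREATE EXTERNAL TABLE pig_raw (line STRING)
LOCATION '/user/hdfs/Adata';

INSERT INTO TABLE A
SELECT
  CAST(split(regexp_replace(line, '^\\(|\\)$', ''), ',')[0] AS INT),
  CAST(split(regexp_replace(line, '^\\(|\\)$', ''), ',')[1] AS INT),
  CAST(split(regexp_replace(line, '^\\(|\\)$', ''), ',')[2] AS INT),
  split(regexp_replace(line, '^\\(|\\)$', ''), ',')[3]
FROM pig_raw;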
As Ben said, there is no way to do it in one step, but you can do it by creating one more temp table in Hive.
Not sure if I am making it more complicated with one more table, but it worked for me.
create external table A_TEMP (first string,second int,third int,fourth string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION '/user/hdfs/Adata';
Place your data under the 'Adata' folder.
create external table A (first int,second int,third int,fourth string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION '/user/hdfs/Afinaldata';
Now let's insert the data:
insert into table A
select cast(substr(first, 2, length(first) - 1) as int), second, third, substr(fourth, 1, length(fourth) - 1) from A_TEMP;
I know the type casting will hit performance, but for the given scenario this is the best I could come up with.

Simple Hive query is empty

I have a CSV log file. After loading it into Hive using this statement:
CREATE EXTERNAL TABLE iprange(id STRING, ip STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\,' STORED AS TEXTFILE LOCATION '/user/hadoop/expandediprange/';
I want to perfom a simple query like:
select * from iprange where ip="0.0.0.2";
But I get an empty result.
I'm running Hive on HDFS, should I use HBase?
My conclusion is that it has something to do with the table size. The log file is 160 MB, and the generated table in Hive has 8 million rows. If I create a smaller file myself and load it into Hive, it works.
Any idea of what is wrong?
Edit: I forgot to say that it's running on Amazon Elastic MapReduce using a small instance.
I found the problem. It was not a Hive issue really. I'm using the output of a Hadoop job as input, and in that job I was writing the output in the key, leaving the value as an empty string:
context.write(new Text(id + "," + ip), new Text(""));
The problem is that Hadoop inserts a tab character by default between the key and the value, and as the field is a string it took the tab as well, so I had a trailing tab in every line. I discovered it using Pig, as it wraps its output in ().
The solution for me was to set the separator to another character: since I have only two fields, I write one in the key and the other in the value, and set the separator to ",":
conf.set("mapred.textoutputformat.separator", ",");
Maybe it's possible to trim these things in Hive.
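For instance, a trailing tab could be stripped on the Hive side with something like this sketch (assuming the tab ended up appended to the ip column):

-- remove a trailing tab from the stored value before comparing
SELECT * FROM iprange
WHERE regexp_replace(ip, '\t$', '') = '0.0.0.2';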
