I am trying to load a text file that uses hexadecimal 19 (0x19) as the field delimiter in Hive.
The table is not loading properly: all of the columns end up in the first column of the table.
I have tried different values such as \u0019 and ^Y in FIELDS TERMINATED BY.
Any thoughts on this would be really appreciated.
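One thing that may work (I have not verified it against this data) is to express the delimiter as an octal escape in the DDL, since Hive unescapes sequences like the default '\001' and hex 0x19 is octal 031; the table and column names below are just placeholders:
CREATE TABLE sample_table (col1 STRING, col2 STRING, col3 STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\031'
STORED AS TEXTFILE;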
Related
I am exporting a table from Teradata to Hive. The table in Teradata has an address field that contains newline characters (\n). Initially I export the table from Teradata to a mounted filesystem path, and then I load the table into Hive. The record counts do not match between the Teradata table and the Hive table because of the newline characters present in the data.
NOTE: I don't want to handle this through Sqoop; I want to handle the newline characters while loading into Hive from the local path.
I got this to work by creating an external table with the following options:
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001'
ESCAPED BY '\\'
STORED AS TEXTFILE;
Then I added a partition pointing to the directory that contains the data files (my table uses partitions), i.e.
ALTER TABLE STG_HOLD_CR_LINE_FEED ADD PARTITION (part_key='part_week53') LOCATION '/ifs/test/schema.table/staging/';
NOTE: Be sure that when creating your data file you use '\' as the escape character.
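Put together, a complete version of that DDL might look like the sketch below (the column list is hypothetical; the table name and partition key are taken from the ALTER TABLE statement above), followed by the ADD PARTITION statement:
CREATE EXTERNAL TABLE STG_HOLD_CR_LINE_FEED (col1 STRING, col2 STRING)
PARTITIONED BY (part_key STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001'
ESCAPED BY '\\'
STORED AS TEXTFILE;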
The LOAD DATA command in Hive only copies the data files directly into the HDFS table location.
The only reason Hive would split on a newline is if you defined the table as stored as TEXT, which by default uses newlines as record separators, not field separators.
To redefine the table you need something like
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',' ESCAPED BY 'x'
LINES TERMINATED BY 'y'
where x and y are the escape character for fields containing newlines and the record delimiter, respectively.
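As a concrete (untested) sketch of that DDL, using ',' as the field delimiter and '\' as the escape character; note that, as a later answer points out, Hive text tables effectively only accept '\n' for LINES TERMINATED BY:
CREATE TABLE my_table (id INT, description STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;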
I'm using a file exported from SQL Server as the input for my Hive table (which has 40 columns). There are around 6 million rows in the data file, but when I load that file into the Hive table, the record count is higher than the row count in the file: the table has 15 more records than the input text file.
I suspect the presence of newline characters (\n) in the data, but due to the huge volume of data I'm unable to manually check for and remove these characters from the data file.
Is there any way I can make the table count exactly equal to the file count? Can I make my load query treat those newline characters as data instead of as record delimiters? Or is there some other issue?
If you are sqooping the input into HDFS/Hive then you can use the --hive-drop-import-delims or --hive-delims-replacement options of Sqoop.
Hive will have problems using Sqoop-imported data if your database's rows contain string fields that have Hive's default row delimiters (\n and \r characters) or column delimiters (\01 characters) present in them.
You can use the --hive-drop-import-delims option to drop those characters on import to give Hive-compatible text data. Alternatively, you can use the --hive-delims-replacement option to replace those characters with a user-defined string on import to give Hive-compatible text data.
These options should only be used if you use Hive's default delimiters and should not be used if different delimiters are specified.
Sqoop User Guide
Alternatively, if you are copying the files onto HDFS using some other method, just run a replace script/command over the files.
It was as simple as running a simple Unix command to clean the source data:
sed -i 's/\r//g' <your_data_file>
After applying this command to the data file to remove carriage returns, I was able to load the Hive table with the expected record count.
I loaded a text file into a Hive external table. That text file uses / as the delimiter to separate columns. Additionally, one of the columns contains newline characters. Because of that, there is a mismatch in the data stored in the external table. In my case the unique key is row_id, which contains values like 1_234 (row_id is numeric). But because of the newline characters in the text file, some rows have text in row_id.
Is there any way to delete those rows in Hive, or how can I remove the newline characters from the text file in HDFS?
You will have to write a Hadoop job (streaming is an option) to clean your data before loading it into Hive.
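As an alternative to cleaning the data outside Hive, a rough (untested) HiveQL sketch for dropping the corrupted rows is to copy only the rows whose row_id matches the expected pattern into a clean table with the same schema, created beforehand; the table names and the exact pattern are assumptions based on the question:
INSERT OVERWRITE TABLE my_clean_table
SELECT *
FROM my_external_table
WHERE row_id RLIKE '^[0-9]+_[0-9]+$';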
I'm trying to load the output of a Pig script as an external table in Hive. Pig enclosed each row in parentheses () (tuples?) like this:
(1,2,3,a)
(2,4,5,b)
(4,2,6,c)
and I can't find a way to tell Hive to ignore those parentheses, which results in NULL values for the first column since it is actually an integer.
Any thoughts on how to proceed?
I know I can use the FLATTEN command in Pig, but I would also like to learn how to deal with these files directly from Hive.
There is no way to do this in one step. You'd have to have another step, be it the use of flatten in Pig or an extra Hive INSERT INTO.
In Hive you could use split(string field, string pattern) several times to read from your external table and create the columns you want, and then load that into a new table. However, I'd always lean towards having Pig output the format you want, unless something else reads this file and expects the data in that format; it will save an expensive re-read of all your data.
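As an illustrative (untested) sketch of that split() approach, assuming the raw Pig output is first exposed as a one-column external table of whole lines, and using regexp_replace to strip the parentheses before splitting; every table and column name here is made up:
CREATE EXTERNAL TABLE pig_raw (line STRING)
LOCATION '/user/hdfs/pig_output';

INSERT INTO TABLE final_table
SELECT
  CAST(split(regexp_replace(line, '[()]', ''), ',')[0] AS INT),
  CAST(split(regexp_replace(line, '[()]', ''), ',')[1] AS INT),
  CAST(split(regexp_replace(line, '[()]', ''), ',')[2] AS INT),
  split(regexp_replace(line, '[()]', ''), ',')[3]
FROM pig_raw;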
As Ben said, there is no way to do this in one step, but you can do it by creating one more temporary table in Hive.
Not sure if I am making it more complicated with one more table, but it worked for me.
create external table A_TEMP (first string,second int,third int,fourth string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION '/user/hdfs/Adata';
Place your data under the 'Adata' folder, then create the final table:
create external table A (first int,second int,third int,fourth string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION '/user/hdfs/Afinaldata';
Now let's insert the data:
insert into table A
select cast(substr(first, 2, length(first) - 1) as int), second, third, substr(fourth, 1, length(fourth) - 1) from A_TEMP;
I know the type casting will hit performance, but for the given scenario this is the best I could come up with.
I have created a table in hive as
Create table(id int, Description String)
My data looks something like the following:
1|This will return corrupt data since there is a ',' in the first string.
some text
Change the data
2|There is prob in reading data
sometext
After the data is loaded into Hive, since the default line terminator is \n, the description column cannot be read correctly by Hive, and hence it displays a NULL value. Can anyone suggest how to handle the newlines before loading into Hive?
I know this question is old, but you have a couple of options. You can't control this with FIELDS TERMINATED BY, because that only controls what terminates the fields, not the records. Records in Hive are hard-coded to be terminated by the newline character (even though there is a LINES TERMINATED BY clause, it is not implemented).
1. Write a custom InputFormat that uses a RecordReader that understands non-newline-delimited records. Look at the code for LineReader/LineRecordReader and TextInputFormat.
2. Use a format other than text/ASCII, like Parquet. I would recommend this regardless, as text is probably the worst format you can store data in anyway.
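For the second option, a minimal (hypothetical) sketch of moving data into a Parquet-backed table; the table and column names are placeholders, and note that the rows still have to be parsed or cleaned correctly before this conversion helps:
CREATE TABLE my_table_parquet (id INT, description STRING)
STORED AS PARQUET;

INSERT INTO TABLE my_table_parquet
SELECT id, description FROM my_table_text;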
Try adding the property below in hive-site.xml, or just set it temporarily at the Hive session level:
hive.query.result.fileformat=SequenceFile
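For the session-level route, this should just be a SET command in the Hive shell (untested for this particular scenario):
SET hive.query.result.fileformat=SequenceFile;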
By default Hive uses the newline character ('\n') as the row delimiter.
You can change the field delimiter using:
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';