SQLLDR with SEQUENCE - Oracle

I am using sqlldr to upload data from a CSV file into an Oracle table.
For one column I need to use a sequence.
My control file looks like this:
LOAD DATA
INFILE "D:\WORKER\ADS4014_GeneralJobWorkers_1426054269727.csv"
BADFILE dataFile.bad
APPEND INTO TABLE GJobWorkers
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(JobId,UserId,EmpId,ChkInTme,ChkOutTme,tDate,TotalHrs,JobCode,Modified_Time,GJ_SEQ "jobs_GJ.nextval")
But I get this error in the log file:
Error on table GJobWorkers, column GJ_SEQ.
Column not found before end of logical record (use TRAILING NULLCOLS)
Can anyone help me with this?
Thanks in advance.
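The message usually means SQL*Loader ran out of fields in the record: a column given only a SQL string, like GJ_SEQ "jobs_GJ.nextval", is still mapped positionally to a field in the data file, so the loader looks for a tenth CSV field that does not exist. One fix is to declare the column with the EXPRESSION keyword so it is never read from the record. A minimal sketch of the control file, reusing the names from the question (untested against your data):
LOAD DATA
INFILE "D:\WORKER\ADS4014_GeneralJobWorkers_1426054269727.csv"
BADFILE dataFile.bad
APPEND INTO TABLE GJobWorkers
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(JobId,UserId,EmpId,ChkInTme,ChkOutTme,tDate,TotalHrs,JobCode,Modified_Time,
GJ_SEQ EXPRESSION "jobs_GJ.nextval")
Alternatively, SQL*Loader's own SEQUENCE(MAX,1) parameter can generate the value without reading a field, if a database sequence is not strictly required.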

Related

How to add a file to Hive

I have a file where all the column delimiters are shown in Notepad++ as EOT, SOH, ETX, ACK, BEL, BS, ENQ.
I know the schema of the table, but I am totally new to these technologies and I cannot load the file into the table. Can I do it through a UI, as with a CSV file, and if yes, with what delimiter?
Thank you in advance for your help.
It is pretty easy; as you have mentioned, the file is ',' separated.
Let's create a simple table with one column:
CREATE TABLE test1 (col1 STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',';
Note the FIELDS TERMINATED BY ',' clause: we have specified that fields are separated by ','. If the columns are separated by tabs instead, change it to '\t'.
Once the table is created, we can load the file using the commands below.
If File is on local file system
LOAD DATA LOCAL INPATH '<complete_local_file_path>' INTO table test1;
If File is in HDFS
LOAD DATA INPATH '<complete_HDFS_file_path>' INTO table test1;
Hive is just an abstraction layer over HDFS, so you would add the file to some folder in HDFS and then build an EXTERNAL TABLE over top of it:
CREATE EXTERNAL TABLE name(...)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/path/to/folder/';
Can I do it through a UI, as with a CSV file?
If you install Hue, then you can.
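On the delimiter question: the characters Notepad++ shows (SOH, EOT, and so on) are ASCII control characters, and SOH (\001) is in fact Hive's default field delimiter. They can be named with octal escapes in the DDL. A minimal sketch, assuming SOH-separated fields and two hypothetical string columns:
CREATE TABLE ctrl_test (col1 STRING, col2 STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\001'
STORED AS TEXTFILE;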

How to handle newline characters in Hive?

I am exporting a table from Teradata to Hive. The table in Teradata has an address field that contains newline characters (\n). I first export the table from Teradata to a mounted filesystem path, and then I load the table into Hive. The record counts do not match between the Teradata table and the Hive table, because the newline characters split records in Hive.
NOTE: I don't want to handle this through Sqoop while bringing over the data; I want to handle the newline characters while loading into Hive from the local path.
I got this to work by creating an external table with the following options:
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001'
ESCAPED BY '\\'
STORED AS TEXTFILE;
Then I created a partition pointing to the directory that contains the data files (my table uses partitions), i.e.:
ALTER TABLE STG_HOLD_CR_LINE_FEED ADD PARTITION (part_key='part_week53') LOCATION '/ifs/test/schema.table/staging/';
NOTE: Be sure that when creating your data file you use '\' as the escape character.
The LOAD DATA command in Hive only copies the data directly into the HDFS table location.
The only reason Hive would split on a newline is if you defined the table simply as STORED AS TEXTFILE, which by default uses newlines as record separators, not field separators.
To redefine the table, you need something like:
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',' ESCAPED BY 'x'
LINES TERMINATED BY 'y'
where x is the escape character used in fields containing newlines and y is the record delimiter.
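As a concrete, hedged instantiation, with backslash as the escape character and newline as the record delimiter (note that many Hive versions accept only '\n' for LINES TERMINATED BY), assuming a hypothetical addresses table:
CREATE TABLE addresses (id INT, address STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',' ESCAPED BY '\\'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;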

Hexadecimal as delimiter in a text file

I am trying to load a text file that uses hexadecimal 19 as its delimiter into Hive.
The table is not loading properly: all of the fields end up in the first column of the table.
I have tried different values, such as \u0019 and ^Y, as FIELDS TERMINATED BY.
Any thoughts on this would be really appreciated.
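One thing worth trying: Hive's DDL string literals accept octal escapes, and hexadecimal 19 is decimal 25, i.e. octal 31, so the delimiter would be written as '\031'. A minimal sketch with hypothetical columns:
CREATE TABLE hex19_test (col1 STRING, col2 STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\031'
STORED AS TEXTFILE;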

How do I ignore brackets when loading an external table in Hive

I'm trying to load the output of a Pig script as an external table in Hive. Pig encloses each row in parentheses (tuples?), like this:
(1,2,3,a)
(2,4,5,b)
(4,2,6,c)
and I can't find a way to tell Hive to ignore those parentheses, which results in null values for the first column, since it is actually an integer.
Any thoughts on how to proceed?
I know I could use a FLATTEN command in Pig, but I would also like to learn how to deal with these files directly from Hive.
There is no way to do this in one step. You'd need another step, be it the use of FLATTEN in Pig or an extra Hive INSERT INTO.
In Hive you could use split(string field, string pattern) several times to read from your external table and create the columns you want, and then load that into a new table. However, I'd always lean towards having Pig output the format you want, unless something else that reads this file expects the data in that format; it will save an expensive re-read of all your data.
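A hedged sketch of that split() approach, assuming a hypothetical one-column staging table a_raw(line STRING) over the Pig output and a target table a(first INT, second INT, third INT, fourth STRING):
INSERT INTO TABLE a
SELECT
cast(regexp_replace(split(line, ',')[0], '\\(', '') as int),
cast(split(line, ',')[1] as int),
cast(split(line, ',')[2] as int),
regexp_replace(split(line, ',')[3], '\\)', '')
FROM a_raw;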
As Ben said, there is no way to do it in one step, but you can do it by creating one more temporary table in Hive.
I'm not sure if I am making it more complicated with one more table, but it worked for me.
create external table A_TEMP (first string,second int,third int,fourth string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION '/user/hdfs/Adata';
Place your data under the 'Adata' folder.
create external table A (first int,second int,third int,fourth string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION '/user/hdfs/Afinaldata';
Now let's insert the data:
insert into table A
select cast(substr(first, 2) as int),second,third,substr(fourth, 1, length(fourth) - 1) from A_TEMP;
I know the type casting will hurt performance, but for the given scenario this is the best I could come up with.

FAILED: ParseException: cannot recognize input near 'exchange' 'string' ',' in column specification

I am using the latest AWS Hive version, 0.13.0.
FAILED: ParseException: cannot recognize input near 'exchange' 'string' ',' in column specification
I am getting the above error when I run the CREATE TABLE query below.
CREATE EXTERNAL TABLE test (
foo string,
exchange string,
bar string) ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/home/hadoop/test/';
If I rename exchange to something like xch, the table is created successfully. Any reason?
You are getting an error because exchange is a keyword: EXCHANGE is used to move the data in a partition from one table to another table that has the same schema but does not already have that partition. For details, see the Hive Language Manual and HIVE-4095.
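If you'd rather keep the column name, Hive 0.13 also supports quoting identifiers with backticks (with hive.support.quoted.identifiers at its default of column), so the original DDL should work as:
CREATE EXTERNAL TABLE test (
foo string,
`exchange` string,
bar string) ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/home/hadoop/test/';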
Try it like this after the CREATE statement:
LOAD DATA LOCAL INPATH '/home/cloudera/Amit/xyz.csv' OVERWRITE INTO TABLE tabele_name;
