I'm trying to load data from an XML file into a table. I get the errors below; please help me out.
Table:
CREATE TABLE TEST_XML
(FILL CHAR(30),
XMLDATA CLOB);
Here is my control file
LOAD DATA
INFILE *
TRUNCATE INTO TABLE TEST_XML XMLType(XMLDATA)
FIELDS ( FILL FILLER CHAR(100), XMLDATA LOBFILE(CONSTANT test_file.xml) TERMINATED BY EOF )
BEGINDATA 0
I get the below error:
Table TEST_XML, loaded from every logical record.
Insert option in effect for this table: TRUNCATE

Column Name                    Position   Len  Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
FILL                                FIRST   100           CHARACTER
  (FILLER FIELD)
XMLDATA                           DERIVED     *  EOF      CHARACTER
    Static LOBFILE.  Filename is test_file.xml

Record 1: Rejected - Error on table TEST_XML.
ORA-01008: not all variables bound
The problem is invalid syntax in the control file. The order of the keywords matters, and so does what follows BEGINDATA: the data must start on the line after the keyword. This version works:
LOAD DATA
INFILE *
INTO TABLE TEST_XML
truncate
FIELDS
( FILL FILLER CHAR(100)
,XMLDATA LOBFILE(CONSTANT test_file.xml) TERMINATED BY EOF )
BEGINDATA
0
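To sanity-check the load afterwards, a quick query like this (just a sketch; it only confirms the CLOB was populated from test_file.xml) should return one row showing the length of the loaded XML:

select fill, dbms_lob.getlength(xmldata) as xml_length
from test_xml;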
I'm entering values from a CSV file into an Oracle table using a SQL*Loader script. The table has fields with NOT NULL constraints, and in my CSV file the corresponding field is "". I would like to put a blank string into the Oracle table when that happens.
This is my control file:
LOAD DATA
infile 'F:\tar.csv'
REPLACE
INTO TABLE tar
fields terminated by ',' optionally enclosed by '"'
TRAILING NULLCOLS
(IDTAR ,
DATABACKUP DATE "YYYY-MM-DD",
PAESE ,
R_ELEM NULLIF (R_ELEM=BLANKS))
and this is the error in the log file:
ORA-01400: cannot insert NULL into ("MY_SCHEMA"."TAR"."PAESE")
How can I avoid the error by supplying a different value?
You can apply an SQL operator, such as NVL(:PAESE, 'XXX'). Notice the colon before the reference to the field name. In situ:
LOAD DATA
infile 'gian.csv'
REPLACE
INTO TABLE tar
fields terminated by ',' optionally enclosed by '"'
TRAILING NULLCOLS
(
IDTAR,
DATABACKUP DATE "YYYY-MM-DD",
PAESE "NVL(:PAESE, 'XXX')",
R_ELEM NULLIF (R_ELEM=BLANKS)
)
With a dummy table:
create table tar (
idtar number,
databackup date,
paese varchar2(10) not null,
r_elem varchar2(10)
);
and CSV, where the 3rd and 4th lines have trailing spaces for the nullif() clause:
1,2017-08-01,A,B
2,2017-08-02,C,
3,2017-08-03,,
4,2017-08-04,"",
then running with that control file gets:
SQL*Loader: Release 11.2.0.4.0 - Production on Fri Aug 4 19:39:23 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Commit point reached - logical record count 4
and the log says:
...
Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
IDTAR FIRST * , O(") CHARACTER
DATABACKUP NEXT * , O(") DATE YYYY-MM-DD
PAESE NEXT * , O(") CHARACTER
SQL string for column : "NVL(:PAESE, 'XXX')"
R_ELEM NEXT * , O(") CHARACTER
NULL if R_ELEM = BLANKS
Table TAR:
4 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
...
Querying the table shows all four rows were loaded:
set null "<null>"
select * from tar;
IDTAR DATABACKU PAESE R_ELEM
---------- --------- ---------- ----------
1 01-AUG-17 A B
2 02-AUG-17 C <null>
3 03-AUG-17 XXX <null>
4 04-AUG-17 XXX <null>
Obviously replace 'XXX' with the actual default string you want to use. You said 'a blank string', so you could use "NVL(:PAESE, ' ')" to insert a single space character for instance. You can't use an empty string though, as that is the same as null as far as Oracle is concerned.
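A quick way to see that last point, using the dummy table above (a sketch to run in SQL*Plus or similar):

insert into tar (idtar, databackup, paese) values (5, sysdate, '');
-- fails with ORA-01400: cannot insert NULL into ("MY_SCHEMA"."TAR"."PAESE"),
-- because Oracle treats the empty string literal as NULL

insert into tar (idtar, databackup, paese) values (5, sysdate, ' ');
-- succeeds: a single space is a real, non-null value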
I have an external table which reads from a CSV file and is failing on certain rows.
External table definition:
E_ID NUMBER
A_IND VARCHAR2 (3 Byte)
B_IND VARCHAR2 (3 Byte)
E_DATE DATE
E_AMT NUMBER
F_DATE DATE
D_E_DATE DATE
I see the following info from a log file generated when I select * from the external table.
KUP-05004: Warning: Intra source concurrency disabled because parallel select was not requested.
Field Definitions for table EXTERNAL_TABLE_XTL
Record format DELIMITED BY NEWLINE
Data in file has same endianness as the platform
Rows with all null fields are accepted
Fields in Data Source:

  E_ID                        CHAR (255)
    Terminated by ","
    Enclosed by """ and """
    Trim whitespace same as SQL Loader
  A_IND                       CHAR (255)
    Terminated by ","
    Enclosed by """ and """
    Trim whitespace same as SQL Loader
  B_IND                       CHAR (255)
    Terminated by ","
    Enclosed by """ and """
    Trim whitespace same as SQL Loader
  E_DATE                      CHAR (10)
    Date datatype DATE, date mask MM/DD/YYYY
    Terminated by ","
    Enclosed by """ and """
    Trim whitespace same as SQL Loader
  E_AMT                       CHAR (255)
    Terminated by ","
    Enclosed by """ and """
    Trim whitespace same as SQL Loader
  F_DATE                      CHAR (10)
    Date datatype DATE, date mask MM/DD/YYYY
    Terminated by ","
    Enclosed by """ and """
    Trim whitespace same as SQL Loader
  D_E_DATE                    CHAR (10)
    Date datatype DATE, date mask MM/DD/YYYY
    Terminated by ","
    Enclosed by """ and """
    Trim whitespace same as SQL Loader
KUP-04021: field formatting error for field D_E_DATE
KUP-04026: field too long for datatype
KUP-04101: record 56 rejected in file /home/TEST.csv
KUP-04021: field formatting error for field D_E_DATE
KUP-04026: field too long for datatype
KUP-04101: record 61 rejected in file /home/TEST.csv
KUP-04021: field formatting error for field D_E_DATE
KUP-04026: field too long for datatype
KUP-04101: record 70 rejected in file /home/TEST.csv
The file was transferred to the server via FileZilla. From reading other posts, I thought maybe the file had been transferred in binary mode (it was originally on the Auto setting) and some non-printing characters had come in. So I tried transferring it with the ASCII setting, but that did not work. Then I tried deleting one of the lines that caused an error and retyping it manually. That did not work either.
Failed sample data:
5560000,N,Y,,24950,10/12/2011,10/27/2011
5550001,Y,Y,11/26/2013,73813,11/18/2013,11/29/2013
5560002,Y,Y,11/6/2015,22041.28,11/6/2015,11/18/2015
5560003,Y,Y,10/10/2012,2768.66,10/10/2012,10/24/2012
5560004,N,Y,,29750,9/30/2013,10/15/2013
5560005,Y,Y,10/8/2015,76474.84,10/8/2015,10/21/2015
5560006,N,Y,,63879.28,11/16/2011,11/30/2011
5560007,N,Y,,100000,11/14/2013,11/21/2013
Successful sample data:
5560008,Y,N,11/1/2010,,,
5550009,Y,N,,,,
5550010,N,N,,,,
5550011,N,N,,,,
5560012,Y,Y,2/12/2016,50000,2/12/2016,2/23/2016
5560013,Y,N,7/22/2011,,,
My first assumption was that, for some reason, double-digit months were not being accepted for the field D_E_DATE. Please note this load is successful in the dev environment but not in production, and both databases are the same version.
The following is working fine for me.
Table Definition:
CREATE TABLE my_data (
E_ID NUMBER,
A_IND VARCHAR2 (3 Byte),
B_IND VARCHAR2 (3 Byte),
E_DATE DATE,
E_AMT NUMBER,
F_DATE DATE,
D_E_DATE DATE
)
ORGANIZATION EXTERNAL (
TYPE ORACLE_LOADER
DEFAULT DIRECTORY MY_DIR
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
(
E_ID,
A_IND,
B_IND,
E_DATE date 'MM/DD/YYYY',
E_AMT,
F_DATE date 'MM/DD/YYYY',
D_E_DATE date 'MM/DD/YYYY'
)
)
LOCATION ('data.txt')
);
Sample Data:
[oracle@ora12c Desktop]$ cat data.txt
5560000,N,Y,,24950,10/12/2011,10/27/2011
5550001,Y,Y,11/26/2013,73813,11/18/2013,11/29/2013
5560002,Y,Y,11/6/2015,22041.28,11/6/2015,11/18/2015
5560003,Y,Y,10/10/2012,2768.66,10/10/2012,10/24/2012
5560004,N,Y,,29750,9/30/2013,10/15/2013
5560005,Y,Y,10/8/2015,76474.84,10/8/2015,10/21/2015
5560006,N,Y,,63879.28,11/16/2011,11/30/2011
5560007,N,Y,,100000,11/14/2013,11/21/2013
Output:
SQL> select * from my_data;
E_ID A_I B_I E_DATE E_AMT F_DATE D_E_DATE
---------- --- --- --------- ---------- --------- ---------
5560000 N Y 24950 12-OCT-11 27-OCT-11
5550001 Y Y 26-NOV-13 73813 18-NOV-13 29-NOV-13
5560002 Y Y 06-NOV-15 22041.28 06-NOV-15 18-NOV-15
5560003 Y Y 10-OCT-12 2768.66 10-OCT-12 24-OCT-12
5560004 N Y 29750 30-SEP-13 15-OCT-13
5560005 Y Y 08-OCT-15 76474.84 08-OCT-15 21-OCT-15
5560006 N Y 63879.28 16-NOV-11 30-NOV-11
5560007 N Y 100000 14-NOV-13 21-NOV-13
8 rows selected.
The answer to this question was found in the following thread:
Oracle external table date field - works in one DB and not in another
Transferring the same file from the dev server to the prod server seems to have resolved the issue. It's odd; I wish I knew exactly why this occurred and what the underlying fix was.
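For what it's worth, one plausible explanation (an assumption, not confirmed in the thread) is that the production copy of the file had Windows-style CRLF line endings: a trailing carriage return would make the last 10-character date field 11 bytes long, which is consistent with KUP-04026 being raised only for D_E_DATE, the final field. If that were the cause, delimiting records by CRLF explicitly would absorb the extra byte. A sketch, mirroring the table above (my_data_crlf and MY_DIR are illustrative names):

CREATE TABLE my_data_crlf (
  E_ID NUMBER,
  A_IND VARCHAR2(3),
  B_IND VARCHAR2(3),
  E_DATE DATE,
  E_AMT NUMBER,
  F_DATE DATE,
  D_E_DATE DATE
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY MY_DIR
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY '\r\n'  -- consume the carriage return as part of the record terminator
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (
      E_ID,
      A_IND,
      B_IND,
      E_DATE date 'MM/DD/YYYY',
      E_AMT,
      F_DATE date 'MM/DD/YYYY',
      D_E_DATE date 'MM/DD/YYYY'
    )
  )
  LOCATION ('data.txt')
);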
I have a question. I have a control file that works fine when I run it from a Windows client. However, when I run it directly on Linux, it reports that the load completed, but when I look at my Oracle data there is NO DATA, and there are even bad records. Below is the control file that works well in Windows but fails in Linux.
NOTE: The control file works if I remove the fields with the string-to-number and string-to-date conversions.
Control file
load data
infile 'HOME/INPUT/FILEA.dat'
badfile 'HOME/BAD/FILEA.bad'
discardfile 'HOME/DIS/FILEA.dsc'
truncate
into table TEST
fields terminated by '|'
trailing nullcols
( ABCcode CHAR(11),
ABCID CHAR(6),
ABC_SEQNO "to_number(:ABC_SEQNO,'999999')",
PSNO "to_number(:PSNO,'99999999999.999')",
ABDF CHAR(1),
ABCFI CHAR(1),
ABC_DATE NULLIF ABC_DATE="00000000" "to_date(:ABC_DATE, 'YYYYMMDD')",
XZY_date NULLIF XZY_date="00000000" "to_date(:XZY_date, 'YYYYMMDD')",
DESC CHAR(1))
Any help or ideas to get this code to run in Linux will be appreciated
Notes about the logfile: The logfile had the following
ORA-00604: error occurred at recursive SQL level 1
ORA-12899: value too large for column "ABCschema"."TEST"."ABC_DATE" (actual: 9, maximum: 8)
Also, the date conversion had the following
NULL if ABC_DATE = 0X3030303030303030(character '00000000')
SQL string for column : "to_date(:ABC_DATE, 'YYYYMMDD')"
Your TEST table has the ABC_DATE column defined as VARCHAR2(8), not as a DATE.
If I create a table as:
create table test (
ABCcode VARCHAR2(11),
ABCID VARCHAR2(6),
ABC_SEQNO NUMBER,
PSNO NUMBER,
ABDF VARCHAR2(1),
ABCFI VARCHAR2(1),
ABC_DATE DATE,
XZY_date DATE,
"DESC" VARCHAR2(1)
);
and have a data file with:
A|B|1|2.3|C|D|20140217|20140218|E
then it loads fine. If I recreate the table as:
create table test (
ABCcode VARCHAR2(11),
ABCID VARCHAR2(6),
ABC_SEQNO NUMBER,
PSNO NUMBER,
ABDF VARCHAR2(1),
ABCFI VARCHAR2(1),
ABC_DATE VARCHAR2(8),
XZY_date DATE,
"DESC" VARCHAR2(1)
);
... then the same control file and data file now give me:
Record 1: Rejected - Error on table TEST, column ABC_DATE.
ORA-12899: value too large for column "<schema>"."TEST"."ABC_DATE" (actual: 9, maximum: 8)
You are converting the string value to a date, but then you're doing an implicit conversion back to a string when it actually inserts the data into the VARCHAR2 column. When it does that it's using your NLS_DATE_FORMAT settings, and the error I got was from having that set to DD-MON-RR.
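To illustrate (a sketch to run in SQL*Plus; not taken from the original post): with the date format set to 'DD-MON-RR', the round-trip from string to DATE and back produces a 9-character string, one more than the VARCHAR2(8) column allows:

select to_char(to_date('20140217', 'YYYYMMDD'), 'DD-MON-RR') as str,
       length(to_char(to_date('20140217', 'YYYYMMDD'), 'DD-MON-RR')) as len
from dual;

-- STR        LEN
-- ---------  ---
-- 17-FEB-14    9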
You have three options really. Either modify your table to have actual DATE columns; or change the control file so it just inserts the plain text value and doesn't do the date conversion at all; or massage your environment so the conversion back to a string gets the format you want the string to be.
Only the first one is really sensible - if it's a date value, always store it as a DATE, never as a string.
The 0X30... thing isn't a problem, that's just showing the internal representation it's using.
I'm trying to load some data using sql loader. Here is the top of my control/data file:
LOAD DATA
INFILE *
APPEND INTO TABLE economic_indicators
FIELDS TERMINATED BY ','
(ASOF_DATE DATE 'DD-MON-YY',
VALUE FLOAT EXTERNAL,
STATE,
SERIES_ID INTEGER EXTERNAL,
CREATE_DATE DATE 'DD-MON-YYYY')
BEGINDATA
01-Jan-79,AL,67.39940538,1,23-Jun-2009
... lots of other data lines.
The problem is that sql loader won't recognize the data types I'm specifying. This is the log file:
Table ECONOMIC_INDICATORS, loaded from every logical record.
Insert option in effect for this table: APPEND
Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
ASOF_DATE FIRST * , DATE DD-MON-YY
VALUE NEXT * , CHARACTER
STATE NEXT * , CHARACTER
SERIES_ID NEXT * , CHARACTER
CREATE_DATE NEXT * , DATE DD-MON-YYYY
value used for ROWS parameter changed from 10000 to 198
Record 1: Rejected - Error on table ECONOMIC_INDICATORS, column VALUE.
ORA-01722: invalid number
... lots of similar errors, expected if trying to insert character data into a numeric column.
I've tried no datatype spec, all other numeric specs, and always the same issue. Any ideas?
Also, any ideas on why it's changing the Rows parameter?
From your example, SQL*Loader is trying to evaluate the string "AL" as a number, which produces exactly the error you show. In the sample data, the value that looks like a decimal number is in the third position, not the second as specified in the column list, so the field order in the control file does not match the data. (The change to the ROWS parameter is harmless: SQL*Loader reduces it automatically so that the row bind array fits within the bind-size limit.)
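Assuming the data layout is authoritative and the column list is what's wrong, swapping VALUE and STATE should fix the rejections (a sketch based on the control file above):

LOAD DATA
INFILE *
APPEND INTO TABLE economic_indicators
FIELDS TERMINATED BY ','
(ASOF_DATE DATE 'DD-MON-YY',
STATE,
VALUE FLOAT EXTERNAL,
SERIES_ID INTEGER EXTERNAL,
CREATE_DATE DATE 'DD-MON-YYYY')
BEGINDATA
01-Jan-79,AL,67.39940538,1,23-Jun-2009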