SQL*Loader ORA-01722 - Oracle

Below are my table structure, .CTL file, and .CSV file. While loading data I always get an error on the first row; the rest of the data loads fine. If I leave a completely blank line as the first record, all the data gets inserted.
Can you please help me understand why I am getting an error on the first record?
TABLE_STRUCTURE
ING_DATA
(
ING_COMPONENT_ID NUMBER NOT NULL,
PARENT_ING_ID NUMBER NOT NULL,
CHILD_ING_ID NUMBER NOT NULL,
PERCENTAGE NUMBER(7,4) NOT NULL
);
CTL FILE
LOAD DATA
INFILE 'C:\Users\pramod.uthkam\Desktop\Apex\Database\SQL LOADER-PROD\ING_COMPONENT\ingc.csv'
BADFILE 'D:\SQl Loader\bad_orders.txt'
INTO TABLE ING_data
FIELDS
TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
ING_Component_ID ,
Parent_ING_ID ,
Child_ING_ID ,
Percentage
)
CSV FILE
1,3,4,95.0000
2,3,5,5.0000
3,6,7,5.0000
4,6,4,95.0000
5,18,19,19.0000
6,18,20,80.0000
7,18,21,1.0000
8,34,35,85.0000
LOG FILE
Record 1: Rejected - Error on table ING_COMPONENT, column ING_COMPONENT_ID.
ORA-01722: invalid number
Table ING_COMPONENT:
7 Rows successfully loaded.
1 Row not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 66048 bytes(64 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 7
Total logical records rejected: 1
Total logical records discarded: 0
BAD FILE
1,3,4,95.0000

I tried loading your file by recreating it as-is. For me it ran fine: all 8 rows got loaded, no issues. I was testing on Red Hat Linux.
Then I tried 2 things.
dos2unix myctl.ctl
Ran SQLLDR. All rows got inserted.
Then I tried:
unix2dos myctl.ctl
Ran SQLLDR again; all 8 records were rejected. So what I believe is that your first record's line ending is not in a format SQL*Loader can read. When you enter a blank line manually, your default environment (in my case, Unix) creates the correct line ending, and the records get read. I'm not sure, but I assume this based on my own attempt above.
So let's say you are loading this file on Windows (I assume this because of how your path looks). In your CSV file, add a blank line at the beginning, then remove it, and do the same after the first record (add a blank line after the first record, then remove it). Then try again. It may work, if the issue is what I suspect.
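The line-ending hypothesis above can be checked directly before re-running the load. A minimal sketch (plain Python, just for inspection; the sample bytes below are illustrative, not the asker's real file):

```python
def line_endings(data: bytes):
    """Classify the line terminator of each record in raw file bytes."""
    endings = []
    for line in data.splitlines(keepends=True):
        if line.endswith(b"\r\n"):
            endings.append("CRLF")   # Windows / DOS
        elif line.endswith(b"\n"):
            endings.append("LF")     # Unix
        elif line.endswith(b"\r"):
            endings.append("CR")     # stray carriage return
        else:
            endings.append("NONE")   # final line without a terminator
    return endings

# A file whose first record was saved with a different editor than the rest
# is exactly the failure mode suspected above: a mixed list of markers.
sample = b"1,3,4,95.0000\r\n2,3,5,5.0000\n3,6,7,5.0000\n"
print(line_endings(sample))
```

If the first record's marker differs from the rest, normalize the data file (not just the control file) with dos2unix and retry.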

Related

how to use oracle sqlldr to load excel csv file without comma at end

I have a csv file:
1,2,3
Control file: fields terminated by ','
However, when I load the data, I get ORA-01722: invalid number.
If I add a comma at the end of the line, like 1,2,3, then all three columns are loaded. Is there any option to load all three columns without the trailing comma?
Thank you all very much for the help. I saved the file on Windows and loaded it on Unix. I checked my file: it had Control-M (carriage return) characters. Once I deleted the \r, I was able to load successfully. Thank you all.
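What went wrong here can be reproduced in a few lines: with DOS line endings, the carriage return sticks to the last comma-separated field, so the last column is not a valid number. A small illustration (plain Python; dos2unix or a sed substitution does the real fix on the file):

```python
# What SQL*Loader effectively sees after splitting "1,2,3\r\n" on \n:
dos_line = "1,2,3\r"

fields = dos_line.split(",")
print(repr(fields[-1]))          # '3\r' -- not a valid number, hence ORA-01722

# Stripping the carriage return (what dos2unix does) repairs the last field.
clean = dos_line.rstrip("\r")
assert clean.split(",") == ["1", "2", "3"]
```

This also explains the trailing-comma workaround: the extra comma pushes the \r into a fourth, discarded field.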

Is it possible to remove empty lines in oracle DB?

Our problem probably concerns values already added to one of the tables in Oracle DB version 12.1.0.
In the CMS we see the values without empty rows:
But when we copy this text and paste it into Notepad, for example, we get an empty row (line break, blank row) after every line:
So at this point we suspect a problem with the data imported into the database. The data type of that field is VARCHAR2(2000); we have around 10,000 records in that table, and half of them include these empty rows after pasting. Is there any way to remove these empty lines in that column?
You can see in your dump that there is the sequence 13,13,10, which is Carriage Return, then Carriage Return + Line Feed.
https://www.petefreitag.com/item/863.cfm
If you replace 13,13,10 with 13,10 you should get the desired result:
replace(column_name,chr(13)||chr(13)||chr(10),chr(13)||chr(10))
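The replacement the SQL expression above performs can be sanity-checked outside the database. A quick sketch (chr(13) is \r, chr(10) is \n; the sample string is illustrative):

```python
# The stored text contains CR,CR,LF after each line; pasting renders the
# extra CR as a blank row. Collapsing CR,CR,LF to CR,LF removes it.
stored = "line one\r\r\nline two\r\r\n"

fixed = stored.replace("\r\r\n", "\r\n")
assert fixed == "line one\r\nline two\r\n"

# Equivalent Oracle expression, as in the answer above:
#   replace(column_name, chr(13)||chr(13)||chr(10), chr(13)||chr(10))
```

Run the UPDATE with this replace() once, and new imports should be cleaned the same way before insert.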

sqlldr ORA-01722: invalid number because of decimal number in a csv column

I am trying to load data from a csv file into an Oracle table.
I am using sqlldr with a control file.
Everything works fine, but in some cases a row doesn't get loaded because of a decimal number.
In the Oracle table the column is NUMBER(10) - this shouldn't be the problem.
And my control file looks like this (there are about 15 more columns, but basically it's about column quantity_1):
OPTIONS (SKIP=1)
LOAD DATA
INFILE *
APPEND
INTO TABLE ..
FIELDS TERMINATED BY ";" OPTIONALLY ENCLOSED BY '"'
(
Quantity_1, Quantity_2, Quantity_3,
)
In the csv file the rows for quantity_1 are like
2.58
4343
232
1212
and for the first row it gives the error:
ORA-01722: invalid number
Can anybody help with this?
With your column defined as NUMBER(10), it should have rounded 2.58 to 3 on insert. I suspect your real data is larger than 2.58. In any case, your column should be defined as NUMBER(12,2), that is, a total of 12 digits, 2 of them to the right of the decimal point, i.e. 9999999999.99.
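The precision/scale behaviour described above can be mimicked with Python's decimal module. A rough sketch (fits_number is a hypothetical helper for illustration, not an Oracle API; Oracle rounds to the scale first, then checks the precision):

```python
from decimal import Decimal, ROUND_HALF_UP

def fits_number(value: str, precision: int, scale: int) -> bool:
    """Approximate Oracle's NUMBER(precision, scale) check:
    round to `scale` decimal places, then count significant digits."""
    d = Decimal(value).quantize(Decimal(1).scaleb(-scale), rounding=ROUND_HALF_UP)
    return len(d.as_tuple().digits) <= precision

# 2.58 fits NUMBER(12,2) and is stored unchanged.
assert fits_number("2.58", 12, 2)

# A NUMBER(10) column (scale 0) would round 2.58 to 3 rather than reject it,
# which is why ORA-01722 points at parsing, not at the column definition.
assert Decimal("2.58").quantize(Decimal(1), rounding=ROUND_HALF_UP) == 3
```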

Select statement in hive return some columns with null value

I have seen this type of question asked many times, but those solutions did not work for me. I created an external Hive table, since the data comes from the output of a map-only job. Then, with the LOAD command, I gave the path to the specific file. It showed OK. But when I run a select * from table command, it returns some columns with null values. Each command I executed is shown in the error screenshot.
My delimiter in the file is ||, so I specified the same in the CREATE TABLE command too.
Here is my input file screenshot, and here is the error screenshot.
I have also tried a normal table instead of an external table; it showed the same error. I also tried specifying the delimiter as //|| and as \|\|. But none worked.
The problem you are facing is related to using multiple characters as the FIELD delimiter.
According to the documentation, the FIELD delimiter should be a single char:
row_format
: DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
[MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
[NULL DEFINED AS char] -- (Note: Available in Hive 0.13 and later)
You need to change your data to have only a single-char field delimiter.
If you cannot do that, the other approach is to use a staging table with a single field. Load your data into that table, and then for your actual target table, split the staging column on the || delimiter and insert. You need to make sure the field counts are consistent in the data, otherwise your final output will be off.
Reference:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-CreateTableCreate/Drop/TruncateTable
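The staging-table approach hinges on splitting each row on the literal ||, and that is also why \|\| appears in some attempts: in a regex, an unescaped | is the alternation operator, so the two pipes must be escaped. A small illustration using Python's re.split, analogous to Hive's split(col, '\\|\\|') (the sample row is made up):

```python
import re

row = "1||alice||42"

# A plain string split needs no escaping...
assert row.split("||") == ["1", "alice", "42"]

# ...but a regex-based split does, because | means "or" in a pattern.
assert re.split(r"\|\|", row) == ["1", "alice", "42"]
```

In Hive the equivalent staging-table insert would select split(stage_col, '\\|\\|')[0], [1], [2] into the target columns.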

How to skip the last records using sql-loader?

We are using flat files, but how do we skip the last records in flat files?
Use the following in your .ctl file if your tail record always starts with T. I haven't tested it in a while, but it worked for me the last time I loaded a staging table.
LOAD DATA
INFILE 'file.dat' BADFILE 'file.bad' DISCARDFILE 'file.dis'
APPEND
INTO TABLE tablename
-- This will skip the tail record
WHEN (01) <> 'T'
(
column names);
You can skip the header rows using the SKIP clause, but to skip the last records you will have to use the WHEN clause. Typically, your trailer records (last records) will not be identical to the other records in the file, and there should be an indicator specifying that a record is a trailer record. You need to construct a condition in your control file that trailer records do not satisfy.
Here is the Oracle documentation on the WHEN clause.
http://docs.oracle.com/cd/B14117_01/server.101/b10825/ldr_control_file.htm#i1005657
Here are some examples on conditional loading.
http://www.orafaq.com/wiki/SQL*Loader#Conditional_Load
Your original post needs more detail, but for completeness: the control file OPTIONS clause has a LOAD=n option that tells sqlldr how many rows to load. If you have 100 rows and don't want to load the last 5, specify LOAD=95.
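When there is no trailer marker and the row count is not known in advance, pre-processing the flat file before running sqlldr is the remaining option. A minimal sketch of both filters (drop_trailer is a hypothetical helper; the records are made up):

```python
def drop_trailer(lines, n=1):
    """Return all but the last n records: a pre-processing alternative
    when the trailer has no marker character to test in a WHEN clause."""
    return lines[:-n] if n else list(lines)

records = ["1,a", "2,b", "3,c", "T,trailer"]

# Positional: drop the final record regardless of its contents.
assert drop_trailer(records) == ["1,a", "2,b", "3,c"]

# Marker-based: the equivalent of the WHEN (01) <> 'T' control-file filter.
assert [r for r in records if not r.startswith("T")] == ["1,a", "2,b", "3,c"]
```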