External Table Columns - Oracle

I have the following code. The issue is that we expect three columns in the file, but sometimes the other team sends us four columns. Instead of failing the load, it loads the first three columns. When the file has fewer than three columns it fails, which is expected. What logic do I need to add so that the load fails when an extra column is present in the file?
CREATE TABLE TESTING_DUMP (
  "FIELD_1" NUMBER,
  "FIELD_2" VARCHAR2(5),
  "FIELD_3" VARCHAR2(5)
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY MY_DIR
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
    BADFILE "MY_DIR":"TEST.bad"
    LOGFILE "MY_DIR":"TEST.log"
    READSIZE 1048576
    FIELDS TERMINATED BY "|" LDRTRIM
    MISSING FIELD VALUES ARE NULL
    REJECT ROWS WITH ALL NULL FIELDS
    (
      "LOAD" CHAR(1),
      "FIELD_1" CHAR(5),
      "FIELD_2" INTEGER EXTERNAL(5),
      "FIELD_3" CHAR(5)
    )
  )
  LOCATION
  (
    'Test.xls'
  )
) REJECT LIMIT 0;
File Test.xls has the sample content below. The second line is correct. The load should fail for the first line, but it does not.
|11111|22222|33333|AAAAA
|22222|33333|44444|

I wouldn't know how to do that in a single step, so I'll suggest a workaround - see if it helps.
This is the target table, which is - at the end - supposed to contain valid rows only:
SQL> create table ext_target
2 (col1 number,
3 col2 varchar2(5),
4 col3 varchar2(5));
Table created.
The external table contains only one column, which will hold the whole row (i.e. no separate columns):
SQL> create table ext_dump
2 (col varchar2(100))
3 organization external (
4 type oracle_loader
5 default directory ext_dir
6 access parameters (
7 records delimited by newline
8 fields terminated by ','
9 missing field values are null
10 (
11 col char(100) )
12 )
13 location ('test.txt')
14 )
15 reject limit unlimited;
Table created.
This is the whole file contents:
|11111|22222|33333|AAAAA
|22222|33333|44444|
|55555|66666|
External table contains the whole file (nothing is rejected):
SQL> select * from ext_dump;
COL
--------------------------------------------------------------------------------
|11111|22222|33333|AAAAA
|22222|33333|44444|
|55555|66666|
Insert only valid rows into the target table (so far, there are two conditions: there shouldn't be a 4th "column", and there must be exactly four | separators):
SQL> insert into ext_target (col1, col2, col3)
2 select regexp_substr(col, '\w+', 1, 1),
3 regexp_substr(col, '\w+', 1, 2),
4 regexp_substr(col, '\w+', 1, 3)
5 from ext_dump
6 where regexp_substr(col, '\w+', 1, 4) is null
7 and regexp_count(col, '\|') = 4;
1 row created.
The only valid row:
SQL> select * from ext_target;
COL1 COL2 COL3
---------- ----- -----
22222 33333 44444
SQL>
Now, you can adjust the where clause any way you want; what I posted is just an example.
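For instance, the two predicates could be collapsed into one anchored pattern - a sketch only, which assumes the fields are strictly alphanumeric (`\w`):

```sql
-- Sketch: a row is valid only if it looks exactly like |field|field|field|
-- (the anchors reject both extra and missing columns in a single test).
insert into ext_target (col1, col2, col3)
select regexp_substr(col, '[^|]+', 1, 1),
       regexp_substr(col, '[^|]+', 1, 2),
       regexp_substr(col, '[^|]+', 1, 3)
  from ext_dump
 where regexp_like(col, '^\|\w+\|\w+\|\w+\|$');
```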

How about adding a fourth field definition that maps to a datatype sure to fail, like a date column with a funky format sure not to be seen? The "MISSING FIELD VALUES ARE NULL" should render it NULL when not present, and the datatype conversion should error when it is present.
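That suggestion could look something like this against the original field list - a sketch only; the "POISON" field name and its deliberately implausible date mask are made up here, and the behaviour should be verified against a real file:

```sql
-- Sketch: a 4th "poison" field. MISSING FIELD VALUES ARE NULL leaves it
-- NULL for 3-column rows; a 4th value would hit a DATE mask it cannot
-- satisfy, the conversion errors, and REJECT LIMIT 0 fails the load.
FIELDS TERMINATED BY "|" LDRTRIM
MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
(
  "LOAD"    CHAR(1),
  "FIELD_1" CHAR(5),
  "FIELD_2" INTEGER EXTERNAL(5),
  "FIELD_3" CHAR(5),
  "POISON"  CHAR(20) DATE_FORMAT DATE MASK "J-SSSSS"  -- hypothetical, unlikely-to-match mask
)
```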

Related

Empty value is giving wrong value after being mapped

I have three tables, all of which have a column called RECORD_TYPE that is present as empty records in the source file.
Table 1:
|RECORD_TYPE|
----------
|('null') |
Table 2:
|RECORD_TYPE|
----------
|"" |
Table 3:
|RECORD_TYPE|
----------
|"" |
The mapped column has the same datatype throughout the three tables, i.e. VARCHAR2(255).
Table 1 shows the correct result --> ('null'), but the other two are giving empty quotation marks.
DDL for Table 1 :
CREATE TABLE "ODSSTAGE"."INXN_LA_RENEWAL_STG"
( col1,
col2,
.
.
"RECORD_TYPE" VARCHAR2(255 BYTE)
)
DDL for Table 2 :
CREATE TABLE "ODSSTAGE"."INXN_LA_TERMINATION_STG"
( col1,
col2,
.
.
.
"RECORD_TYPE" VARCHAR2(255 BYTE)
)
DDL for Table 3 :
CREATE TABLE "ODSSTAGE"."INXN_LA_NEW_STG"
( col1,
col2,
..
..
..
"RECORD_TYPE" VARCHAR2(255 BYTE)
)
The source is the same for all three.
Please keep in mind: the "" shows as if the record is empty, but when I copy and paste the record into Notepad or somewhere else, I still get "".
What do you get if you use SELECT DUMP(record_type) FROM ODSSTAGE.INXN_LA_RENEWAL_STG or SELECT DUMP(record_type) FROM ODSSTAGE.INXN_LA_NEW_STG?
For the first I get just plain NULL, and for the second I get Typ=1 Len=1: 13.
You do not have a NULL (empty) value in the ODSSTAGE.INXN_LA_NEW_STG table; you have a single carriage return (ASCII 13) character.
You can find those rows using:
SELECT *
FROM ODSSTAGE.INXN_LA_NEW_STG
WHERE record_type = CHR(13);
Or update them to a NULL value using:
UPDATE ODSSTAGE.INXN_LA_NEW_STG
SET record_type = NULL
WHERE record_type = CHR(13);
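If the file comes from Windows, the stray CHR(13) usually means the file has CRLF line endings while the load says `records delimited by newline`. A preventive sketch - the table name, directory, file name and column list are placeholders, not the poster's real DDL:

```sql
-- Sketch: declare the CRLF terminator explicitly so the trailing CR is
-- consumed as part of the record delimiter and never lands in the last field.
CREATE TABLE stage_fixed (
  record_type VARCHAR2(255)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY '\r\n'
    FIELDS TERMINATED BY '|'
    MISSING FIELD VALUES ARE NULL
    ( record_type CHAR(255) )
  )
  LOCATION ('stage.txt')
)
REJECT LIMIT UNLIMITED;
```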

Insert statement with blank values without defining all columns

I need to insert 100+ rows of data into a table that has 25 text columns.
I only want to insert data into some of those columns and have the rest represented by a white space.
(Note: text fields on PeopleSoft tables are defined as NOT NULLABLE, with a single white space character used to indicate no data instead of NULL.)
Is there a way to write an insert statement that does not define all the columns along with the blank space? As an example:
INSERT INTO CUST.RECORD(BUSINESS_UNIT, PROJECT_ID, EFF_STATUS, TMPL, DESCR) VALUES('TOO1','PROJ1','A','USA00','USA00 CONTRACT');
For every other column in CUST.RECORD I'd like to insert ' ' without naming the column or the space in the insert.
One way is to set a default value in the table definition, like this:
CREATE TABLE CUST.RECORD(
id NUMBER DEFAULT detail_seq.NEXTVAL,
master_id varchar2(10) DEFAULT ' ',
description VARCHAR2(30)
);
Edit: for your table you can use:
alter table CUST.RECORD modify( col2 varchar2(10) default ' ' );
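With 25 text columns, the ALTER statements could also be generated from the data dictionary rather than typed by hand - a sketch; review the generated DDL before running it:

```sql
-- Sketch: emit one ALTER per VARCHAR2 column of CUST.RECORD,
-- each setting a single-space default.
SELECT 'alter table CUST.RECORD modify ("' || column_name
       || '" default '' '');' AS ddl
FROM   all_tab_columns
WHERE  owner = 'CUST'
AND    table_name = 'RECORD'
AND    data_type = 'VARCHAR2';
```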
You do not have to supply a value for a specific column IF either condition is true:
The column is defined as nullable. That is, it was NOT defined with the 'not null' clause.
or
The column is defined with a default value
SQL> create table my_test (my_id number not null,
2 fname varchar2(10), -- nullable
3 dob date default sysdate -- default value
4 )
5 ;
Table created.
SQL> --
SQL> -- only supplying value for my_id
SQL> insert into my_test(my_id) values (1);
1 row created.
SQL> --
SQL> -- and see the results
SQL> select *
2 from my_test;
MY_ID FNAME DOB
1 12-MAR-21
1 row selected.
SQL> --
SQL> select my_id,
2 nvl(fname,'NULL'),
3 dob
4 from my_test;
MY_ID NVL(FNAME, DOB
1 NULL 12-MAR-21
1 row selected.

External table data error in Toad for Oracle

I am trying to create an external table in Toad but I am getting the error shown below:
Here is my code for the external table. It executes successfully, but when I click on the Data tab in Toad it gives the error shown in the screenshot above.
CREATE TABLE emp_load
( country_id CHAR(5),
country_name VARCHAR(50),
region_id number
)
ORGANIZATION EXTERNAL
(TYPE ORACLE_LOADER
DEFAULT DIRECTORY OUTER
ACCESS PARAMETERS
(RECORDS DELIMITED BY NEWLINE
FIELDS (country_id CHAR(2),
country_name VARCHAR(40),
region_id number
)
)
LOCATION ('externalfile1.csv')
);
Here's an example which actually works. See if it helps.
My CSV file:
HR,Croatia,385
SLO,Slovenia,386
Create the external table - but don't forget to:
- create the directory (as an Oracle object, using the SYS account)
- grant read (and write?) privileges on that directory to the user who will be using it
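Those two prerequisites could be sketched as follows (run as a privileged user; the OS path and the grantee are placeholders):

```sql
-- Sketch: the directory object points at an OS path on the database server.
CREATE OR REPLACE DIRECTORY ext_dir AS '/u01/app/oracle/ext_data';
GRANT READ, WRITE ON DIRECTORY ext_dir TO scott;
```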
SQL> create table emp_load
2 (country_id varchar2(5),
3 country_name varchar2(50),
4 region_id varchar2(5)
5 )
6 organization external
7 (type oracle_loader
8 default directory ext_dir
9 access parameters
10 (records delimited by newline
11 fields terminated by ','
12 (country_id char(5),
13 country_name char(50),
14 region_id char(5)
15 )
16 )
17 location ('externalfile1.txt')
18 )
19 reject limit unlimited;
Table created.
SQL> select * from emp_load;
COUNT COUNTRY_NAME REGIO
----- -------------------------------------------------- -----
HR Croatia 385
SLO Slovenia 386
SQL>

How to read field with comma through Oracle external table

I have a position-separated text file which I have to read through an Oracle external table. In that position-separated file I have a name field that contains a comma. For example:
123 abc,def 456. So I have to insert the data into 3 columns. The first column would have 123, the second column would have abc,def and the third column would have 456. In the access parameters I have given "records delimited by newline". But when selecting data from the external table, it only gives the data before the comma (abc). But I want to read abc,def. Can anybody help me with this?
The following works for me:
CREATE TABLE test
(
col_1 NUMBER,
col_2 VARCHAR2(30),
col_3 NUMBER
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY MY_DIR
ACCESS PARAMETERS
( RECORDS DELIMITED BY '\r\n'
FIELDS TERMINATED BY ' '
OPTIONALLY ENCLOSED BY '"'
MISSING FIELD VALUES ARE NULL
( col_1, col_2, col_3
)
)
LOCATION (MY_DIR:'test.txt')
)
REJECT LIMIT UNLIMITED;
Bear in mind that "records delimited by '\r\n'" may be specific to Windows, but my results are as below:
SQL> select * from test
2 /
COL_1 COL_2 COL_3
---------- ------------------------------ ----------
123 ABC,DEF 123
789 ABCD,EF 123
456 A,B,C,DEF 123
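Since the file is described as position-separated, another option is to skip delimiters entirely and read fixed byte ranges - a sketch whose offsets are guesses from the sample row "123 abc,def 456":

```sql
-- Sketch: POSITION(start:end) reads fixed byte ranges, so embedded
-- commas in col_2 are kept as ordinary data.
CREATE TABLE test_pos (
  col_1 NUMBER,
  col_2 VARCHAR2(30),
  col_3 NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY MY_DIR
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS (
      col_1 POSITION(1:3)   CHAR(3),
      col_2 POSITION(5:11)  CHAR(7),
      col_3 POSITION(13:15) CHAR(3)
    )
  )
  LOCATION ('test.txt')
)
REJECT LIMIT UNLIMITED;
```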

How to remove a column value from flat files, or replace it with another value, while loading data from flat files into Oracle tables

I have one temp table which is empty now. I want to load the data from a flat file into the Oracle temp table. One column, col3, of the flat file contains "X", but in the table I want to insert "abc". Is it possible to remove the "X" value from the flat file, and if so, how? Or can the value be replaced from "X" to "abc"?
SQL*Loader lets you apply SQL operators to fields, so you can manipulate the value from the file.
Let's say you have a simple table like:
create table your_table(col1 number, col2 number, col3 varchar2(3));
and a data file like:
1,42,xyz
2,42,
3,42,X
then you could make your control file replace an 'X' value in col3 with the fixed value 'abc' using a case expression:
load data
replace
into table your_table
fields terminated by ',' optionally enclosed by '"'
trailing nullcols
(
col1,
col2,
col3 "CASE WHEN :COL3 = 'X' THEN 'abc' ELSE :COL3 END"
)
Running that file through with that control file inserts three rows:
select * from your_table;
COL1 COL2 COL
---------- ---------- ---
1 42 xyz
2 42
3 42 abc
The 'X' has been replaced, the other values are retained.
If you want to 'remove' the value, rather than replacing it, you could do the same thing but with null as the fixed value:
col3 "CASE WHEN :COL3 = 'X' THEN NULL ELSE :COL3 END"
or you could use nullif or defaultif:
col3 nullif(col3 = 'X')
DECODE, right?
SQL> create table test (id number, col3 varchar2(20));
Table created.
SQL> $type test25.ctl
load data
infile *
replace into table test
fields terminated by ',' trailing nullcols
(
id,
col3 "decode(:col3, 'x', 'abc', :col3)"
)
begindata
1,xxx
2,yyy
3,x
4,123
SQL>
SQL> $sqlldr scott/tiger@orcl control=test25.ctl log=test25.log
SQL*Loader: Release 11.2.0.2.0 - Production on Čet Ožu 29 12:57:56 2018
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Commit point reached - logical record count 3
Commit point reached - logical record count 4
SQL> select * From test order by id;
ID COL3
---------- --------------------
1 xxx
2 yyy
3 abc
4 123
SQL>