Temp Variables in Oracle SQL Loader

I need to upload data from flat files.
Platform/Version: Oracle 10g/Windows
My flat file looks like this:
H,1,10302014
P,10.00,ABC
P,15.00,XYZ
P,14.75,BBY
T,3
First record - Header (Row Indicator, File Type, Date)
Second to fourth records - Detail (Row Indicator, Amount, Name)
Last record - Trailer (Row Indicator, Number of Detail Records)
create table Mytable
(Row_ind Varchar2(2),
Amount number(6,2),
name varchar2(15),
file_Dt date);
I need to use the date (10302014) from the header record while inserting the detail records. Is this possible?
Note:
The file has over a million records and I don't have update
permission on the file (the file is NOT in ASCII format).

If you're on Oracle 9i or above, there's a way to bind a value and use it later in the process, but I'm assuming you can tell the customer how to write or modify the control file.
I'm wondering if that might work here: use multiple INTO TABLE clauses, let the header record's insert bind the date (to a date column in a table, maybe just for that purpose), and have the succeeding detail inserts include the bound value. Search the Oracle documentation on SQL*Loader for this. I found part of it here.
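A sketch of one commonly suggested variant of that trick, using a small PL/SQL package to remember the header date (the package HDR_PKG and its functions are my own invention, and a conventional-path load is assumed, since the approach relies on records being processed in file order within one session):
create or replace package hdr_pkg as
  function set_dt(p_dt in varchar2) return date;
  function get_dt return date;
end;
/
create or replace package body hdr_pkg as
  g_dt date;  -- remembered for the rest of the loader session
  function set_dt(p_dt in varchar2) return date is
  begin
    g_dt := to_date(p_dt, 'MMDDYYYY');
    return g_dt;
  end;
  function get_dt return date is
  begin
    return g_dt;
  end;
end;
/
The control file would then route records by their row indicator (the file name mydata.dat is a placeholder):
LOAD DATA
INFILE 'mydata.dat'
APPEND
INTO TABLE mytable
WHEN (1:1) = 'H'
FIELDS TERMINATED BY ',' TRAILING NULLCOLS
(row_ind POSITION(1) CHAR,
 file_type FILLER,
 file_dt "hdr_pkg.set_dt(:file_dt)")
INTO TABLE mytable
WHEN (1:1) = 'P'
FIELDS TERMINATED BY ',' TRAILING NULLCOLS
(row_ind POSITION(1) CHAR,
 amount,
 name,
 file_dt EXPRESSION "hdr_pkg.get_dt()")
POSITION(1) resets field scanning for each INTO TABLE clause, and the trailer record matches neither WHEN clause, so it is discarded. Note that the header row also lands in MYTABLE here (with AMOUNT and NAME null); you could point the first INTO TABLE at a separate one-row staging table instead.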

Related

Cursor and CSV using utl_file package

Hi, I want to create a CSV file using PL/SQL UTL_FILE. For that I am creating a cursor, but I don't want to enter duplicate data, because I want to create that CSV file daily from the same table. Please help.
I tried with a cursor, but I have no idea how to restrict duplicate entries, because I want to create the CSV file from the same table on a daily basis.
A cursor selects data; it is its WHERE clause that filters which data it will return.
Therefore, set it so that it fetches only the rows you're interested in. For example, one option is to use a timestamp column which records when each row was inserted into the table. The cursor would then
select ...
from that_table
where timestamp_column >= trunc(sysdate)
to select data created today. It is up to you to set it to any other value you want.
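As a minimal sketch (assuming a hypothetical INSERTED_AT timestamp column on THAT_TABLE and a directory object EXPORT_DIR that UTL_FILE is allowed to write to):
declare
  l_file utl_file.file_type;
begin
  l_file := utl_file.fopen('EXPORT_DIR', 'daily_export.csv', 'w');
  -- fetch only rows inserted today, so earlier exports are not duplicated
  for r in (select id, name
              from that_table
             where inserted_at >= trunc(sysdate))
  loop
    utl_file.put_line(l_file, r.id || ',' || r.name);
  end loop;
  utl_file.fclose(l_file);
end;
/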

How do I change column type in Apex 20.1 from varchar to float?

I imported an Excel spreadsheet into Apex 20.1. Several of my columns were formatted as currency, e.g. $30, but they've all been imported as varchar. Many of the cells were empty, so maybe that's why it didn't automatically recognise the column type.
There didn't seem to be a way to alter the table definition before it loaded the data either.
To change it back to a numeric type (float?), do I need to add another column, import the data from the original column recast as float, delete the first column, and then rename the new column back to the original name?
I also want to truncate some varchar(4000) down to varchar(100) but that's another question.
If the column looked exactly like $30, then yes - it is a string, not a number, so Oracle created a VARCHAR2 column. If you had removed the formatting in Excel before doing anything with Apex, it would have been a NUMBER.
Now, yes - as you said (see the sketch after these steps):
create a new column
update table and set new column to numeric value of the old column
drop the old column
rename new column to the "old/original" name
Or, alternatively (in place of steps #3 and #4):
empty the old column
modify its datatype
move data back into it
drop the new column
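In SQL, the first approach might look like this (a sketch, assuming a hypothetical table T whose PRICE column holds values like $30):
alter table t add (price_num number);
-- strip the currency symbol before converting; empty cells become NULL
update t set price_num = to_number(replace(price, '$'));
alter table t drop column price;
alter table t rename column price_num to price;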
As for truncating a VARCHAR2(4000) column to (100): first shorten its contents to 100 characters, then modify its datatype:
update your_table set that_column = substr(that_column, 1, 100);
alter table your_table modify that_column varchar2(100);
Because of such problems, I prefer to
precreate the target table (using the create table command), specifying each column's datatype as I want it to be
load the file contents into such a table

Replace specific junk characters from column in hive

I have an issue where one of the columns loaded in a Hive table contains a junk character ("~) suffixed to the actual value (ABC). So the visible value for this column is (ABC"~).
This column can have either ABC (or any such string) or NULL. The table is huge and UPDATE is not an option here.
My thought is to create a temp table whose column contains either the string (ABC) or NULL, removing the junk character ("~) completely while copying the data from the original table to this temp table.
Any help on how I can remove this junk? I tried using the regexp functions, but with no success. Any suggestions?
I was not using regexp_replace properly; my fault.
The data loaded initially in the table had the extra characters attached to a column's values. For example: if the column's actual value was Adf452, then the data contained in the cell was Adf452"~.
So I loaded the data into a temp table like this:
insert overwrite table tempTable select colA, colB, regexp_replace(colC, "\"~", ""), partitionedCol from origTable;
This simply loaded the data in tempTable without those junk characters.

Insert part of data from csv into oracle table

I have a CSV (pipe-delimited) file as below:
ID|NAME|DES
1|A|B
2|C|D
3|E|F
I need to insert the data into a temp table where I already have SQL*Loader in place, but my table has only one column. Below is the control file configuration for loading from the CSV.
OPTIONS (SKIP=1)
LOAD DATA
CHARACTERSET UTF8
TRUNCATE
INTO TABLE EMPLOYEE
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS
(
NAME
)
How do I select only the 2nd column's data from the CSV and insert it into the single column of the table EMPLOYEE?
Please let me know if you have any questions.
If you're using a filler field, you don't need a matching column in the database table - that's the point, really. And as long as you know the field you're interested in is always the second one, you don't need to modify the control file when there are extra fields in the file; you just never specify them.
So this works, with just a filler ID field added and the three-field data file you showed:
OPTIONS (SKIP=1)
LOAD DATA
CHARACTERSET UTF8
TRUNCATE
INTO TABLE EMPLOYEE
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS
(
ID FILLER,
NAME
)
Demoed with:
SQL> create table employee (name varchar2(30));
$ sqlldr ...
Commit point reached - logical record count 3
SQL> select * from employee;
NAME
------------------------------
A
C
E
Adding more fields to the data file makes no difference, as long as they are after the field you are actually interested in. The same thing works for external tables, which can be more convenient for temporary/staging tables, as long as the CSV file is available on the database server.
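For comparison, the external table version might look something like this (a sketch, assuming a directory object DATA_DIR pointing at the folder that holds emp.csv; fields listed in the access parameters that have no matching table column are simply ignored):
create table employee_ext (name varchar2(30))
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    skip 1
    fields terminated by '|'
    missing field values are null
    (id, name, des)
  )
  location ('emp.csv')
);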
Columns in the data file which need to be excluded from the load can be defined as FILLER.
In the given example, list all incoming fields and add FILLER to those columns that should be ignored, e.g.
(
ID FILLER,
NAME,
DES FILLER
)
Another issue here is ignoring the header line of the CSV; just use the OPTIONS clause, e.g.
OPTIONS(SKIP=1)
LOAD DATA ...
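Put together, the whole control file might look like this (a sketch, assuming the same single-column EMPLOYEE table as above):
OPTIONS (SKIP=1)
LOAD DATA
CHARACTERSET UTF8
TRUNCATE
INTO TABLE EMPLOYEE
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS
(
ID FILLER,
NAME,
DES FILLER
)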

How to create table dynamically based on the uploaded csv file column header using oracle apex

Based on the CSV file's column header, it should create a table dynamically and also insert the records of that CSV file into the newly created table.
Ex:
1) If I upload a file TEST.csv with 3 columns, it should create a table dynamically with three columns.
2) Again, if I upload a new file called TEST2.csv with 5 columns, it should create a table dynamically with five columns.
Every time it should create a table based on the uploaded CSV file's header.
How do I achieve this in Oracle APEX?
Thanks in advance.
Without creating new tables, you can treat the CSVs as tables using a TABLE function you can SELECT from. If you download the packages from the Alexandria Project you will find a function that does just that inside CSV_UTIL_PKG (clob_to_csv is that function, but you will find other goodies in there too).
You would just upload the CSV, store it in a CLOB column, and then build reports on it using the CSV_UTIL_PKG code.
If you must create a new table for the upload, you could still use this parser. Upload the file and then select just the first row (e.g. SELECT * FROM csv_util_pkg.clob_to_csv(your_clob) WHERE ROWNUM = 1). You could insert this row into an APEX collection using APEX_COLLECTION.CREATE_COLLECTION_FROM_QUERY to make it easy to iterate over each column.
You would need to determine the datatype for each column, but could just use VARCHAR2 for everything.
But if you are just using generic columns, you could just as easily store one additional column holding a name for this collection of records and keep all of the uploads in the same table. Just build another table to store the column names.
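A sketch of querying the parser (assuming the Alexandria CSV_UTIL_PKG is installed, that it returns rows with generic C001, C002, ... columns as in its published implementation, and that UPLOADED_FILES is a hypothetical table with a CLOB column FILE_CLOB):
select t.c001, t.c002, t.c003
  from uploaded_files f,
       table(csv_util_pkg.clob_to_csv(f.file_clob)) t
 where f.id = 1;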
Simply store the file as a BLOB if the structure is "dynamic".
You can use the XML data type for this use case too, but it won't be very different from a BLOB column.
There is also the SecureFiles feature, available since 11g. It is a new BLOB implementation that performs better than regular BLOBs and is good for unstructured or semi-structured data.
