Cursor and CSV using utl_file package - oracle

Hi, I want to create a CSV file using PL/SQL and UTL_FILE. For that I am opening a cursor and writing its rows to the file, but I don't want to write duplicate data, because I create that CSV file daily from the same table. Please help.
I tried using a cursor, but I have no idea how to restrict duplicate entries, since the CSV file is created from the same table on a daily basis.

A cursor selects data; it is its WHERE clause that filters which rows it returns.
Therefore, write it so that it fetches only the rows you're interested in. For example, one option is to use a timestamp column that records when each row was inserted into the table. The cursor would then be
select ...
from that_table
where timestamp_column >= trunc(sysdate)
to select only the data created today. It is up to you to change the condition to whatever cut-off you need.
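As a rough sketch (the directory object MY_DIR, the file name, and the column names below are placeholders, not taken from your question), the whole daily job could look like this:
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  -- MY_DIR must be an existing Oracle DIRECTORY object you have write access to
  l_file := UTL_FILE.FOPEN('MY_DIR', 'daily_extract.csv', 'w', 32767);
  -- only rows inserted today, so yesterday's run is not repeated
  FOR r IN (SELECT col1, col2
              FROM that_table
             WHERE timestamp_column >= TRUNC(SYSDATE))
  LOOP
    UTL_FILE.PUT_LINE(l_file, r.col1 || ',' || r.col2);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
EXCEPTION
  WHEN OTHERS THEN
    IF UTL_FILE.IS_OPEN(l_file) THEN
      UTL_FILE.FCLOSE(l_file);
    END IF;
    RAISE;
END;
/
Schedule that block once a day (DBMS_SCHEDULER, or cron plus SQL*Plus) and each run writes only the rows added since midnight.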

Related

how to create table definition from csv file and also copy data at the same time

I want to load data from a csv file into Vertica. I don't want to create the table and then copy the data in two separate steps. Instead, I want to create the table, specify the csv file and then let Vertica figure out the column definitions (names, data types) itself and then load the data.
Something like create table titanic_train () as COPY FROM '/data/train.csv' PARSER fcsvparser() rejected data as table titanic_train_rejected abort on error no commit;
Is it possible?
I guess that if a table has 100s of columns, then automating the create table, column definition and data copy would be much easier/faster than doing these steps separately.
It's always several steps, no matter what.
Use the built-in bits of Vertica:
CREATE FLEX TABLE foo();
COPY foo FROM '/data/mycsvs/foo.csv' PARSER fCsvParser();
SELECT COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW('foo');
-- THEN, either:
SELECT * FROM foo_view;
-- OR: create a ROS Table:
CREATE TABLE foo_ros AS SELECT * FROM foo_view;
Or get a CSV-to-DDL parser from the net, like https://github.com/marco-the-sane/d2l, install it, and then:
$ d2l -coldelcomma -chardelquote -drp -copy /data/mycsvs/foo.csv | vsql
So, in the second instance, it's one step, but it calls both d2l and vsql.

How to take backup as insert queries from oracle select statement inside UNIX batch job?

I wrote a UNIX batch job which updates a table with some WHERE conditions. Before updating those records, I need to take a backup (as INSERT statements) of the records returned by those WHERE conditions and store it in a ".dat" file. Could you please help with this?
The most straightforward way to create a backup of the table would be a CREATE TABLE ... AS SELECT statement that reuses the WHERE condition(s) of your update statement. For example, let's take a sample update statement:
UPDATE sometable
SET field1 = 'value'
WHERE company = 'Oracle'
This update would update the field1 column of every row where the company name is Oracle. You could create a backup of sometable by issuing the following command:
CREATE TABLE sometable_backup AS (SELECT * FROM sometable WHERE company = 'Oracle');
This will create a table called sometable_backup that will contain all of the rows that match the where clause of the update.
You can then use Data Pump or another utility to create an export .dat file of that specific table. You can use that .dat file to import into other databases.
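If you really do need the backup as INSERT statements in a ".dat" file rather than as a table or a dump, one option is to have the batch job call SQL*Plus and spool the generated statements. This is only a sketch using the sample table and columns above; adjust the column list, quoting and file path for your real table:
SET HEADING OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 32767 TRIMSPOOL ON
SPOOL /tmp/sometable_backup.dat
SELECT 'INSERT INTO sometable (field1, company) VALUES (''' ||
       field1 || ''', ''' || company || ''');'
  FROM sometable
 WHERE company = 'Oracle';
SPOOL OFF
EXIT
Date and numeric columns would need explicit TO_CHAR/TO_DATE formatting in the generated text, and character values containing quotes would need a REPLACE to escape them.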

Compare text file and table values for insert/update/delete

I have a text file which looks like the one below:
ID1~name1~city1~zipcode1~position1
ID2~name2~city2~zipcode2~position2
ID3~name3~city3~zipcode3~position3
ID4~name4~city4~zipcode4~position4
.
.
etc goes on...
This text file is the source file, and I want to split each line on the delimiter (~) and compare it with the table by ID.
If the ID is not in the table, an insert should be performed.
If the ID is available in the table but the other column values are different, then the table needs to be updated.
If the ID is not available in the text file but is available in the table, then the record should be deleted.
I googled it but could only find the page below:
https://www.experts-exchange.com/questions/27419804/VBScript-compare-differences-in-two-record-sets.html
Please help me with how I can proceed with VBScript.
Whose leg are you trying to pull? Obviously the desired/resulting table is just the contents of the input file, so use "load data infile" (or an equivalent bulk-load utility) to import the file.
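If the table lives in Oracle, that same full-reload idea would be a SQL*Loader control file along these lines (the table and column names here are assumptions, not taken from your post):
-- reload the whole table from the ~-delimited file on every run
LOAD DATA
INFILE 'source.txt'
TRUNCATE
INTO TABLE target_table
FIELDS TERMINATED BY '~'
TRAILING NULLCOLS
(id, name, city, zipcode, job_position)
TRUNCATE empties the table before every load, so after each run the table simply mirrors the file, which covers the insert, update and delete cases in one step.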

Temp Variables in Oracle SQL Loader

I need to upload data from flat files.
Platform/Version: Oracle 10g/Windows
My Flat file looks like below:
H,1,10302014
P,10.00,ABC
P,15.00,XYZ
P,14.75,BBY
T,3
First Record - Header (Row Indicator, FileType, Date)
Second to fourth - Detail records (Row Indicator, Amount, Name)
Last Record - Trailer (Row Indicator, Number of Detail Records)
create table Mytable
(Row_ind Varchar2(2),
Amount number(6,2),
name varchar2(15),
file_Dt date);
I need to use the date (10302014) from the header record while inserting the detail records. Is it possible?
Note:
The file size is over a million records, and I don't have update
permission on the file (the file is NOT in ASCII format)
If you're on Oracle 9i or above, there's a way to bind a value and use it later in the process, but I'm assuming you can tell the customer how to write or modify the control file.
I'm wondering if that might work here: use multiple INTO TABLE inserts (load the header record into a table, perhaps just to bind the column to a date column) and include that bound column on the succeeding inserts. Search the Oracle documentation on SQL*Loader for that. I found part of it here.
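A very rough sketch of that idea (the control file below and the extra mytable_header table are assumptions; SQL*Loader itself cannot carry a value from the header record onto the detail records, so the date still has to be copied across afterwards):
LOAD DATA
INFILE 'mydata.dat'
APPEND
-- header record goes to its own table, just to capture the file date
INTO TABLE mytable_header
  WHEN (1:1) = 'H'
  FIELDS TERMINATED BY ','
  (row_ind, file_type, file_dt DATE "MMDDYYYY")
-- detail records go to mytable; POSITION(1) resets parsing to the start of the record
INTO TABLE mytable
  WHEN (1:1) = 'P'
  FIELDS TERMINATED BY ','
  (row_ind POSITION(1), amount, name)
After the load, something like UPDATE mytable SET file_dt = (SELECT file_dt FROM mytable_header) WHERE file_dt IS NULL would stamp the header date onto the detail rows, assuming one header per load.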

How to create table dynamically based on the uploaded csv file column header using oracle apex

Based on the CSV file's column header, it should create a table dynamically and also insert the records of that CSV file into the newly created table.
Ex:
1) If I upload a file TEST.csv with 3 columns, it should create a table dynamically with three columns.
2) Again, if I upload a new file called TEST2.csv with 5 columns, it should create a table dynamically with five columns.
Every time it should create a table based on the uploaded CSV file's header.
How can I achieve this in Oracle APEX?
Thanks in advance.
Without creating new tables, you can treat the CSVs as tables using a TABLE function you can SELECT from. If you download the packages from the Alexandria Project, you will find a function that does just that inside CSV_UTIL_PKG (clob_to_csv is that function, but you will find other goodies in there).
You would just upload the CSV, store it in a CLOB column, and then build reports on it using the CSV_UTIL_PKG code.
If you must create a new table for the upload, you could still use this parser. Upload the file and then select just the first row (e.g. SELECT * FROM TABLE(csv_util_pkg.clob_to_csv(your_clob)) WHERE ROWNUM = 1). You could insert this row into an APEX collection using APEX_COLLECTION.CREATE_COLLECTION_FROM_QUERY to make it easy to then iterate over each column.
You would need to determine the datatype for each column but could just use VARCHAR2 for everything.
But if you are just using generic columns, you could just as easily store one additional column holding a name for this collection of records and keep all of the uploads in the same table. Just build another table to store the column names.
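If you do go down the create-a-table-per-upload route, the dynamic DDL part is just string building plus EXECUTE IMMEDIATE. A minimal sketch, assuming you have already parsed the header line into a list of column names (everything here is illustrative and not part of CSV_UTIL_PKG):
DECLARE
  TYPE t_cols IS TABLE OF VARCHAR2(128);
  l_cols t_cols := t_cols('EMP_ID', 'EMP_NAME', 'CITY');  -- parsed CSV header values
  l_sql  VARCHAR2(32767);
BEGIN
  l_sql := 'CREATE TABLE csv_upload_' || TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS') || ' (';
  FOR i IN 1 .. l_cols.COUNT LOOP
    -- DBMS_ASSERT rejects header values that are not valid simple SQL names
    l_sql := l_sql || CASE WHEN i > 1 THEN ', ' END
                   || DBMS_ASSERT.SIMPLE_SQL_NAME(l_cols(i)) || ' VARCHAR2(4000)';
  END LOOP;
  l_sql := l_sql || ')';
  EXECUTE IMMEDIATE l_sql;
END;
/
The insert of the data rows would then be generated dynamically against the same column list, using VARCHAR2 for everything as suggested above.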
Simply store this file as a BLOB if the structure is "dynamic".
You can use the XML data type for this use case too, but it won't be very different from a BLOB column.
There is also the SecureFiles feature since 11g. It is a new BLOB implementation that performs better than a regular BLOB and is good for unstructured or semi-structured data.
