Suppose you have a table:
CREATE TABLE Customer
(
batch_id NUMBER,
customer_name VARCHAR2(20),
customer_address VARCHAR2(100)
)
And suppose you have a control file to populate this table:
LOAD DATA INFILE 'customers.dat'
REPLACE
INTO TABLE Customer
(
batch_id ??????,
customer_name POSITION(001:020),
customer_address POSITION(021:120)
)
Is it possible to pass a value for batch_id to my control file when I run SQL*Loader? For example, is it possible to specify a bind variable (turning the question marks into :MY_AWESOME_BATCH_ID)?
A relatively easy way to achieve that is to create a stored function that returns the batch number and use it in the loader file.
create or replace function getBatchNumber return number as
begin
return 815;
end;
/
LOAD DATA INFILE 'customers.dat'
REPLACE
INTO TABLE Customer
(
batch_id "getBatchNumber",
customer_name POSITION(001:020),
customer_address POSITION(021:120)
)
Not easily, if I remember right, but here are a couple of alternatives:
If there's only going to be one process running SQL*Loader at a time, load nulls or a fixed value, and then run a SQL*Plus script afterwards as part of the process to update those rows to a sequence value; a sketch of this follows below.
Call a script which grabs the next sequence value for your batch ID and then spools out the control file, with the batch_id included as a constant.
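For the first alternative, a minimal SQL*Plus sketch might look like this (batch_seq is an assumed sequence name, and the load is assumed to leave batch_id null):
-- stamp the rows just loaded with a single new batch number
var batch_no number
exec select batch_seq.nextval into :batch_no from dual
update Customer
   set batch_id = :batch_no
 where batch_id is null;
commit;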
If it's acceptable to have BATCH_ID values generated automatically by incrementing on each load, then this worked for me. The 10-minute interval in the sample would need to be adjusted to the specific load: to be accurate, the load must complete within the specified interval, and the next load must not start sooner than that interval.
A drawback is that it slows down noticeably on large volumes - that's the price of running the MAX aggregate for every line.
LOAD DATA
...
INTO TABLE XYZ
(
...
BATCH_ID EXPRESSION "(select nvl(max(batch_id) + 1, 1) from extra_instruments_party_to where create_date < (sysdate - interval '10' minute))",
CREATE_DATE SYSDATE
)
Related
I have a requirement that I need to insert row number in a table based on value already present in the table. For example, the max row_nbr record in the current table is something like this:
+----------+----------+------------+---------+
| FST_NAME | LST_NAME | STATE_CODE | ROW_NBR |
+----------+----------+------------+---------+
| John | Doe | 13 | 123 |
+----------+----------+------------+---------+
Now, I need to insert more records with given FST_NAME and LST_NAME values. ROW_NBR needs to be generated while inserting the data into the table, with values auto-incrementing from 123.
I can't use a sequence, as my loading process is not the only process that inserts data into this table. Nor can I use a cursor, because with the high volume of data the TEMP space fills up quickly. I'm inserting data as given below:
insert into final_table
( fst_name,lst_name,state_code)
(select * from staging_table
where state_code=13);
Any ideas how to implement this?
It sounds like other processes are finding the current maximum row_nbr value and incrementing it as they do single-row inserts in a cursor loop.
You could do something functionally similar, either finding the maximum in advance and incrementing it (if you're already running this in a PL/SQL block):
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, variable_holding_maximum + rownum
from staging_table st
where st.state_code=13;
or by querying the table as part of the query, which doesn't need PL/SQL:
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, (select max(row_nbr) from final_table) + rownum
from staging_table st
where st.state_code=13;
db<>fiddle
But this isn't a good solution, because it doesn't prevent clashes from different processes and sessions trying to insert at the same time; neither would the cursor loop approach, unless it catches unique constraint errors and re-attempts with a new value.
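For completeness, that catch-and-retry pattern would look something like this sketch (it assumes a unique constraint exists on row_nbr; the literal values are just sample data):
declare
  l_row_nbr final_table.row_nbr%type;
begin
  loop
    begin
      -- take the current maximum plus one and try to claim it
      select nvl(max(row_nbr), 0) + 1 into l_row_nbr from final_table;
      insert into final_table (fst_name, lst_name, state_code, row_nbr)
      values ('Jane', 'Doe', 13, l_row_nbr);
      exit;  -- success
    exception
      when dup_val_on_index then
        null;  -- another session claimed the value; loop and retry
    end;
  end loop;
end;
/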
It would be better to use a sequence (or an auto-increment identity column, but you said you can't change the table structure); and you need to let the other processes continue to work without modification. You can still do that with a sequence-and-trigger approach, having the trigger always set the row_nbr value from the sequence, regardless of whether the insert statement supplied a value.
If you create a sequence that starts from the current maximum, with something like:
create sequence final_seq start with <current max + 1>
or without manually finding it:
declare
start_with pls_integer;
begin
select nvl(max(row_nbr), 0) + 1 into start_with from final_table;
execute immediate 'create sequence final_seq start with ' || start_with;
end;
/
then your trigger could just be:
create trigger final_trig
before insert on final_table
for each row
begin
:new.row_nbr := final_seq.nextval;
end;
/
Then your insert ... select statement doesn't need to supply or even think about the row_nbr value, so you can leave it as you have it now (except I'd avoid select * even in that construct, and list the staging table columns explicitly); and any existing inserts that do supply the row_nbr don't need to be modified and the value they supply will just be overwritten from the sequence.
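For example, with the staging columns listed explicitly (assuming the staging table uses the same column names as final_table):
insert into final_table (fst_name, lst_name, state_code)
select st.fst_name, st.lst_name, st.state_code
from staging_table st
where st.state_code = 13;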
db<>fiddle showing inserts with and without row_nbr specified.
The incoming file is in a mainframe/COBOL record layout: one single record that is more than 21,000 characters long. Please be aware of the OCCURS 350 TIMES clause, which makes the record length very long - a horizontal layout, instead of a row-like layout, in the incoming file.
id pic x(23).
idnum pic 9(04).
filler pic x(10).
grp occurs 350 times
grpkey1 PIC X(25).
grpkeynum PIC X(09).
grpsubkey PIC X(01).
grptyp PIC X(01).
grpst PIC X(08).
grpend PIC X(08).
filler PIC X(10).
Target Table Definition (Preferably Oracle External Table)
create table grpkeys (
id CHAR(23),
idnum CHAR(04),
filler10 CHAR(10),
grpkey1 CHAR(25),
grpkeynum CHAR(09),
grpsubkey CHAR(01),
grptyp CHAR(01),
grpst CHAR(08),
grpend CHAR(08),
filler20 CHAR(10)
)
I have to load the above record format from a file into a table (preferably a working Oracle external table, if possible). The id, idnum, and filler10 values need to be copied into all 350 rows created in the Oracle table (preferably an external table) for a single record of the incoming file. Please suggest the easiest way to accomplish this.
I'll stick to 5 columns for this example, but there should be no syntactic or performance restriction on scaling from 005 up to 350. The example assumes your file is on the Oracle database server in /tmp/test/test.txt.
My recommendation is to use an Oracle external table definition that effectively reads the data file "as is" (in all its 350-column glory), but does not worry about parsing out the 350-repeat field into components like grpkey1, grpsubkey, etc.
CREATE DIRECTORY TEST_DIR AS '/tmp/test';
CREATE TABLE TEST_XT
(
id VARCHAR2(23),
idnum INTEGER,
filler10 VARCHAR2(10),
GRP_001 varchar2(62),
GRP_002 varchar2(62),
GRP_003 varchar2(62),
GRP_004 varchar2(62),
GRP_005 varchar2(62)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY TEST_DIR
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
FIELDS
(
ID CHAR(23),
IDNUM INTEGER EXTERNAL (4),
FILLER10 CHAR(10),
GRP_001 CHAR(62),
GRP_002 CHAR(62),
GRP_003 CHAR(62),
GRP_004 CHAR(62),
GRP_005 CHAR(62)
)
)
LOCATION ('test.txt')
);
Then wrap that external table definition with a view definition that performs an UNPIVOT operation (plus some basic SUBSTR functions to slice each of the 350 unpivoted GRP fields into its constituent pieces).
CREATE OR REPLACE VIEW TEST_V AS
SELECT ID, IDNUM, FILLER10, GRP_NUM,
SUBSTR(GRP_STR,01,25) GRPKEY1,
SUBSTR(GRP_STR,26,09) GRPKEYNUM,
SUBSTR(GRP_STR,35,01) GRPSUBKEY,
SUBSTR(GRP_STR,36,01) GRPTYP,
SUBSTR(GRP_STR,37,08) GRPST,
SUBSTR(GRP_STR,45,08) GRPEND,
SUBSTR(GRP_STR,53,10) FILLER20
FROM
(
SELECT *
FROM TEST_XT
UNPIVOT
(GRP_STR FOR GRP_NUM IN
(
GRP_001 as 1,
GRP_002 as 2,
GRP_003 as 3,
GRP_004 as 4,
GRP_005 as 5
)
)
);
Of course, you can query the view directly, or load it into a standard table (insert into standard_table select * from test_v) when you need indexing, partitioning, and so on.
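For example, a one-off load into a conventional table (standard_table is assumed to already exist with matching columns; the append hint requests a direct-path insert):
insert /*+ append */ into standard_table
select * from test_v;
commit;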
You can also scale to the desired level of performance by adding parallelism to the external table:
ALTER TABLE TEST_XT PARALLEL 8;
I have a select query running on an Oracle database that takes only about fifteen seconds to run. However, when I try to create a table from it, either using CREATE TABLE ... AS SELECT or creating a blank table and inserting the rows, the create just keeps running and running (so far I've waited up to fifteen minutes with no result).
Below is my attempt at the create blank then insert method, which shows the structure of the table I'm creating and the data I'm trying to insert:
CREATE TABLE MYNEWTABLE
(mykey number(10), brand varchar2(255), day_id number(10), adateone date, p_id number(10), startdate date, enddate date, another_day_id number(10));
INSERT INTO MYNEWTABLE
select ns.mykey, ns.brand, oc.day_id, oc.day_date as adateone, tbut.p_id, tbut.startdate, tbut.enddate, cust.another_day_id
from TABLE_1 ns
RIGHT JOIN TABLE_2 tbut
ON ns.mykey = tbut.mykey
LEFT JOIN
TABLE_3 cust
ON ns.mykey = cust.mykey
LEFT JOIN DATE_TABLE oc
on cust.first_del_day_id = oc.day_id
where ns.brand = 'SOME VALUE';
What is the cause of the table's creation being so slow and how can I improve this?
Many thanks.
When any session hangs, the appropriate thing to check is V$SESSION_WAIT.
Execute:
select * from v$session_wait where sid = <your sid>
Depending on the result of the wait, you need to figure out which session is holding the lock you are waiting for.
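If it turns out to be a lock wait, the BLOCKING_SESSION column of V$SESSION (available since Oracle 10g) points straight at the holder:
select sid, serial#, event, blocking_session
from v$session
where sid = <your sid>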
It looks like your SQL runs using nested loops, so 15 seconds is the time to return the first N rows (where N depends on your client tool). When you start to fetch all rows and insert them into the new table, it takes much longer.
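One way to confirm the plan is with DBMS_XPLAN:
explain plan for <your select>;
select * from table(dbms_xplan.display);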
I want to have a column in a table that calculates its value depending on another column in the same table.
For example,
I have "Validity" Table with Columns "DateManufactured", "DateExpires"
The Date Expires column must calculate value suppose adding 30 days for Datemanufactured.
How can we do this in Visual Studio 2010 -> DataSet Design -> DataTable Column -> Properties -> Expression?
What could be the possible expression for this in terms of SQL Server Expressions?
Please suggest an optimal solution.
Thanks in advance.
I believe you are looking for DATEADD:
SELECT DATEADD(day, 30, '15 Dec 1988')
select dateadd(day,30,getdate())
This will take the current date (GETDATE()) and add 30 days to it.
I would suggest creating a stored procedure that inserts your data into the table, with parameters for the values that need to be inserted. You can then do your calculation based on your date parameter. Example:
Create Procedure InsertValidity
(
@City varchar(20),
@Area varchar(20),
...
@DatePosted datetime,
...
@UserID int -- type assumed; the original omitted it
)
as
declare @DurationFrom datetime
set @DurationFrom = (select DATEADD(dd, 30, @DatePosted))
insert into Validity (City, Area, ..., DatePosted, ..., DurationFrom, UserID)
values (@City, @Area, ..., @DatePosted, ..., @DurationFrom, @UserID)
This should solve your problem. Just complete the script by replacing the ... with your other columns, then execute the stored procedure from your application.
I have to build a process in Oracle/PL/SQL. I have to verify that the interval of time between start_date and end_date of a new row I create does not intersect the start_date/end_date intervals of the other rows.
Now I need to check each row for that condition, and if the condition is violated, the loop should stop and display a message such as "The interval of time given is not correct".
I don't know how to write repetitive instructions (loops) in Oracle/PL/SQL and I would appreciate your help.
I need a loop or something like that to verify, for each row in my table, that the interval of time given by date_hour_i and date_hour_e does not intersect the intervals of time given by the rest of the rows. One more specification: the dates in each row correspond to a client and an employee who performs a haircut for the client in the given interval of time. I want to prevent inserting a new row if, for the same or another client and employee, the new interval of time intersects the existing intervals. I hope I made myself clear.
Two links for your reading pleasure:
Time intervals with no overlaps
and
Avoiding overlap values...
Why check each row? Just query for existing rows whose start and end times overlap the new interval. If the result > 0, output the error message; else, insert.
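A minimal sketch of that check, using the column names from the question (the appointments table name is assumed):
select count(*)
from appointments
where :new_date_hour_i <= date_hour_e
  and :new_date_hour_e >= date_hour_i;
If the count is greater than zero, the new interval overlaps an existing one.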
I assume this will be during a BEFORE INSERT OR UPDATE trigger.
You will want to query the existing table for overlaps in the dates - but this will give a mutating-table trigger error (ORA-04091).
You can get around this by using PRAGMA AUTONOMOUS_TRANSACTION to run the query in a separate transaction.
Alternately, you could save each date range in a secondary table, and use that to query against on each insert... something like the following (uncompiled):
CREATE OR REPLACE TRIGGER mytrigger
BEFORE INSERT OR UPDATE ON mytable FOR EACH ROW
DECLARE
cnt number;
BEGIN
SELECT count(*) into cnt
FROM reserved_date_range
WHERE :new.begin_date <= end_dt
AND :new.end_date >= begin_dt;
if ( cnt > 0 ) then
raise_application_error(-20000,'Overlapping date ranges');
else
insert into reserved_date_range( begin_dt, end_dt )
values ( :new.begin_date, :new.end_date );
end if;
End;
/
Say your table is tab1, the start date column is stdate, and the end date column is endate;
also let the new start and end dates be in PL/SQL variables v_stdate and v_endate.
Your insert can then be something like:
insert into tab1 (stdate,endate)
select v_stdate,v_endate from dual
where not exists(
select 'overlap' from tab1 t1
where v_stdate between t1.stdate and nvl(t1.endate, v_endate)
or v_endate between t1.stdate and nvl(t1.endate, v_endate)
)
The solution to this problem is a bit complicated because of concurrency issues. In your case you are scheduling an event (or a resource), so I suppose you have a table that holds the resource (say, clients). Before you add another schedule (or event) for a client, you should lock that particular client record, like:
select client_id from Clients where client_id=p_client_id for update;
Then you can verify there are no overlaps, insert the new schedule, and commit. At this point the lock will be released. Any solution that does not use a serialization object is bound to be flawed due to concurrency issues. You can do it in your PL/SQL or in an AFTER INSERT trigger, but it is an absolute must to lock the actual resource record.
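A sketch of that whole flow as a PL/SQL procedure (the schedule table and its date_hour_i/date_hour_e columns are assumed from the question; Clients is the resource table from above):
create or replace procedure add_schedule (
  p_client_id   in number,
  p_date_hour_i in date,
  p_date_hour_e in date
) as
  l_client_id clients.client_id%type;
  l_cnt       pls_integer;
begin
  -- serialize all scheduling for this client
  select client_id into l_client_id
    from clients
   where client_id = p_client_id
     for update;
  -- with the client row locked, the overlap check cannot race
  -- against a concurrent insert for the same client
  select count(*) into l_cnt
    from schedule
   where client_id = p_client_id
     and p_date_hour_i <= date_hour_e
     and p_date_hour_e >= date_hour_i;
  if l_cnt > 0 then
    raise_application_error(-20000, 'The interval of time given is not correct');
  end if;
  insert into schedule (client_id, date_hour_i, date_hour_e)
  values (p_client_id, p_date_hour_i, p_date_hour_e);
  commit;  -- releases the lock on the client row
end;
/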