Oracle insert procedure inserts only 1 row instead of bulk - oracle

I am attempting to grab a variable max date from a table, then use that variable to insert into another table the records that are greater than that max date. I have created the procedure and tested it, but it only inserts 1 record each time it runs as a dbms_scheduler job scheduled every 30 minutes. My test case allowed the first run to insert 6 rows, but the first job run inserted only 1 of the 6 records, the next run inserted 1 more record, and so on. Ultimately this concept will be used to append a few thousand rows every 30 minutes as a scheduled job. What is the most effective way to run this type of procedure quickly and bulk insert the rows? I was considering altering the table to NOLOGGING and dropping any indexes, then rebuilding them after the insert. What is the best approach? Thank you in advance.
Here is my code:
create or replace procedure update_cars
as
  v_date date;
begin
  execute immediate 'alter session set NLS_DATE_FORMAT=''DD-MON-YY HH24:MI:SS''';

  select max(inventory_date)
    into v_date
    from car_equipment;

  insert /*+ APPEND */ into car_equipment (count_cars, equipment_type, location, inventory_date, count_inventory)
  select count_cars, equipment_type, location, inventory_date, count_inventory
    from car_source
   where inventory_date > v_date;
end;

Why are you altering session? What benefit do you expect from it?
The code you wrote can be "simplified" to
create or replace procedure update_cars
as
begin
  insert into car_equipment (count_cars, equipment_type, ...)
  select s.count_cars, s.equipment_type, ...
    from car_source s
   where s.inventory_date > (select max(e.inventory_date) from car_equipment e);
end;
If the code inserts only one row, then check the date values in both the car_equipment and car_source tables. Without sample data, I'd say the code itself is OK (at least, it looks OK to me).
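To see what is happening at the boundary, it can help to compare the max dates and the pending row count directly. Here is a minimal diagnostic sketch against the tables from the question:

select (select max(inventory_date) from car_equipment) as max_equipment_date,
       (select max(inventory_date) from car_source) as max_source_date,
       (select count(*)
          from car_source
         where inventory_date > (select max(inventory_date) from car_equipment)) as rows_pending
  from dual;

If rows_pending is 1 right before each job run, the procedure is doing exactly what it was told, and the date values themselves (e.g. their time components) are the thing to investigate.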
If you'll be inserting a few thousand rows every 30 minutes, that shouldn't be a problem as Oracle is capable of handling that easily.

Related

Automatically delete new records with specific values - ORACLE

I have a table in a database (Oracle 11g) that receives roughly 45,000 new records each day. Our organization has roughly 15 items (each has a predetermined, static unique value) and I am looking to either delete these records automatically or change a specific value in these records' columns before my batch job packages the transactions and sends them off. Any suggestions on the best way to do this? These transactions are only 10-20 of the 45,000, so checking each one as it is entered seems like it may cost too much. The values come in periodically through the day via a DTS package from a SQL Server 2000 server; and yes, 2000 is end of life, and we will be upgrading early next year.
Sample below: accept only values of 0 or greater; if a value is less than 0, we change it to 99999.
create table my_table (val int);

create or replace trigger my_trigger
before insert on my_table
for each row
begin
  if :new.val < 0 then
    :new.val := 99999;
  end if;
end my_trigger;

insert into my_table values (0);
insert into my_table values (1);
insert into my_table values (-1);
select * from my_table;

       VAL
----------
         0
         1
     99999
If you want to prevent inserting the "wrong" values altogether, a "silent insert reject" is not recommended; you'd better either raise an exception in the trigger or set a constraint, see the discussion here: Before insert trigger
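For illustration, here is a minimal sketch of both alternatives, reusing the my_table example above:

-- Option 1: raise an exception from the trigger
create or replace trigger my_trigger
before insert on my_table
for each row
begin
  if :new.val < 0 then
    raise_application_error(-20001, 'negative values are not allowed');
  end if;
end my_trigger;

-- Option 2: enforce the rule declaratively with a check constraint
alter table my_table add constraint my_table_val_chk check (val >= 0);

The constraint is usually preferable: it is enforced for every conventional DML path and needs no PL/SQL at all.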

PL/SQL Append_Values Hint gives error message

I am having trouble doing a large number of inserts into an Oracle table using PL/SQL. My query goes row by row, and for each row it makes a calculation to determine the number of rows it needs to insert into another table. The conventional inserts work, but the code takes a long time to run for a large number of rows. To speed up the inserts I tried to use the APPEND_VALUES hint as in the following example:
BEGIN
  FOR iter IN 1..100 LOOP
    INSERT /*+ APPEND_VALUES */ INTO test_append_value_hint VALUES (iter);
  END LOOP;
END;
I get the following error message when doing this:
ORA-12838: cannot read/modify an object after modifying it in parallel
ORA-06512: at line 3
12838. 00000 - "cannot read/modify an object after modifying it in parallel"
*Cause: Within the same transaction, an attempt was made to add read or
modification statements on a table after it had been modified in parallel
or with direct load. This is not permitted.
*Action: Rewrite the transaction, or break it up into two transactions
one containing the initial modification and the second containing the
parallel modification operation.
Does anyone have ideas of how to make this code work, or how to quickly insert large numbers of rows into another table?
You get this error because each of your INSERTs executes as a separate DML statement. Oracle prevents reads and writes on a table to which data were added via direct-path insert until you commit.
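You can reproduce the behaviour outside a loop, too; a minimal sketch (using the same table t as in the example below, assumed to be a plain heap table):

-- direct-path insert succeeds...
insert /*+ append */ into t
select level from dual connect by level <= 10;

-- ...but any further read or write of t in the same transaction fails
select count(*) from t;   -- ORA-12838

commit;
select count(*) from t;   -- works now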
Technically you can use PL/SQL collections and FORALL instead:
declare
  type array_t is table of number index by pls_integer;
  a_t array_t;
begin
  for i in 1..100 loop
    a_t(i) := i;
  end loop;
  forall i in 1..100
    insert /*+ append_values */ into t values (a_t(i));
end;
/
But the question Justin asked still stands: where is your data coming from, and why can't you use the usual INSERT /*+ APPEND */ INTO ... SELECT FROM approach?
Use a commit after each insert, as below:
BEGIN
  FOR iter IN 1..100 LOOP
    INSERT /*+ APPEND_VALUES */ INTO test_append_value_hint VALUES (iter);
    COMMIT;
  END LOOP;
END;
We cannot execute a second DML statement against a table after a direct-path insert without committing the first transaction, and hence this error is thrown.
So commit your previous transaction on that table before continuing with the second one.

Fastest way to insert a million rows in Oracle

How can I insert more than a million rows in Oracle in an optimal way for the following procedure? It hangs if I increase the FOR loop to a million rows.
create or replace procedure inst_prc1 as
  xssn number;
  xcount number;
  l_start number;
  l_end number;
  cursor c1 is select max(ssn) S1 from dtr_debtors1;
begin
  l_start := DBMS_UTILITY.GET_TIME;
  for i in 1..10000 loop
    for c1_rec in c1 loop
      insert into dtr_debtors1 (SSN) values (c1_rec.S1 + 1);
    end loop;
  end loop;
  commit;
  l_end := DBMS_UTILITY.GET_TIME;
  DBMS_OUTPUT.PUT_LINE('The Procedure Start Time is ' || l_start);
  DBMS_OUTPUT.PUT_LINE('The Procedure End Time is ' || l_end);
end inst_prc1;
Your approach will lead to memory issues. The fastest way will be this (query edited after David's comment to take care of the NULL scenario):
insert into dtr_debtors1 (SSN)
select a.S1 + level
  from dual, (select nvl(max(ssn), 0) S1 from dtr_debtors1) a
connect by level <= 10000;
An INSERT ... SELECT is the fastest approach, as everything stays in RAM. The query can become slow if it spills into the global temp area, but then that needs DB tuning. I don't think there can be anything faster than this.
A few more details on memory use by a query:
Each query has its own PGA (Program Global Area), which is basically the RAM available to that query. If this area is not sufficient to return the query results, the SQL engine starts using the global temp tablespace, which is like hard disk, and the query starts becoming slow. If the data needed by the query is so huge that even the temp area is not sufficient, you will get a tablespace error.
So always design a query so that it stays in the PGA; otherwise it's a red flag.
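If you want to check how much PGA the instance is using while such a statement runs, one option (assuming you have SELECT privileges on the V$ views) is:

select name, round(value / 1024 / 1024) as mb
  from v$pgastat
 where name in ('total PGA allocated', 'total PGA inuse');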
Inserting one row at a time with a single insert statement inside a loop is slow. The fastest way is to use INSERT ... SELECT like the following, which generates a million rows and bulk inserts them.
insert into dtr_debtors1(SSN)
select level from dual connect by level <= 1000000;
Try dropping all the indexes created on your table, and then insert using the select query.
1) If you want to insert using PL/SQL, then use BULK COLLECT INTO for the fetch and FORALL for the insert DML (a sketch follows this list).
2) In SQL, use the multi-table INSERT ALL statement.
3) Another method: INSERT INTO <tb_nm> SELECT.
4) Use the SQL*Loader utility.
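As a minimal sketch of option 1 (the source query here just generates numbers and stands in for your real source; the batch size of 10000 is an arbitrary choice):

declare
  cursor c is
    select level from dual connect by level <= 1000000;
  type num_tab is table of number;
  l_rows num_tab;
begin
  open c;
  loop
    fetch c bulk collect into l_rows limit 10000;  -- cap memory used per batch
    exit when l_rows.count = 0;
    forall i in 1 .. l_rows.count
      insert into dtr_debtors1 (ssn) values (l_rows(i));
  end loop;
  close c;
  commit;
end;
/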

Is there any data dictionary object in oracle to record the transaction details of triggers?

I have created trigger TEST_TRIG as below:
CREATE TRIGGER TEST_TRIG
AFTER INSERT ON TEST_TABLE
FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  TEST_PROC;
END;
Procedure TEST_PROC code:
create or replace PROCEDURE TEST_PROC
AS
BEGIN
  EXECUTE IMMEDIATE 'truncate table TEST_FINAL';
  INSERT INTO TEST_FINAL SELECT * FROM TEST_TABLE;
  COMMIT;
END;
Initially, I disabled the trigger TEST_TRIG, inserted a record into TEST_TABLE, and executed the procedure TEST_PROC manually.
Output: I was able to fetch from TEST_FINAL the same record I had inserted into TEST_TABLE.
I flushed those records from both tables and enabled the trigger TEST_TRIG.
Now when I insert and commit a record in TEST_TABLE, I don't find the record in the TEST_FINAL table... and I haven't received any error message either!
So I want to know whether the trigger fired or not.
I don't think you have fully grasped the implications of AUTONOMOUS_TRANSACTION. Effectively it means the code bounded by the pragma runs in a separate session. So, because of Oracle's read-consistent isolation level, the autonomous transaction cannot see any of the data changes generated by the main transaction.
Thus, if TEST_TABLE is empty when you start, the trigger will insert no rows into TEST_FINAL, regardless of how many rows you're inserting right now.
So: don't flush both tables. Insert some rows into TEST_TABLE and commit. TEST_FINAL will still be empty. Insert some more rows into TEST_TABLE and, lo! the first set of rows will appear in TEST_FINAL.
Obviously this is not the result you want. So you need to revisit your logic. It really doesn't make sense to truncate TEST_FINAL every time, and definitely not FOR EACH ROW. That is Teh Suck! as far as performance goes. Likewise, and for the same reason, it doesn't make sense to populate the target table with INSERT ... SELECT.
Discarding the TRUNCATE means you don't need the pragma, and everything becomes much simpler.
If you want to keep a history of the affected rows use something like this instead:
CREATE TRIGGER TEST_TRIG
AFTER INSERT ON TEST_TABLE
FOR EACH ROW
BEGIN
  INSERT INTO test_final (col1, col2)
  VALUES (:new.col1, :new.col2);
END;
You'll need to change the exact code to fit your exact requirements.

use of FOR UPDATE statement

I am using PL/SQL (Oracle 11g) to update the salary column of the EMPLOYEES table.
I have used two separate scripts to do the same thing, i.e. update the salary of employees.
One script uses the FOR UPDATE OF clause, whereas the other script doesn't use it. In both cases I found that Oracle holds the row-level locks until we execute a ROLLBACK or COMMIT command.
So what is the difference between the two scripts?
Which one is better to use?
Here are the two scripts I am talking about:
-- Script 1: Uses FOR UPDATE OF
declare
  cursor cur_emp is
    select employee_id, department_id
      from employees
     where department_id = 90
       for update of salary;
begin
  for rec in cur_emp loop
    update employees
       set salary = salary * 10
     where current of cur_emp;
  end loop;
end;
-- Script 2: Does the same thing as script 1, but FOR UPDATE OF is not used here
declare
  cursor cur_emp is
    select employee_id, department_id
      from employees
     where department_id = 90;
begin
  for rec in cur_emp loop
    update employees
       set salary = salary * 10
     where employee_id = rec.employee_id;
  end loop;
end;
I found that Oracle acquired the row-level locks in both cases. So what is the benefit of using FOR UPDATE OF, and which is the better way of coding?
When you specify FOR UPDATE, the rows are locked at the point that you SELECT the data. Without the FOR UPDATE, each row is locked at the point you UPDATE it. In the second script, another session could potentially lock a row between the time your SELECT executed and the point at which you tried to UPDATE it.
If you are dealing with a SELECT statement that returns relatively few rows and a tight inner loop, it is unlikely that there will be an appreciable difference between the two. Adding a FOR UPDATE to the SELECT also gives you the opportunity to add a timeout clause if you don't want your script to block indefinitely when some other session happens to have locked one of the rows you're trying to update.
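For example, the timeout variants look like this (a sketch against the same EMPLOYEES table):

-- fail immediately with ORA-00054 if any qualifying row is already locked
select employee_id, salary
  from employees
 where department_id = 90
   for update of salary nowait;

-- or give up after waiting up to 5 seconds
select employee_id, salary
  from employees
 where department_id = 90
   for update of salary wait 5;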
