Return REF CURSOR to procedure-generated data - Oracle

I need to write a sproc which performs some INSERTs on a table and compiles a list of "statuses" for each row based on how well the INSERT went. Each row is inserted within a loop; the loop iterates over a cursor that supplies some values for the INSERT statement. What I need to return is a resultset which looks like this:
FIELDS_FROM_ROW_BEING_INSERTED.., STATUS VARCHAR2
The STATUS is determined by how the INSERT went. For instance, if the INSERT caused a DUP_VAL_ON_INDEX exception indicating there was a duplicate row, I'd set the STATUS to "Dupe". If all went well, I'd set it to "SUCCESS" and proceed to the next row.
By the end of it all, I'd have a resultset of N rows, where N is the number of insert statements performed, and each row contains some identifying info for the row being inserted, along with the "STATUS" of the insertion.
Since there is no table in my DB to store the values I'd like to pass back to the user, I'm wondering how I can return the info. A temporary table? It seems that in Oracle temporary tables are "global", and I'm not sure I want a global table. Are there any temporary tables that get dropped after a session is done?

If you are using Oracle 10gR2 or later then you should check out DML error logging. This does essentially what you want to achieve: it allows us to execute all the DML in a batch process by recording any errors and pressing on with the remaining statements.
The principle is that we create an error log table for each table we need to work with, using the built-in package DBMS_ERRLOG. A simple extension to the DML syntax, the LOG ERRORS clause, then routes failed rows to the error log table. This approach doesn't create any more objects than your proposal, and has the merit of using standard Oracle functionality.
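For illustration, a minimal sketch of the mechanism (the table names TGT and SRC are made up for the example):
BEGIN
  DBMS_ERRLOG.create_error_log(dml_table_name => 'TGT');  -- creates ERR$_TGT
END;
/
INSERT INTO tgt
SELECT * FROM src
LOG ERRORS INTO err$_tgt ('batch 1') REJECT LIMIT UNLIMITED;
Each failed row lands in ERR$_TGT, with ORA_ERR_MESG$ describing the failure, so the statement keeps going instead of rolling back.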
When working with bulk processing (that is, when using the FORALL syntax) we can trap exceptions using the built-in SQL%BULK_EXCEPTIONS collection. It is possible to combine bulk exceptions with DML error logging, but that may create problems in 11g.
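A hedged sketch of the bulk-exceptions approach (the table and collection below are illustrative, not from the question):
DECLARE
  TYPE t_vals IS TABLE OF target_table.val%TYPE;
  l_vals      t_vals := t_vals('a', 'b', 'c');
  bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_errors, -24381);  -- ORA-24381: error(s) in array DML
BEGIN
  FORALL i IN 1 .. l_vals.COUNT SAVE EXCEPTIONS
    INSERT INTO target_table (val) VALUES (l_vals(i));
EXCEPTION
  WHEN bulk_errors THEN
    -- one entry per failed row; the rest of the batch still went in
    FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      DBMS_OUTPUT.put_line('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX
                        || ' failed with ORA-' || SQL%BULK_EXCEPTIONS(j).ERROR_CODE);
    END LOOP;
END;
/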

"Global" in the case of temporary tables just means they are permanent, it's the data which is temporary.
I would define a record type that matches your cursor, plus the status field. Then define a table of that type.
TYPE t_record IS RECORD
(
  field_1  <datatype>,
  ...
  field_n  <datatype>,
  status   VARCHAR2(30)
);
TYPE t_table IS TABLE OF t_record;
FUNCTION insert_records
(
p_rows_to_insert IN SYS_REFCURSOR
)
RETURN t_table;
Even better would be to also define the inputs as a table type instead of a cursor.
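A fuller sketch of how the pieces could hang together (the target table, its columns, and the DUP_VAL_ON_INDEX handling are illustrative assumptions, not taken from the question):
CREATE OR REPLACE PACKAGE insert_pkg AS
  TYPE t_record IS RECORD
  (
    id     NUMBER,          -- illustrative identifying column
    status VARCHAR2(30)
  );
  TYPE t_table IS TABLE OF t_record;
  FUNCTION insert_records (p_rows_to_insert IN SYS_REFCURSOR)
    RETURN t_table;
END insert_pkg;
/
CREATE OR REPLACE PACKAGE BODY insert_pkg AS
  FUNCTION insert_records (p_rows_to_insert IN SYS_REFCURSOR)
    RETURN t_table
  IS
    l_row    target_table%ROWTYPE;   -- assumes the cursor's shape matches the table
    l_result t_table := t_table();
  BEGIN
    LOOP
      FETCH p_rows_to_insert INTO l_row;
      EXIT WHEN p_rows_to_insert%NOTFOUND;
      l_result.EXTEND;
      l_result(l_result.LAST).id := l_row.id;
      BEGIN
        INSERT INTO target_table VALUES l_row;
        l_result(l_result.LAST).status := 'SUCCESS';
      EXCEPTION
        WHEN DUP_VAL_ON_INDEX THEN
          l_result(l_result.LAST).status := 'Dupe';
      END;
    END LOOP;
    CLOSE p_rows_to_insert;
    RETURN l_result;
  END insert_records;
END insert_pkg;
/
The caller opens a ref cursor over the rows to load and receives one record per attempted insert, status included.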

Related

Problems inserting data into Oracle table with sequence column via SSIS

I am doing a data insert into a table in Oracle which has a sequence populating one of the columns, say the Id column. I would like to know how to do data loads into such tables.
I followed the below link -
It's possible to use OleDbConnections with the Script Component?
and tried to create a function to get the .nextval from the Oracle sequence, but I am getting the following error -
Error while trying to retrieve text for error ORA-01019
I realized that manually setting the value in the package, i.e. by using the Script task to enumerate the values, does not increment the sequence, and that is causing the problem. How do we deal with it? Any links that can help me solve it?
I am using SSIS 2014 but I am not able to tag it as such due to a paucity of reputation points.
I created a workaround to cater to this problem. I created staging tables matching the destination tables, minus the column that takes the sequence Id. After the data gets inserted, I call a SQL statement to move the data from the staging table into the main table using the .nextval function, and finally truncate/drop the staging table depending on the need. It would still be interesting to know how this same thing can be handled via script rather than having this workaround.
For instance something like below -
insert into table_main
select table_main_sequence_name.nextval
     , s.*
from (
  select *
  from table_stg
) s;
ORA-01019 may be related to the fact that you have multiple Oracle clients installed. Please check that your ORACLE_HOME variable points to only one client.
One workaround I'm thinking about is creating two stored routines for handling the sequence. One to get the value you start with:
create or replace function get_first_from_seq return number as
  seqid number;
begin
  select seq_name.nextval into seqid from dual;
  return seqid;
end;
/
Then do your incrementing in the script. After that, call the second procedure to increment the sequence:
create or replace procedure setseq(val number) as
begin
  -- note: the new increment only takes effect on the next NEXTVAL call,
  -- and it stays at VAL until the sequence is altered back to 1
  execute immediate 'ALTER SEQUENCE seq_name INCREMENT BY ' || val;
end;
/
This is not a good approach, but maybe it will solve your problem.

Oracle PL/SQL Select all Columns from Trigger's :NEW

I have a trigger that calls a stored procedure when activated, passing :NEW values as a parameter. I have about 40 tables that use the same trigger logic, and I would like to use the same code for each trigger. Therefore, I am trying to pass all columns of the new row. My code is below and shows what I am attempting to do (however, the problem is that :NEW.* is not a valid expression):
CREATE OR REPLACE TRIGGER "TRIG_TEST_TRIGGER"
AFTER INSERT OR DELETE OR UPDATE ON TRIG_TEST
FOR EACH ROW
DECLARE
BEGIN
MY_STORED_PROC('Trigger Activated: ' || :NEW.*);
END;
Most likely, you can't.
You could write a procedure that uses dynamic SQL to generate the appropriate trigger code for each table. Of course, that would require that you re-run the procedure to re-create the trigger every time the table changes.
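For what it's worth, a hedged sketch of such a generator (MY_STORED_PROC and the trigger naming are assumptions carried over from the question; quoted identifiers, LOB columns, and name-length limits would need extra care):
create or replace procedure generate_log_trigger (p_table in varchar2) as
  l_cols varchar2(32767);
begin
  -- build one :NEW reference per column, comma-separated
  for c in (select column_name
            from   user_tab_columns
            where  table_name = upper(p_table)
            order  by column_id)
  loop
    if l_cols is not null then
      l_cols := l_cols || ' || '','' || ';
    end if;
    l_cols := l_cols || ':NEW.' || c.column_name;
  end loop;
  execute immediate
       'create or replace trigger trg_' || p_table || '_log'
    || ' after insert or delete or update on ' || p_table
    || ' for each row'
    || ' begin'
    || '   MY_STORED_PROC(''Trigger Activated: '' || ' || l_cols || ');'
    || ' end;';  -- note: :NEW values are null for DELETE
end;
/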
I'm a bit hard-pressed, though, to imagine what my_stored_proc might be doing that would make it sensible to pass it a string representing every column from one of 40 tables with, presumably, 40 different sets of columns. If you're writing to a log table and want the data from every column, that generally implies that you want to be able to see the evolution of a particular row over time. But that is extremely hard to do if your log table just has strings in all sorts of different formats from many different tables, since you'd constantly have to parse the strings you logged.

Batch insert: is there a way to just skip to the next record when a constraint is violated?

I am using MyBatis to perform a massive batch insert on an Oracle DB.
My process is very simple: I am taking records from a list of files and inserting them into a specific table after performing some checks on the data.
- Each file contains an average of 180,000 records, and I can have more than one file.
- Some records can be present in more than one file.
- A record is identical to another one if EVERY column matches; in other words, I cannot simply perform a check on a specific field. I have defined a constraint in my DB which makes sure this condition is satisfied.
To put it simply, I want to just ignore the constraint exception Oracle gives me in case that constraint is violated.
Record is not present?-->insert
Record is already present?--> go ahead
Is this possible with MyBatis? Or can I accomplish something at the DB level?
I have control over both the application server and the DB, so please tell me what's the most efficient way to accomplish this task (even though I'd like to avoid being too DB-dependent...).
Of course, I'd like to avoid performing a SELECT * before each insertion... given the number of records I am dealing with, it would ruin my application's performance.
Use the IGNORE_ROW_ON_DUPKEY_INDEX hint:
insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(table_name index_name) */
into table_name
select * ...
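For instance, against a hypothetical table MY_TABLE whose duplicates are policed by the unique index MY_TABLE_UK (the hint needs the real table and index names, and is available from 11g onwards):
INSERT /*+ IGNORE_ROW_ON_DUPKEY_INDEX(my_table my_table_uk) */
INTO my_table (col1, col2)
SELECT col1, col2
FROM   staging_table;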
I'm not sure about JDBC, but at least in OCI it is possible. With batch operations you pass vectors as bind variables, and you get back vector(s) of returned IDs as well as a vector of error codes.
You can also use MERGE on the database server side together with custom collection types. Something like:
merge into t
using ( select * from TABLE(:var) ) v
on ( v.id = t.id )
when not matched then insert ...
Where :var is a bind variable of SQL type TABLE OF <recordname>.
The keyword "TABLE" is the operator used to cast the collection bind variable into a table expression.
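The SQL types backing such a bind variable could look like this (the names are made up); they must be created at SQL level with CREATE TYPE, not declared in PL/SQL, so that the client can bind them:
CREATE TYPE t_row AS OBJECT (id NUMBER, val VARCHAR2(100));
/
CREATE TYPE t_tab AS TABLE OF t_row;
/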
Another option is to use the SQL error logging clause:
DBMS_ERRLOG.create_error_log (dml_table_name => 't');
insert into t(...) values(...) log errors reject limit unlimited;
Then after the load you will have to truncate the error logging table err$_t.
Another option would be to use external tables.
It looks like any of these solutions is quite a lot of work compared to using sqlldr.
Ignore errors with an error log table:
insert
into table_name
select *
from selected_table
LOG ERRORS INTO SANJI.ERROR_LOG('some comment' )
REJECT LIMIT UNLIMITED;
and the error table schema is:
CREATE GLOBAL TEMPORARY TABLE SANJI.ERROR_LOG (
  ora_err_number$ number,
  ora_err_mesg$   varchar2(2000),
  ora_err_rowid$  rowid,
  ora_err_optyp$  varchar2(2),
  ora_err_tag$    varchar2(2000),
  n1              varchar2(128)
)
ON COMMIT PRESERVE ROWS;

Oracle PL/SQL: Calling a procedure from a trigger

I get this error whenever I try to fire a trigger after insert on the passengers table. The trigger is supposed to call a procedure that takes two parameters from the newly inserted values and, based on those, updates another table, the booking table. However, I am getting this error:
ORA-04091: table AIRLINESYSTEM.PASSENGER is mutating, trigger/function may not see it
ORA-06512: at "AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE", line 11
ORA-06512: at "AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE", line 15
ORA-06512: at "AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE_T1", line 3
ORA-04088: error during execution of trigger 'AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE_T1' (Row 3)
I compiled and tested the procedure at the SQL command line and it works fine. The problem seems to be with the trigger. This is the trigger code:
create or replace trigger "CALCULATE_FLIGHT_PRICE_T1"
after insert on "PASSENGER"
for each row
begin
  CALCULATE_FLIGHT_PRICE(:NEW.BOOKING_ID);
end;
Why isn't the trigger calling the procedure?
You are using database triggers in a way they are not supposed to be used: the trigger tries to read the table it is currently modifying. If Oracle allowed you to do that, you'd be performing dirty reads.
Fortunately, Oracle warns you about your behaviour, and you can modify your design.
The best solution would be to create an API. A procedure, preferably in a package, that allows you to insert passengers in exactly the way you would like it. In pseudo-PL/SQL-code:
procedure insert_passenger
( p_passenger_nr in number
, p_passenger_name in varchar2
, ...
, p_booking_id in number
, p_dob in number
)
is
begin
insert into passenger (...)
values
( p_passenger_nr
, p_passenger_name
, ...
, p_booking_id
, p_dob
);
calculate_flight_price
( p_booking_id
, p_dob
);
end insert_passenger;
/
Instead of your insert statement, you would now call this procedure. And your mutating table problem will disappear.
If you insist on using a database trigger, then you need to avoid the select statement in cursor c_passengers. It doesn't make any sense anyway: you have just inserted a row into table passengers and know all the column values, yet you call calculate_flight_price to retrieve the column DOB, which you already know.
Just add a parameter P_DOB to your calculate_flight_price procedure and call it with :new.dob, like this:
create or replace trigger calculate_flight_price_t1
after insert on passenger
for each row
begin
calculate_flight_price
( :new.booking_id
, :new.dob
);
end;
Oh my goodness... You are trying a Dirty Read in the cursor. This is a bad design.
If you allow a dirty read, it can return the wrong answer; worse, it can return an answer that never existed in the table. In a multiuser database, a dirty read is a dangerous feature.
The point here is that a dirty read is not a feature; rather, it's a liability. In Oracle Database it's simply not needed: you get the main advantage of a dirty read (no blocking) without any of the incorrect results.
Read up on the "READ UNCOMMITTED" isolation level, which is the standards-based definition that allows dirty, non-blocking reads.
The other way round
You are misusing the trigger; the wrong kind of trigger is being used.
You insert/update a row in table A, and a trigger on table A (for each row) executes a query on table A (through a procedure)?!
Oracle throws ORA-04091, which is expected and normal behaviour: Oracle wants to protect you from yourself, since it guarantees that each statement is atomic (it will either fail or succeed completely) and also that each statement sees a consistent view of the data.
You would expect the query (2) not to see the row inserted in (1). That would be a contradiction.
Solution: use BEFORE instead of AFTER.
CREATE OR REPLACE TRIGGER SOMENAME
BEFORE INSERT OR UPDATE ON SOMETABLE

trigger insert and update oracle error

Friends, I have a question about cascading triggers.
I have 2 tables: table data, which has 3 attributes (id_data, sum, and id_tool), and table tool, which has 3 attributes (id_tool, name, sum_total). Tables data and tool are joined using id_tool.
I want to create a trigger to keep sum_total up to date. So, if I insert into table data, sum_total in table tool (where tool.id_tool = data.id_tool) should be updated too.
I created this trigger, but I get error ORA-04090.
create or replace trigger aft_ins_tool
after insert on data
for each row
declare
  v_stok number;
  v_jum  number;
begin
  select sum into v_jum
  from data
  where id_data = :new.id_data;
  select sum_total into v_stok
  from tool
  where id_tool = (select id_tool
                   from data
                   where id_data = :new.id_data);
  if inserting then
    v_stok := v_stok + v_jum;
    update tool
    set sum_total = v_stok
    where id_tool = (select id_tool
                     from data
                     where id_data = :new.id_data);
  end if;
end;
/
Please give me your opinion. Thanks.
The ora-04090 indicates that you already have an AFTER INSERT ... FOR EACH ROW trigger on that table. Oracle doesn't like that, because the order in which the triggers fire is unpredictable, which may lead to unpredictable results, and Oracle really doesn't like those.
So, your first step is to merge the two sets of code into a single trigger. Then the real fun begins.
Presumably there is only one row in data matching the current value of id_data (if not, your data model is really messed up and there's no hope for your situation). Anyway, that means the current row already gives you access to the values of :new.sum and :new.id_tool. So you don't need those queries on the data table at all: removing those selects removes the possibility of "mutating table" errors, as sketched below.
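A minimal rewrite along those lines (assuming the two conflicting triggers have already been merged into one) might look like this:
create or replace trigger aft_ins_tool
after insert on data
for each row
begin
  -- no queries against DATA: the inserted row is already available in :new
  update tool
  set    sum_total = sum_total + :new.sum
  where  id_tool   = :new.id_tool;
end;
/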
As a general observation, maintaining aggregate or summary tables like this is generally a bad idea. Usually it is better just to query the information when it is needed. If you really have huge volumes of data then you should use a materialized view to maintain the summary, rather than hand-rolling something.
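By way of illustration only, a fast-refresh materialized view for this case might look like the sketch below; the exact fast-refresh prerequisites vary by Oracle version, and the COUNT columns plus the materialized view log are part of those requirements:
CREATE MATERIALIZED VIEW LOG ON data
  WITH ROWID, SEQUENCE (id_tool, sum) INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW tool_totals
  REFRESH FAST ON COMMIT
AS
SELECT d.id_tool,
       SUM(d.sum)   AS sum_total,
       COUNT(d.sum) AS sum_cnt,  -- needed for fast refresh of SUM
       COUNT(*)     AS row_cnt
FROM   data d
GROUP BY d.id_tool;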
