How can I ignore a dup key error with SQL*Plus? - Oracle

I want to handle the SQL*Plus error that is raised when Oracle finds a record, or several records, that already exist, and simply ignore it/them. This is an example:
sqlError=`egrep "ORA-[0-9][0-9][0-9][0-9][0-9]" ${FILE_SPOOL_DAT} | awk '{print $0}';`
if test ! -f ${FILE_SPOOL_DAT}
then
echo "Error request " >> ${FILE_SPOOL_DAT}
else
if [ ! "$sqlError" = "" ] #controls if the variable $sqlError contains a value different from spaces, i think this is the point to change
then
echo "Error $sqlError" >> ${FILE_SPOOL_DAT}
fi
fi
In this example the script checks whether the variable $sqlError is non-empty. How can I change this condition to trap only the dup key error? Thanks

If you're using 11g, the IGNORE_ROW_ON_DUPKEY_INDEX hint can help.
SQL> create table table1(a number primary key);
Table created.
SQL> insert into table1 values(1);
1 row created.
SQL> insert into table1 values(1);
insert into table1 values(1)
*
ERROR at line 1:
ORA-00001: unique constraint (JHELLER.SYS_C00810741) violated
SQL> insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(table1(a))*/ into table1 values(1);
0 rows created.
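Back in the shell script, you could then treat the duplicate-key error as the one to ignore. A minimal sketch, assuming duplicates show up in the spool file as ORA-00001 (the variable names below are illustrative, not from the original script):
dupError=`grep "ORA-00001" ${FILE_SPOOL_DAT}`
otherError=`egrep "ORA-[0-9][0-9][0-9][0-9][0-9]" ${FILE_SPOOL_DAT} | grep -v "ORA-00001"`
if [ -n "$otherError" ]
then
    echo "Error $otherError" >> ${FILE_SPOOL_DAT}   # genuine failure: report it
elif [ -n "$dupError" ]
then
    :                                               # duplicate key only: silently ignore
fi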

Yikes, that's dangerous! If Oracle finds one or more records that already exist, it will normally roll back the statement. Suppressing the relevant error messages afterwards is way too late.
A much better approach is to instruct Oracle to ignore duplicate records directly during the INSERT, for instance by using the MERGE command:
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_9016.htm#i2081218
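A minimal sketch, reusing table1(a) from the answer above: a MERGE with only a WHEN NOT MATCHED clause inserts new rows and silently skips ones that already exist.
MERGE INTO table1 t
USING (SELECT 1 AS a FROM dual) src
ON (t.a = src.a)
WHEN NOT MATCHED THEN
  INSERT (a) VALUES (src.a);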

When you get a duplicate key error it really means you have violated a constraint set on a table.
Assuming whoever designed the table understood the data model, what you want to do is a BAD idea.
If it is a bad design, change the table's metadata by removing the constraint.
The MERGE command may let you work around it (which I doubt), but with the subsequent data mess left behind I do not believe it is worth the risk.
If you are just playing, remove the constraint anyway. I don't know how your table is set up,
but you will probably have to ALTER or DROP and CREATE an index, or ALTER the table.
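For instance (the object names here are hypothetical), removing such a constraint typically looks like:
ALTER TABLE table1 DROP CONSTRAINT table1_uk;
-- or, if the rule is enforced by a standalone unique index:
DROP INDEX table1_uk_idx;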
If this is development for production and thus paid work, don't try to get around the constraint without talking to the person(s) who designed the table to start with.

Related

Constraint stops DELETE in my own procedure, yet same DELETE runs in SQLPlus

On running two DELETE statements inside my own Package.Procedure:
DELETE FROM ENT_PLANT_RELATIVE WHERE CHILD_ID = plant_id;
DELETE FROM ENT_PLANT_ITEM WHERE PLANT_ID = plant_id;
... I receive an integrity constraint violation on the second DELETE (first works correctly):
SQL> call P.DeletePlantItem(112808,1,:err);
ORA-02292: integrity constraint
(XXXXX.EAITA_PLANT_ITEM_ID_FK) violated - child record found
There are no child records containing the FK in the table containing the constraint of that name:
SQL> select count(plant_item_id) from ent_application_item_to_access
where plant_item_id = 112808;
COUNT(PLANT_ITEM_ID)
--------------------
0
... and there is only one column in that table using PLANT_ITEM_ID.
On running the suspect DELETE statement directly (copied from the procedure) in SQL*Plus (from which I call the procedure), it works as expected.
I have:
- disabled the first integrity constraint shown, only to have another similar constraint appear on another table, similarly not containing the FK
- disabled the second similar integrity constraint shown... ditto
- inserted a single record into the master table and the one child table I am trying to delete from, and tried again, with results identical to previously existing records
- manually inserted a single item into ENT_PLANT_ITEM, commented out the unnecessary DELETE in the procedure, and tried to call it: same results
- tried a COMMIT immediately after the first (successful) deletion from the child table (ENT_PLANT_RELATIVE)
- used put_line to obtain the SQLERRM contents shown above identifying the constraint
- checked for the existence of triggers inserting records (unnecessarily, I feel, as no child data exists in the table showing the constraint issue)
- used IF to logically break the procedure into two separate procedures handling the two DELETE statements separately: same results
- checked privileges as much as I know how, although I assume that because I can DELETE from one context (SQL*Plus), and could create the Package and Procedure in the first place, I should be able to from the Procedure (call)
- also assumed that SQLERRM would report a privilege error
- read the online Oracle documentation on constraints
- read as many Stack Overflow questions as I could find that refer to constraints and referential integrity
The package definition is:
CREATE OR REPLACE PACKAGE P AS
PROCEDURE InsertPlantItem(plant_desc IN VARCHAR2, parent_desc IN VARCHAR2, test IN NUMBER, err OUT VARCHAR2);
PROCEDURE DeletePlantItem(plant_id IN NUMBER, test IN NUMBER, err OUT VARCHAR2);
END P;
In summary, why can I delete row x in table y from the SQL*Plus command line but not via a stored procedure I created because of a constraint in a table I don't have any keys in?
The problem was indeed in the second DELETE statement:
DELETE FROM ENT_PLANT_ITEM WHERE PLANT_ID = plant_id;
I had created an IN parameter called plant_id and was passing in a 6-digit number, not realising I was effectively asking the procedure to delete every item in the database wherever a constraint wasn't going to stop it. It's rather more obvious in all capitals:
DELETE FROM ENT_PLANT_ITEM WHERE PLANT_ID = PLANT_ID;
Inside an embedded SQL statement, a column name takes precedence over a PL/SQL parameter of the same name.
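Two standard scoping fixes, sketched for illustration (neither appears in the original answer): give parameters a distinguishing prefix, or qualify the parameter with the procedure name.
-- Option 1: a naming convention for parameters
PROCEDURE DeletePlantItem(p_plant_id IN NUMBER, test IN NUMBER, err OUT VARCHAR2) IS
BEGIN
  DELETE FROM ENT_PLANT_ITEM WHERE PLANT_ID = p_plant_id;
END DeletePlantItem;

-- Option 2: qualify the parameter with the enclosing procedure's name
DELETE FROM ENT_PLANT_ITEM WHERE PLANT_ID = DeletePlantItem.plant_id;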

Trigger can't read the table after being fired by the same table

Let's say I have a table as follows:
create table employees
(
eno number(4) not null primary key,
ename varchar2(30),
zip number(5) references zipcodes,
hdate date
);
I've created a trigger using the following code block
create or replace TRIGGER COPY_LAST_ONO
AFTER INSERT ON ORDERS
FOR EACH ROW
DECLARE
ID_FROM_ORDER_TABLE VARCHAR2(10);
BEGIN
SELECT MAX(ORDERS.ONO) INTO ID_FROM_ORDER_TABLE FROM ORDERS;
DBMS_OUTPUT.PUT_LINE(ID_FROM_ORDER_TABLE);
INSERT INTO BACKUP_ONO VALUES (VALUE1, VALUE2, VALUE3, ID_FROM_ORDER_TABLE);
END;
The trigger fires after insertion and attempts to read from the table that fired it (logically, duh!), but Oracle gives me an error asking me to modify the trigger so that it doesn't read the table. Error code:
Error report -
SQL Error: ORA-04091: table TEST1.ORDERS is mutating, trigger/function may not see it
ORA-06512: at "TEST1.COPY_LAST_ONO", line 8
ORA-04088: error during execution of trigger 'TEST1.LOG_INSERT'
04091. 00000 - "table %s.%s is mutating, trigger/function may not see it"
*Cause: A trigger (or a user defined plsql function that is referenced in
this statement) attempted to look at (or modify) a table that was
in the middle of being modified by the statement which fired it.
*Action: Rewrite the trigger (or function) so it does not read that table.
What I'm trying to achieve with this trigger is to copy the last inserted ONO (which is a primary key for the ORDERS table) to a different table immediately after it is inserted. What I don't get is: why is Oracle complaining? The trigger is attempting to read AFTER the insertion!
Ideas? Solution?
MANY THANKS
If you are trying to log the ONO you just inserted, use :new.ono and skip the select altogether:
INSERT INTO BACKUP_ONO VALUES( VALUE1, VALUE2,VALUE3, :new.ono);
I don't believe you can select from the table you are in the middle of inserting into, as the commit has not been issued yet; hence the mutating table error.
P.S. Consider not abbreviating. Make it clear for the next developer and call it ORDER_NUMBER or at least a generally accepted abbreviation like ORDER_NBR, whatever your company's naming standards are. :-)
FYI - If you are updating, you can access :OLD.column as well, the value before the update (of course if the column is not a primary key column).
Amplifying @Gary_W's answer:
Oracle does not allow a row trigger (one with FOR EACH ROW in it) to access the table on which the trigger is defined in any way: you can't issue a SELECT, INSERT, UPDATE, or DELETE against that table from within the trigger or anything it calls (so, no, you can't dodge around this by calling a stored procedure which does the dirty work for you - but good thinking! :-). My understanding is that this is done to prevent what you might call a "trigger loop": the triggering condition is satisfied and the trigger's PL/SQL block is executed; that block then does something which causes the trigger to be fired again; the trigger's PL/SQL block is invoked; the trigger's code modifies another row; etc, ad infinitum.
Generally, this should be taken as a warning that your logic is either really ugly, or you're implementing it in the wrong place. (See here for info on the evil of business logic in triggers.) If you find that you really seriously need to do this (I've worked with Oracle and other databases for years - I've really had to do it once - and may Cthulhu have mercy upon my soul :-) you can use a compound trigger, which allows you to work around these issues - but seriously, if you're in a hole like this your best option is to re-work the data so you don't have to do this.
Best of luck.
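For reference, a minimal compound-trigger sketch of the original requirement, assuming BACKUP_ONO can take just the ONO value (the question's VALUE1..VALUE3 placeholders are omitted):
CREATE OR REPLACE TRIGGER copy_last_ono
FOR INSERT ON orders
COMPOUND TRIGGER
  TYPE ono_tab IS TABLE OF orders.ono%TYPE;
  g_onos ono_tab := ono_tab();          -- state lives only for the triggering statement

  AFTER EACH ROW IS
  BEGIN
    g_onos.EXTEND;                      -- remember each new ONO; no SELECT on ORDERS
    g_onos(g_onos.COUNT) := :new.ono;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    FOR i IN 1 .. g_onos.COUNT LOOP     -- ORDERS is no longer mutating at this point
      INSERT INTO backup_ono (ono) VALUES (g_onos(i));
    END LOOP;
  END AFTER STATEMENT;
END copy_last_ono;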
Modify your trigger to use PRAGMA AUTONOMOUS_TRANSACTION
create or replace TRIGGER COPY_LAST_ONO
AFTER INSERT ON ORDERS
FOR EACH ROW
DECLARE
ID_FROM_ORDER_TABLE VARCHAR2(10);
PRAGMA AUTONOMOUS_TRANSACTION; -- Modification
BEGIN
.
.
.

Batch insert: is there a way to just skip to the next record when a constraint is violated?

I am using mybatis to perform a massive batch insert on an oracle DB.
My process is very simple: I am taking records from a list of files and inserting them into a specific table after performing some checks on the data.
- Each file contains an average of 180,000 records, and I can have more than one file.
- Some records can be present in more than one file.
- A record is identical to another one if EVERY column matches; in other words, I cannot simply perform a check on a specific field. I have defined a constraint in my DB which makes sure this condition is satisfied.
To put it simply, I want to just ignore the exception Oracle will give me when that constraint is violated.
Record is not present?-->insert
Record is already present?--> go ahead
Is this possible with mybatis? Or can I accomplish something at the DB level?
I have control over both the Application Server and the DB, so please tell me the most efficient way to accomplish this task (even though I'd like to avoid being too DB-dependent...).
Of course, I'd like to avoid performing a SELECT * before each insertion; given the number of records I am dealing with, it would ruin my application's performance.
Use the IGNORE_ROW_ON_DUPKEY_INDEX hint:
insert /*+ IGNORE_ROW_ON_DUPKEY_INDEX(table_name index_name) */
into table_name
select * ...
I'm not sure about JDBC, but at least in OCI it is possible. With batch operations you pass vectors as bind variables and you also get back vector(s) of returned IDs and also a vector of error codes.
You can also use MERGE on the database server side together with custom collection types. Something like:
merge into t
using ( select * from TABLE(:var) v)
on ( v.id = t.id )
when not matched then insert ...
Where :var is a bind variable of SQL type TABLE OF <recordname>.
The word "TABLE" is a construct used to cast the bind variable into a table.
Another option is to use the DML error logging clause:
DBMS_ERRLOG.create_error_log (dml_table_name => 't');
insert into t(...) values(...) log errors reject limit unlimited;
Then, after the load, you will have to truncate the error logging table (err$_t is the default name DBMS_ERRLOG generates for table t).
Another option would be to use external tables.
It looks like any of these solutions is quite a lot of work compared to just using sqlldr.
Ignore errors with an error table:
insert
into table_name
select *
from selected_table
LOG ERRORS INTO SANJI.ERROR_LOG ('some comment')
REJECT LIMIT UNLIMITED;
and the error table schema is:
CREATE GLOBAL TEMPORARY TABLE SANJI.ERROR_LOG (
ora_err_number$ number,
ora_err_mesg$ varchar2(2000),
ora_err_rowid$ rowid,
ora_err_optyp$ varchar2(2),
ora_err_tag$ varchar2(2000),
n1 varchar2(128)
)
ON COMMIT PRESERVE ROWS;
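After the batch, a quick look at what was rejected might be (sketch, using the columns defined above; the tag column holds the comment passed to LOG ERRORS):
SELECT ora_err_number$, ora_err_mesg$, n1
FROM SANJI.ERROR_LOG
WHERE ora_err_tag$ = 'some comment';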

Oracle PL/SQL: Calling a procedure from a trigger

I get this error whenever I try to fire a trigger after insert on the passengers table. The trigger is supposed to call a procedure that takes two parameters from the newly inserted values and, based on those, updates another table, the booking table. However, I am getting this error:
ORA-04091: table AIRLINESYSTEM.PASSENGER is mutating, trigger/function may not see it
ORA-06512: at "AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE", line 11
ORA-06512: at "AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE", line 15
ORA-06512: at "AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE_T1", line 3
ORA-04088: error during execution of trigger 'AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE_T1' (Row 3)
I compiled and tested the procedure at the SQL command line and it works fine. The problem seems to be with the trigger. This is the trigger code:
create or replace trigger "CALCULATE_FLIGHT_PRICE_T1"
AFTER
insert on "PASSENGER"
for each row
begin
CALCULATE_FLIGHT_PRICE(:NEW.BOOKING_ID);
end;
Why isn't the trigger calling the procedure?
You are using database triggers in a way they are not supposed to be used. The database trigger tries to read the table it is currently modifying. If Oracle allowed you to do so, you'd be performing dirty reads.
Fortunately, Oracle warns you about this behaviour, and you can modify your design.
The best solution would be to create an API. A procedure, preferably in a package, that allows you to insert passengers in exactly the way you would like it. In pseudo-PL/SQL-code:
procedure insert_passenger
( p_passenger_nr in number
, p_passenger_name in varchar2
, ...
, p_booking_id in number
, p_dob in number
)
is
begin
insert into passenger (...)
values
( p_passenger_nr
, p_passenger_name
, ...
, p_booking_id
, p_dob
);
calculate_flight_price
( p_booking_id
, p_dob
);
end insert_passenger;
/
Instead of your insert statement, you would now call this procedure. And your mutating table problem will disappear.
If you insist on using a database trigger, then you would need to avoid the select statement in cursor c_passengers. This doesn't make any sense: you have just inserted a row into table passengers and know all the column values. Then you call calculate_flight_price to retrieve the column DOB, which you already know.
Just add a parameter P_DOB to your calculate_flight_price procedure and call it with :new.dob, like this:
create or replace trigger calculate_flight_price_t1
after insert on passenger
for each row
begin
calculate_flight_price
( :new.booking_id
, :new.dob
);
end;
Oh my goodness... You are trying a dirty read in the cursor. This is a bad design.
If you allow a dirty read, it can return the wrong answer; worse, it can return an answer that never existed in the table. In a multiuser database, a dirty read is a dangerous feature.
The point here is that a dirty read is not a feature; rather, it's a liability. In Oracle Database, it's just not needed: you get all of the advantages of a dirty read (no blocking) without any of the incorrect results.
Read up on the READ UNCOMMITTED isolation level, which allows dirty reads and provides a standards-based definition for nonblocking reads.
The other way round
You are misusing the trigger; I mean, the wrong trigger type is used.
You insert/update a row in table A, and a trigger on table A (for each row) executes a query on table A (through a procedure)?!
Oracle throws ORA-04091, which is expected and normal behavior. Oracle wants to protect you from yourself, since it guarantees that each statement is atomic (it will either fail or succeed completely) and also that each statement sees a consistent view of the data.
You would expect the query not to see the row just inserted; seeing it would contradict statement-level read consistency.
Solution: use BEFORE instead of AFTER:
CREATE OR REPLACE TRIGGER SOMENAME
BEFORE INSERT OR UPDATE ON SOMETABLE

ORA-04091: table [blah] is mutating, trigger/function may not see it

I recently started working on a large complex application, and I've just been assigned a bug due to this error:
ORA-04091: table SCMA.TBL1 is mutating, trigger/function may not see it
ORA-06512: at "SCMA.TRG_T1_TBL1_COL1", line 4
ORA-04088: error during execution of trigger 'SCMA.TRG_T1_TBL1_COL1'
The trigger in question looks like
create or replace TRIGGER TRG_T1_TBL1_COL1
BEFORE INSERT OR UPDATE OF t1_appnt_evnt_id ON TBL1
FOR EACH ROW
WHEN (NEW.t1_prnt_t1_pk is not null)
DECLARE
v_reassign_count number(20);
BEGIN
select count(t1_pk) INTO v_reassign_count from TBL1
where t1_appnt_evnt_id=:new.t1_appnt_evnt_id and t1_prnt_t1_pk is not null;
IF (v_reassign_count > 0) THEN
RAISE_APPLICATION_ERROR(-20013, 'Multiple reassignments not allowed');
END IF;
END;
The table has a primary key "t1_pk", an "appointment event id"
t1_appnt_evnt_id and another column "t1_prnt_t1_pk" which may or may
not contain another row's t1_pk.
It appears the trigger is trying to make sure that, if this row refers to another row, nobody else with the same t1_appnt_evnt_id is already referring to one.
The comment on the bug report from the DBA says "remove the trigger, and perform the check in the code", but unfortunately they have a proprietary code-generation framework layered on top of Hibernate, so I can't even figure out where the check would actually get written. So I'm hoping that there is a way to make this trigger work. Is there?
I think I disagree with your description of what the trigger is trying to
do. It looks to me like it is meant to enforce this business rule: For a
given value of t1_appnt_event, only one row can have a non-NULL value of
t1_prnt_t1_pk at a time. (It doesn't matter if they have the same value in the second column or not.)
Interestingly, it is defined for UPDATE OF t1_appnt_event but not for the other column, so I think someone could break the rule by updating the second column, unless there is a separate trigger for that column.
There might be a way you could create a function-based index that enforces this rule so you can get rid of the trigger entirely. I came up with one way but it requires some assumptions:
The table has a numeric primary key
The primary key and the t1_prnt_t1_pk are both always positive numbers
If these assumptions are true, you could create a function like this:
dev> create or replace function f( a number, b number ) return number deterministic as
2 begin
3 if a is null then return 0-b; else return a; end if;
4 end;
and an index like this:
CREATE UNIQUE INDEX my_index ON my_table
( t1_appnt_event, f( t1_prnt_t1_pk, primary_key_column) );
So rows where the t1_prnt_t1_pk column is NULL would appear in the index with the inverse of the primary key as the second value, so they would never conflict with each other. Rows where it is not NULL would use the actual (positive) value of the column. The only way you could get a constraint violation would be if two rows had the same non-NULL values in both columns.
This is perhaps overly "clever", but it might help you get around your problem.
Update from Paul Tomblin: I went with a variation on the original idea that igor suggested in the comments:
CREATE UNIQUE INDEX cappec_ccip_uniq_idx
ON tbl1 (t1_appnt_event,
CASE WHEN t1_prnt_t1_pk IS NOT NULL THEN 1 ELSE t1_pk END);
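A quick sketch of the rule in action (the column values are illustrative):
insert into tbl1 (t1_pk, t1_appnt_event, t1_prnt_t1_pk) values (1, 123, 55);   -- OK: indexed as (123, 1)
insert into tbl1 (t1_pk, t1_appnt_event, t1_prnt_t1_pk) values (2, 123, null); -- OK: indexed as (123, 2)
insert into tbl1 (t1_pk, t1_appnt_event, t1_prnt_t1_pk) values (3, 123, 66);   -- ORA-00001: second non-NULL parent for event 123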
I agree with Dave that the desired result probably can and should be achieved using built-in constraints such as unique indexes (or unique constraints).
If you really need to get around the mutating table error, the usual way to do it is to create a package which contains a package-scoped variable that is a table of something that can be used to identify the changed rows (I think ROWID is possible, otherwise you have to use the PK; I don't use Oracle currently so I can't test it). The FOR EACH ROW trigger then fills this variable with all rows that are modified by the statement, and an AFTER statement trigger reads the rows and validates them.
Something like (syntax is probably wrong, I haven't worked with Oracle for a few years)
CREATE OR REPLACE PACKAGE trigger_pkg AS
  PROCEDURE before_stmt_trigger;
  PROCEDURE for_each_row_trigger(p_rowid IN ROWID);
  PROCEDURE after_stmt_trigger;
END trigger_pkg;
/
CREATE OR REPLACE PACKAGE BODY trigger_pkg AS
  TYPE rowid_tbl IS TABLE OF ROWID;
  modified_rows rowid_tbl := rowid_tbl();

  PROCEDURE before_stmt_trigger IS
  BEGIN
    modified_rows := rowid_tbl();  -- reset the list for each new statement
  END before_stmt_trigger;

  PROCEDURE for_each_row_trigger(p_rowid IN ROWID) IS
  BEGIN
    modified_rows.EXTEND;
    modified_rows(modified_rows.COUNT) := p_rowid;
  END for_each_row_trigger;

  PROCEDURE after_stmt_trigger IS
  BEGIN
    FOR i IN 1 .. modified_rows.COUNT LOOP
      SELECT ... INTO ... FROM the_table WHERE rowid = modified_rows(i);
      -- do whatever you want to
    END LOOP;
  END after_stmt_trigger;
END trigger_pkg;
/
CREATE OR REPLACE TRIGGER before_stmt_trigger
BEFORE INSERT OR UPDATE ON mytable
BEGIN
  trigger_pkg.before_stmt_trigger;
END;
/
CREATE OR REPLACE TRIGGER after_stmt_trigger
AFTER INSERT OR UPDATE ON mytable
BEGIN
  trigger_pkg.after_stmt_trigger;
END;
/
CREATE OR REPLACE TRIGGER for_each_row_trigger
BEFORE INSERT OR UPDATE ON mytable
FOR EACH ROW
WHEN (new.mycolumn IS NOT NULL)
BEGIN
  trigger_pkg.for_each_row_trigger(:new.rowid);
END;
/
With any trigger-based (or application code-based) solution you need to
put in locking to prevent data corruption in a multi-user environment.
Even if your trigger worked, or was re-written to avoid the mutating table
issue, it would not prevent 2 users from simultaneously updating
t1_appnt_evnt_id to the same value on rows where t1_prnt_t1_pk is not
null. Assume there are currently no rows where t1_appnt_evnt_id=123 and
t1_prnt_t1_pk is not null:
Session 1> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =456;
/* OK, trigger sees count of 0 */
Session 2> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =789;
/* OK, trigger sees count of 0 because
session 1 hasn't committed yet */
Session 1> commit;
Session 2> commit;
You now have a corrupted database!
The way to avoid this (in trigger or application code) would be to lock
the parent row in the table referenced by t1_appnt_evnt_id=123 before performing the check:
select appe_id
into v_app_id
from parent_table
where appe_id = :new.t1_appnt_evnt_id
for update;
Now session 2's trigger must wait for session 1 to commit or rollback before it performs the check.
It would be much simpler and safer to implement Dave Costa's index!
Finally, I'm glad no one has suggested adding PRAGMA AUTONOMOUS_TRANSACTION to your trigger: this is often suggested on forums and works inasmuch as the mutating table error goes away - but it makes the data integrity problem even worse! So just don't...
I had a similar error with Hibernate, and flushing the session by using
getHibernateTemplate().saveOrUpdate(o);
getHibernateTemplate().flush();
solved the problem for me. (I'm not posting my code block, as I was sure that everything was written properly and should work - but it did not until I added the flush() call.) Maybe this can help someone.
