I have an issue that could be solved by implementing full auditing, either with triggers or Flashback Data Archive, but that's much more than is required.
Right now we are performing a MERGE which updates a row when it's present and inserts it when it's not. It works well and was easy to write. We now have a new requirement: users must know which rows have been updated or inserted. Yes, this could be accomplished by introducing another field into the table, but that's not allowed because it would change the table. So we are forced to create one or two tables which will identify, via the PK, which rows were updated or inserted.
What I am hoping to do is take the existing MERGE statement and add the ability to insert into the secondary table, but I have not been able to find any MERGE statements that work that way, and INSERT ALL lacks the more complicated conditionals of MERGE.
Here is the structure of the MERGE statement that's currently being used.
MERGE INTO EXISTING_TABLE ET USING TMP_TABLE TMP ON (ET.ID = TMP.ID)
WHEN MATCHED THEN UPDATE SET
ET.TITLE_EN = TMP.TITLE_EN,
ET.TITLE_FR = TMP.TITLE_FR
WHEN NOT MATCHED THEN INSERT (ID, TITLE_EN, TITLE_FR)
VALUES (TMP.ID, TMP.TITLE_EN, TMP.TITLE_FR);
Below is the way I'm hoping to accomplish the MERGE with an INSERT ALL.
MERGE INTO EXISTING_TABLE ET USING TMP_TABLE TMP ON (ET.ID = TMP.ID)
WHEN MATCHED THEN UPDATE
SET
ET.TITLE_EN = TMP.TITLE_EN,
ET.TITLE_FR = TMP.TITLE_FR
INSERT INTO NEW_TABLE (ID, TYPE) VALUES (TMP.ID, 'U')
WHEN NOT MATCHED THEN INSERT ALL
INTO EXISTING_TABLE (ID, TITLE_EN, TITLE_FR) VALUES (TMP.ID, TMP.TITLE_EN, TMP.TITLE_FR)
INTO NEW_TABLE (ID, TYPE) VALUES (TMP.ID, 'I');
The only other way I can see to reasonably accomplish this would be a PL/SQL block that processes the rows one at a time, which would be slower.
At our site, we use AFTER triggers to maintain audit data. That has the advantage of tracking changes whether they come from the application or from someone fat-fingering an ad-hoc update statement.
That might do the job for you.
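For this particular case, a minimal sketch of that approach could look like the following (the trigger name is an assumption; NEW_TABLE and the 'U'/'I' flags are taken from the question):
CREATE OR REPLACE TRIGGER existing_table_audit_trg
AFTER INSERT OR UPDATE ON EXISTING_TABLE
FOR EACH ROW
BEGIN
  -- Log the primary key and whether the row arrived via insert or update
  IF INSERTING THEN
    INSERT INTO NEW_TABLE (ID, TYPE) VALUES (:NEW.ID, 'I');
  ELSE
    INSERT INTO NEW_TABLE (ID, TYPE) VALUES (:NEW.ID, 'U');
  END IF;
END;
/
Because it fires per row, the existing MERGE needs no changes at all.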
I feel like this is one of those "if you're careful you can do it" scenarios that Oracle just doesn't want to let me do.
My problem is that I have a single configuration table where I want to enable inheritance via triggers. Think of an Employee table with a SUPERVISOR_ID column and an 'inherited' SUPERVISOR_NAME column that self-populates when the ID is changed.
I'd like to do a simple self-lookup to capture a value from another row at INSERT/UPDATE time, but Oracle rejects it with a mutating table error.
My code is essentially:
TRIGGER UPD_CHILD_RECORD
BEFORE INSERT OR UPDATE
ON MYSCHEMA.FAKE_EMPLOYEE
FOR EACH ROW
WHEN (NEW.SUPERVISOR_ID IS NOT NULL)
BEGIN
  IF INSERTING OR UPDATING THEN
    SELECT MAX(NAME)
    INTO   :NEW.SUPERVISOR_NAME
    FROM   MYSCHEMA.FAKE_EMPLOYEE
    WHERE  EMPLOYEE_ID = :NEW.SUPERVISOR_ID;
  END IF;
END UPD_CHILD_RECORD;
thanks.
This is normal behavior. Oracle is protecting you from the inconsistent data you could get by reading a table that is in the middle of being modified.
Imagine this scenario.
You submit two update statements and have a trigger that selects from that same table. Let's assume the first statement is applied successfully and the data is changed. Now it's time for the second statement. What output would you expect from the SELECT in the trigger? Should it return the data as it was before the first update, or should it include the changes made? You probably think Oracle should return the new data. But first, Oracle does not really know your intentions, and second, that would make your query dependent on row order, which contradicts relational algebra.
The solution to your problem is quite simple: you do not need the SUPERVISOR_NAME column at all. To get the supervisor's name, simply join the table with itself and select the desired result, something like:
select t1.ID, t1.SUPERVISOR_ID, t2.NAME from FAKE_EMPLOYEE t1
left join FAKE_EMPLOYEE t2 on t1.SUPERVISOR_ID = t2.ID;
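If you want to keep the convenience of a named object, one possible variant of the same idea is a view (a sketch; the view name is an assumption):
CREATE OR REPLACE VIEW FAKE_EMPLOYEE_V AS
SELECT t1.ID,
       t1.SUPERVISOR_ID,
       t2.NAME AS SUPERVISOR_NAME
FROM   FAKE_EMPLOYEE t1
LEFT JOIN FAKE_EMPLOYEE t2 ON t1.SUPERVISOR_ID = t2.ID;
That way the "inherited" name is always current and no trigger is needed.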
I have a question about a cascading trigger.
I have two tables: data, which has three attributes (id_data, sum, and id_tool), and tool, which has three attributes (id_tool, name, and sum_total). data and tool are joined on id_tool.
I want to create a trigger to keep sum_total up to date, so that when I insert into data, sum_total in tool (where tool.id_tool = data.id_tool) is updated as well.
I created this trigger, but I get error ORA-04090.
create or replace trigger aft_ins_tool
after insert on data
for each row
declare
  v_stok number;
  v_jum  number;
begin
  select sum into v_jum
  from   data
  where  id_data = :new.id_data;

  select sum_total into v_stok
  from   tool
  where  id_tool = (select id_tool
                    from   data
                    where  id_data = :new.id_data);

  if inserting then
    v_stok := v_stok + v_jum;
    update tool
    set    sum_total = v_stok
    where  id_tool = (select id_tool
                      from   data
                      where  id_data = :new.id_data);
  end if;
end;
/
Please give me your opinion.
Thanks.
The ORA-04090 indicates that you already have an AFTER INSERT ... FOR EACH ROW trigger on that table. Oracle doesn't like that, because the order in which the triggers fire is unpredictable, which may lead to unpredictable results, and Oracle really doesn't like those.
So, your first step is to merge the two sets of code into a single trigger. Then the real fun begins.
Presumably there is only one row in data matching the current value of id_data (if not, your data model is really messed up and there's no hope for your situation). Anyway, that means the current row already gives you access to the values of :new.sum and :new.id_tool, so you don't need those queries on the data table at all: removing those selects removes the possibility of "mutating table" errors.
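A minimal sketch of the simplified trigger, assuming the sum_total maintenance has been folded into the single remaining trigger (column names are taken from the question):
create or replace trigger aft_ins_tool
after insert on data
for each row
begin
  -- :new.sum and :new.id_tool are already available, so there is no need
  -- to query the data table (which is what would mutate under the trigger)
  update tool
  set    sum_total = sum_total + :new.sum
  where  id_tool   = :new.id_tool;
end;
/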
As a general observation, maintaining aggregate or summary data like this is usually a bad idea. It is normally better just to query the information when it is needed. If you really have huge volumes of data, use a materialized view to maintain the summary rather than hand-rolling something.
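A hedged sketch of what such a summary could look like (the view name and refresh strategy are assumptions; a fast-refreshable ON COMMIT version would additionally need a materialized view log on data):
create materialized view tool_totals_mv
  build immediate
  refresh complete on demand
as
select id_tool, sum(sum) as sum_total
from   data
group  by id_tool;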
I use the following statement to insert or update the current document versions our customers have signed:
MERGE INTO schema.table ccv
USING (select :customer_id as customer_id, :doc_type as doc_type, :version as Version FROM DUAL) n
ON (ccv.customer_id = n.customer_id and ccv.doc_type = n.doc_type)
WHEN MATCHED
THEN UPDATE
set ccv.version = n.version
DELETE WHERE ccv.version is null
WHEN NOT MATCHED
THEN INSERT
( customer_id, doc_type, version)
VALUES
(:customer_id,:doc_type,:version)
Basically, I want to avoid inserting under the same condition on which I am deleting with the DELETE WHERE clause.
The thing is that there are three different document types (doc_type), but only one or two might be signed simultaneously.
If a client signed a document, I want to store its version; if not, I do not want a record for that document in the database.
So, when the new :version is null I delete the existing row. That works great.
The problem is that when there were no documents stored for that customer, Oracle actually inserts a record with version = NULL.
How can I avoid inserting records when :version is null?
Is splitting the merge to separate delete, update and insert statement the only way to do that?
If you're using 10g, you may use conditions for the insert as well:
http://www.oracle-developer.net/display.php?id=310
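Applied to the statement above, that conditional insert could look roughly like this (a sketch; the bind variables and names are the ones from the question):
MERGE INTO schema.table ccv
USING (select :customer_id as customer_id, :doc_type as doc_type, :version as version FROM DUAL) n
ON (ccv.customer_id = n.customer_id and ccv.doc_type = n.doc_type)
WHEN MATCHED
THEN UPDATE
set ccv.version = n.version
DELETE WHERE ccv.version is null
WHEN NOT MATCHED
THEN INSERT
( customer_id, doc_type, version)
VALUES
(n.customer_id, n.doc_type, n.version)
WHERE n.version is not null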
Rgds.
I am getting
ORA-30926: unable to get a stable set of rows in the source tables
in the following query:
MERGE INTO table_1 a
USING
(SELECT a.ROWID row_id, 'Y'
FROM table_1 a ,table_2 b ,table_3 c
WHERE a.mbr = c.mbr
AND b.head = c.head
AND b.type_of_action <> '6') src
ON ( a.ROWID = src.row_id )
WHEN MATCHED THEN UPDATE SET in_correct = 'Y';
I've checked table_1 and it has data, and I've also run the inner query (src) on its own, which returns data as well.
Why would this error come and how can it be resolved?
This is usually caused by duplicates in the query specified in the USING clause. Here it probably means that table_1 is a parent table and the same ROWID is returned several times.
You could quickly solve the problem by using a DISTINCT in your query (in fact, since 'Y' is a constant value you don't even need to select it in the subquery).
Assuming your query is correct (don't know your tables) you could do something like this:
MERGE INTO table_1 a
USING
(SELECT DISTINCT a.ROWID row_id
FROM table_1 a ,table_2 b ,table_3 c
WHERE a.mbr = c.mbr
AND b.head = c.head
AND b.type_of_action <> '6') src
ON ( a.ROWID = src.row_id )
WHEN MATCHED THEN UPDATE SET in_correct = 'Y';
You're probably trying to update the same row of the target table multiple times. I just encountered the very same problem in a MERGE statement I developed. Make sure your update does not touch the same record more than once during the execution of the MERGE.
A further clarification to the use of DISTINCT to resolve error ORA-30926 in the general case:
You need to ensure that the set of data specified by the USING() clause has no duplicate values of the join columns, i.e. the columns in the ON() clause.
In OP's example where the USING clause only selects a key, it was sufficient to add DISTINCT to the USING clause. However, in the general case the USING clause may select a combination of key columns to match on and attribute columns to be used in the UPDATE ... SET clause. Therefore in the general case, adding DISTINCT to the USING clause will still allow different update rows for the same keys, in which case you will still get the ORA-30926 error.
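A tiny self-contained demo of that point (the table and column names are made up): the two source rows below are distinct, yet they share the join key, so the MERGE still fails with ORA-30926.
CREATE TABLE t_target (k NUMBER PRIMARY KEY, v NUMBER);
INSERT INTO t_target VALUES (1, 0);

MERGE INTO t_target t
USING (
  SELECT 1 AS k, 10 AS v FROM dual
  UNION ALL
  SELECT 1 AS k, 20 AS v FROM dual
) s
ON (t.k = s.k)
WHEN MATCHED THEN UPDATE SET t.v = s.v;  -- raises ORA-30926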
This is an elaboration of DCookie's answer and point 3.1 in Tagar's answer, which from my experience may not be immediately obvious.
How to Troubleshoot ORA-30926 Errors? (Doc ID 471956.1)
1) Identify the failing statement
alter session set events '30926 trace name errorstack level 3';
or
alter system set events '30926 trace name errorstack off';
and watch for .trc files in UDUMP when it occurs.
2) Having found the SQL statement, check if it is correct (perhaps using explain plan or tkprof to check the query execution plan) and analyze or compute statistics on the tables concerned if this has not recently been done. Rebuilding (or dropping/recreating) indexes may help too.
3.1) Is the SQL statement a MERGE?
Evaluate the data returned by the USING clause to ensure there are no duplicate values in the join columns. Modify the MERGE statement to include a deterministic WHERE clause.
3.2) Is this an UPDATE statement via a view?
If so, try populating the view result into a table and try updating the table directly.
3.3) Is there a trigger on the table? Try disabling it to see if it still fails.
3.4) Does the statement contain a non-mergeable view in an 'IN-Subquery'? This can result in duplicate rows being returned if the query has a "FOR UPDATE" clause. See Bug 2681037
3.5) Does the table have unused columns? Dropping these may prevent the error.
4) If modifying the SQL does not cure the error, the issue may be with the table, especially if there are chained rows.
4.1) Run the 'ANALYZE TABLE VALIDATE STRUCTURE CASCADE' statement on all tables used in the SQL to see if there are any corruptions in the table or its indexes.
4.2) Check for, and eliminate, any CHAINED or migrated ROWS on the table. There are ways to minimize this, such as the correct setting of PCTFREE.
Use Note 122020.1 - Row Chaining and Migration
4.3) If the table is additionally Index Organized, see:
Note 102932.1 - Monitoring Chained Rows on IOTs
Had the error today on a 12c database, and none of the existing answers fit (no duplicates, no non-deterministic expressions in the WHERE clause). My case was related to the other possible cause of the error, according to Oracle's message text (quoted below):
ORA-30926: unable to get a stable set of rows in the source tables
Cause: A stable set of rows could not be got because of large dml activity or a non-deterministic where clause.
The merge was part of a larger batch, and was executed on a live database with many concurrent users. There was no need to change the statement. I just committed the transaction before the merge, then ran the merge separately, and committed again. So the solution was found in the suggested action of the message:
Action: Remove any non-deterministic where clauses and reissue the dml.
SQL Error: ORA-30926: unable to get a stable set of rows in the source tables
30926. 00000 - "unable to get a stable set of rows in the source tables"
*Cause: A stable set of rows could not be got because of large dml
activity or a non-deterministic where clause.
*Action: Remove any non-deterministic where clauses and reissue the dml.
This error occurred for me because of duplicate records (about 16K of them).
When I made the source rows unique, the merge worked; when I tried the merge again without de-duplicating, the same problem occurred.
The second time it was caused by a missing commit: if a commit is not issued after the merge, the same error shows up on the next run.
Without de-duplicating, the query works as long as a commit is issued after each merge operation.
I was not able to resolve this after several hours. Eventually I just did a select with the two tables joined, created an extract, and generated individual SQL update statements for the 500 rows in the table. Ugly, but it beats spending hours trying to get a single query to work.
As someone explained earlier, your MERGE statement probably tries to update the same row more than once, and that does not work (it could cause ambiguity).
Here is one simple example. MERGE that tries to mark some products as found when matching the given search patterns:
CREATE TABLE patterns(search_pattern VARCHAR2(20));
INSERT INTO patterns(search_pattern) VALUES('Basic%');
INSERT INTO patterns(search_pattern) VALUES('%thing');
CREATE TABLE products (id NUMBER,name VARCHAR2(20),found NUMBER);
INSERT INTO products(id,name,found) VALUES(1,'Basic instinct',0);
INSERT INTO products(id,name,found) VALUES(2,'Basic thing',0);
INSERT INTO products(id,name,found) VALUES(3,'Super thing',0);
INSERT INTO products(id,name,found) VALUES(4,'Hyper instinct',0);
MERGE INTO products p USING
(
SELECT search_pattern FROM patterns
) o
ON (p.name LIKE o.search_pattern)
WHEN MATCHED THEN UPDATE SET p.found=1;
SELECT * FROM products;
If the patterns table contains the Basic% and Super% patterns, then the MERGE works and the first three products are updated (found). But if the patterns table contains the Basic% and %thing patterns, then the MERGE does NOT work, because it would try to update the second product twice, and that causes the problem. MERGE does not work if some records would have to be updated more than once. You might ask: why not just update it twice!?
Here the first update and the second update happen to assign the same value, but only by accident. Now look at this scenario:
CREATE TABLE patterns(code CHAR(1),search_pattern VARCHAR2(20));
INSERT INTO patterns(code,search_pattern) VALUES('B','Basic%');
INSERT INTO patterns(code,search_pattern) VALUES('T','%thing');
CREATE TABLE products (id NUMBER,name VARCHAR2(20),found CHAR(1));
INSERT INTO products(id,name,found) VALUES(1,'Basic instinct',NULL);
INSERT INTO products(id,name,found) VALUES(2,'Basic thing',NULL);
INSERT INTO products(id,name,found) VALUES(3,'Super thing',NULL);
INSERT INTO products(id,name,found) VALUES(4,'Hyper instinct',NULL);
MERGE INTO products p USING
(
SELECT code,search_pattern FROM patterns
) s
ON (p.name LIKE s.search_pattern)
WHEN MATCHED THEN UPDATE SET p.found=s.code;
SELECT * FROM products;
Now the first product's name matches the Basic% pattern and it will be updated with code B, but the second product matches both patterns and cannot be updated with both codes B and T at the same time (ambiguity)!
That's why the DB engine complains. Don't blame it! It knows what it is doing! ;-)
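If you do need that second MERGE to run, one hedged way to make it deterministic is to collapse the matching patterns to a single code per product before merging, for example:
MERGE INTO products p USING
(
  SELECT pr.id, MAX(pa.code) AS code  -- an arbitrary but deterministic pick
  FROM products pr
  JOIN patterns pa ON pr.name LIKE pa.search_pattern
  GROUP BY pr.id
) s
ON (p.id = s.id)
WHEN MATCHED THEN UPDATE SET p.found = s.code;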
I've just found the following code:
select max(id) from TABLE_NAME ...
... do some stuff ...
insert into TABLE_NAME (id, ... )
VALUES (max(id) + 1, ...)
I can create a sequence for the PK, but there's a bunch of existing code (classic asp, existing asp.net apps that aren't part of this project) that's not going to use it.
Should I just ignore it, or is there a way to fix it without going into the existing code?
I'm thinking that the best option is just to do:
insert into TABLE_NAME (id, ... )
VALUES (select max(id) + 1, ...)
Options?
You can create a trigger on the table that overwrites the value for ID with a value that you fetch from a sequence.
That way you can still use the other existing code and have no problems with concurrent inserts.
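A minimal sketch of that trigger (the sequence and trigger names are assumptions):
CREATE SEQUENCE table_name_seq;
-- in practice the sequence would be created to start above the current MAX(id)

CREATE OR REPLACE TRIGGER table_name_id_trg
BEFORE INSERT ON TABLE_NAME
FOR EACH ROW
BEGIN
  -- Ignore whatever id the legacy code supplied and take the next sequence value
  SELECT table_name_seq.NEXTVAL INTO :NEW.id FROM dual;
END;
/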
If you cannot change the other software and it still does the select max(id)+1 insert, that is most unfortunate. What you can then do is:
For your own inserts, use a sequence and populate the ID field with -1 * (sequence value).
This way your inserts will not interfere with the existing programs, nor will they conflict with the ids those programs generate.
(Or do the insert without a value for id and use a trigger to populate the ID with the negative of a sequence value.)
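A sketch of the negative-id variant for your own inserts (my_app_seq and the other_col column are hypothetical placeholders):
CREATE SEQUENCE my_app_seq;

INSERT INTO TABLE_NAME (id, other_col)
VALUES (-1 * my_app_seq.NEXTVAL, 'some value');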
As others have said, you can override the supplied value in a database trigger using a sequence. However, that could cause problems if any of the application code uses that value like this:
select max(id) from TABLE_NAME ...
... do some stuff ...
insert into TABLE_NAME (id, ... )
VALUES (max(id) + 1, ...)
insert into CHILD_TABLE (parent_id, ...)
VALUES (max(id) + 1, ...)
If the trigger silently replaces the parent's id with a sequence value, the child row ends up referencing an id the parent row no longer has.
Use a sequence in a BEFORE INSERT row trigger. select max(id) + 1 doesn't work in a concurrent environment.
This quickly turns in to a discussion of application architecture, especially when the question boils down to "what should I do?"
Primary keys in Oracle really need to come from sequences and since you're dealing with complex insert logic (parent/child inserts, at least) in your application code, you should go into the existing code, as you say (since triggers probably won't help you).
On one extreme you could take away direct SQL access from applications and make them call services so the insert/update/delete code can be centralized. Or you could rewrite your code using some sort of MVC architecture. I'm assuming both are overkill for your situation.
Is the id column at least set to be a true primary key so there's a constraint that will keep duplicates from occurring? If not, start there.
Once the primary key is in place, or if it already is, it's only a matter of time until inserts start to fail; you'll know when they start to fail, right? If not, get on the error-logging.
Now fix the application code. While you're in there, you should at least write and call helper code so your database interactions are in as few places as possible. Then provide some leadership to the other developers and make sure they use the helper code too.
Big question: does anybody rely on the value of the PK? If not, I would recommend using a trigger that fetches the id from a sequence and sets it. The inserts wouldn't specify an id at all.
I am not sure, but the
insert into TABLE_NAME (id, ... )
VALUES (select max(id) + 1, ...)
might cause problems when two sessions reach that code. It might be that Oracle reads the table (calculating max(id)) and then tries to get the lock on the PK for the insert. If that is the case, two concurrent sessions might try to use the same id, causing an exception in the second session.
You could add some logging to the trigger to check whether inserts still arrive with an ID already set. That way you know you still have to hunt down some place where the old code is used.
It can be done by fetching the max value into a variable and then using it in the insert, like:
DECLARE
  v_max INT;
BEGIN
  SELECT MAX(id) INTO v_max FROM table_name;
  INSERT INTO table_name
  VALUES (v_max + ROWNUM, val1, val2, ..., valn);
  COMMIT;
END;
/
This will generate sequential ids for single as well as bulk inserts.