Oracle - FAST REFRESH materialized view with LEFT JOIN updates very slowly

I have a materialized view in Oracle that contains a LEFT JOIN and takes a very long time to update. When I update the underlying table, the statement takes 63,914.765 s to run (yes, that is almost 18 hours).
I am using a LEFT JOIN of the table against itself because I want to pivot the data from rows to columns. The PIVOT command is not available in this Oracle version, and a GROUP BY + CASE is not allowed in a FAST REFRESH materialized view.
The Materialized View Log looks like this:
CREATE MATERIALIZED VIEW LOG ON Programmes_Titles
WITH PRIMARY KEY, ROWID
INCLUDING NEW VALUES;
The materialized view itself looks like this (it contains 700,000 rows; the Programmes_Titles table contains 900,000 rows):
CREATE MATERIALIZED VIEW Mv_Web_Programmes
REFRESH FAST ON COMMIT
AS
SELECT
  t1.ProgrammeId,
  t1.Title as MainTitle,
  t2.Title as SecondaryTitle,
  --Primary key
  t1.Title_Id as t1_titleId,
  t2.Title_Id as t2_titleId,
  t1.rowid as t1_rowid,
  t2.rowid as t2_rowid
FROM
  Programmes_Titles t1,
  Programmes_Titles t2
WHERE
  t1.Titles_Group_Type = 'mainTitle'
  AND t1.Programme_Id = t2.Programme_Id(+)
  AND t2.Titles_Group_Type(+) = 'secondaryTitle'
The UPDATE statement I use is this:
UPDATE Programmes_Titles
SET Title = 'New title'
WHERE rowid = 'AAAL4cAAEAAAftTABB'
This UPDATE statement takes 17 hours.
When using an INNER JOIN (remove the (+)'s) it takes milliseconds.
I also tried adding indexes on the Mv_Web_Programmes materialized view, but that did not seem to help much either: the UPDATE still runs for more than a minute, which is way too slow. (The indexes may have improved the UPDATE, but I am not going to wait 17 hours after every change to compare.)
So my question is: why does it take such a long time to UPDATE the underlying table, and how can I improve this?

I've managed to reproduce your problem on a 10.2.0.3 instance. The self-join combined with the outer join seems to be the major problem (although with indexes on every column of the MV it finally did update in under a minute).
At first I thought you could use an aggregate MV:
SQL> CREATE MATERIALIZED VIEW LOG ON Programmes_Titles
2 WITH PRIMARY KEY, ROWID (programmeId, Titles_Group_Type, title)
3 INCLUDING NEW Values;
Materialized view log created
SQL> CREATE MATERIALIZED VIEW Mv_Web_Programmes
2 REFRESH FAST ON COMMIT
3 AS
4 SELECT ProgrammeId,
5 MAX(decode(t1.Titles_Group_Type, 'mainTitle', t1.Title)) MainTl,
6 MAX(decode(t1.Titles_Group_Type, 'secondaryTitle', t1.Title)) SecTl
7 FROM Programmes_Titles t1
8 GROUP BY ProgrammeId;
Materialized view created
Unfortunately, as you have noticed, as of 10g an MV that contains MIN or MAX can only be fast-refreshed on commit after inserts (a so-called insert-only MV). The above solution would not work for updates/deletes (the MV would have to be refreshed manually).
You could trace your session and open the trace file to see which SQL statements are executed during the refresh, so you can check whether they can be optimized with indexes.
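For example, a minimal tracing sketch, assuming 10g or later (on older versions, ALTER SESSION SET sql_trace = TRUE achieves much the same):
ALTER SESSION SET tracefile_identifier = 'mv_refresh';
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
-- reproduce the slow statement; the on-commit refresh shows up as recursive SQL
UPDATE Programmes_Titles SET Title = 'New title' WHERE rowid = 'AAAL4cAAEAAAftTABB';
COMMIT;
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;
-- then format the trace file from USER_DUMP_DEST with tkprof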

We too faced this issue recently, on Oracle 11.2.0.3.
In our case it was not possible to remove the OUTER JOIN due to the functional impact.
On investigation, we found that Oracle was adding a nasty HASH_SJ (hash semi-join) hint to the MV refresh DML.
Nothing worked, including the things mentioned in the following blog post:
http://www.adellera.it/blog/2010/03/11/fast-refresh-of-join-only-mvs-_mv_refresh_use_stats-and-locking-log-stats/#comment-2975
In the end, a hidden parameter worked (though in general this should be avoided by changing the application, if possible).
Oracle Doc ID 1949537.1 suggests that setting the hidden _mv_refresh_use_hash_sj parameter to FALSE should prevent it using that hint.
alter session set "_mv_refresh_use_hash_sj"=FALSE;
That stopped the CBO from using the HASH_SJ hint.
Posting it here in the interests of others.


Oracle: Is materialized view fast refresh atomic?

I have read quite a lot over the past few hours about refreshing MVs in Oracle, but I still cannot find an answer to my question. Imagine I have an MV on top of a table with change logs, and there are three records in this MV:
COL_ID, COL1
1, "OLD"
2, "OLD"
3, "OLD"
Now let's say the value of COL1 has changed to "EDITED" for record 1 in the table used to create the MV. I want to perform a fast, in-place refresh to update the MV as quickly as possible. In a real-life example with around 50M records, this takes around 3 minutes.
Imagine this situation:
The refresh process is still running (there are still records that have not been amended in the MV).
Nevertheless, the record with ID = 1 has already been processed, so it has the value "EDITED" in the MV.
In another session, a query against the MV is executed to get the value of the record with ID = 1.
Will that query return "OLD" or "EDITED"?
I understand that because this is an in-place refresh, after this record is processed by the refresh mechanism the value in the materialized view will reflect the value in the origin table ("EDITED"). But is there any mechanism (like undo logs) that makes the whole refresh atomic? By this I mean that until all amendments are done to the materialized view (until the refresh process finishes), a user who queries rows being modified by the ongoing refresh will be presented with the old value from before the change.
I presume this behaviour holds for an out-of-place refresh, but since the in-place variant seems to work far more efficiently in terms of time, I was curious whether it is also true there. If not by default, is there any way to force this atomic behaviour?
----- [EDIT]
I ran the following test to see whether, during the refresh process, the results I get from the materialized view change gradually.
-- create table
create table MV_REFRESH_ATOMICITY_TEST
(
id NUMBER,
value NUMBER
)
-- populate initial data
-- delete from MV_REFRESH_ATOMICITY_TEST
declare
begin
for i in 1..10000000 loop
insert into MV_REFRESH_ATOMICITY_TEST values(i, 0);
end loop;
commit;
end;
-- check that the sum equals zero and the count equals 10M
select sum(value) from MV_REFRESH_ATOMICITY_TEST
select to_char(count(*),'999,999,999') as COUNT from MV_REFRESH_ATOMICITY_TEST
-- create mv logs on the table
-- drop materialized view log on MV_REFRESH_ATOMICITY_TEST;
create materialized view log on MV_REFRESH_ATOMICITY_TEST with rowid;
-- create mv on top
-- drop materialized view MV_REFRESH_ATOMICITY_TEST_MV
create materialized view MV_REFRESH_ATOMICITY_TEST_MV
refresh fast on demand with rowid
as
select
fact.*,
fact.ROWID "FACT_ROWID"
from
MV_REFRESH_ATOMICITY_TEST fact
-- check that the sum equals zero and the count equals 10M
select sum(value) from MV_REFRESH_ATOMICITY_TEST_MV
select to_char(count(*),'999,999,999') as COUNT from MV_REFRESH_ATOMICITY_TEST_MV
-- change the value for the first million records, a million records in the middle, and the last million records
update MV_REFRESH_ATOMICITY_TEST set value = 1 where id between 1 and 1000000
update MV_REFRESH_ATOMICITY_TEST set value = 1 where id between 5000001 and 6000000
update MV_REFRESH_ATOMICITY_TEST set value = 1 where id between 9000001 and 10000000
-- check that the sum equals 3,000,000
select to_char(sum(value),'999,999,999') as "SUM" from MV_REFRESH_ATOMICITY_TEST
-- check that the materialized view log contains 3,000,000 entries
select to_char(count(*),'999,999,999') from MLOG$_MV_REFRESH_ATOMICITY;
--select * from MLOG$_MV_REFRESH_ATOMICITY;
-- while refreshing mv
-- exec dbms_mview.refresh('MV_REFRESH_ATOMICITY_TEST_MV', 'F');
-- while the refresh runs, the sum below should stay 0
select
( select sum(value) from MV_REFRESH_ATOMICITY_TEST_MV ) "SUM",
( select count(*) from MV_REFRESH_ATOMICITY_TEST_MV ) "NUMBER OF RECORDS"
from dual
What I discovered by repeatedly executing the last SELECT statement was that when the SUM value finally changed, it changed by the full 3M in one step, meaning all the records were changed in one go, atomically.
Nevertheless, I am not 100% sure I can trust this experiment, as at some point these SELECT queries took around 40 s to execute. The whole refresh took 911 s.
[EDIT]
This question has been marked as a possible duplicate of this thread. The other thread does address a similar problem, but for a complete refresh, which to my understanding is performed in a very different way from the fast refresh used here. Therefore I am not sure the same explanation applies.
As far as I can see from the Oracle documentation (http://docs.oracle.com/cd/B19306_01/server.102/b14226/repmview.htm#i31171), every refresh is done atomically:
A materialized view's data does not necessarily match the current data of its master table or master materialized view at all times. A materialized view is a transactionally (read) consistent reflection of its master as the data existed at a specific point in time (that is, at creation or when a refresh occurs).
Oracle provides even greater read consistency with materialized view groups:
To preserve referential integrity and transactional (read) consistency among multiple materialized views, Oracle has the ability to refresh individual materialized views as part of a refresh group. After refreshing all of the materialized views in a refresh group, the data of all materialized views in the group correspond to the same transactionally consistent point in time.
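For illustration, a minimal sketch of such a refresh group; the member MV names here are hypothetical:
BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'demo_group',
    list      => 'MV_A, MV_B',       -- hypothetical member MVs
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/24');  -- re-run hourly
END;
/
-- refreshes all members to the same transactionally consistent point in time
EXEC DBMS_REFRESH.REFRESH('demo_group');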

BatchUpdate using an Oracle view

I have a complex Oracle view which takes around 2-3 seconds to execute.
I'm trying to insert values from the Oracle view into a table.
I'm using JdbcTemplate's batchUpdate() to insert multiple values into the table.
In batchUpdate(), a PreparedStatement is used to set the values.
Will using an Oracle view cause any performance issues?
With a PreparedStatement, SQL statements are precompiled. But in the case of a view, will the view be executed each time the insert query is fired?
Views are just SQL statements. They are not slower or faster than the underlying SQL query.
However, when complex views (multi-table joins and aggregation) are built on top of other complex views, the optimizer may get confused and try to outsmart itself, leading to really bad execution plans. The problems tend to be even worse if you don't have constraints and referential integrity in place.
A final note: if you are merely pulling data out of the database to stuff it back in, you will probably achieve better performance by doing the entire operation in the database instead. For example, say you pull "order lines" from the database and then update the "order header" with an "order total qty". In that case you should probably do something like this instead:
merge
into order_header h
using (select order_id, sum(order_qty) as order_total_qty
         from order_line
        group by order_id
      ) l
on (h.order_id = l.order_id)
when matched then
  update
     set h.order_total_qty = l.order_total_qty;
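For the simpler case in the question (inserting rows from the view into a table), the same set-based principle applies. A minimal sketch, where complex_view, target_table, and their columns are assumed names:
INSERT INTO target_table (col_a, col_b, col_c)
SELECT col_a, col_b, col_c
FROM complex_view;
Here the view's SQL runs once for the whole statement, rather than once per batched row.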

select only new row in oracle

I have a table with a VARCHAR2 primary key.
It gets about 1,000,000 transactions per day.
My app wakes up every 5 minutes to generate a text file by querying only the new records.
It remembers the last processed point and processes only the new records.
Do you have any ideas on how to query this with good performance?
I am able to add a new column if necessary.
What do you think this process should be written in?
PL/SQL?
Java?
Everyone here is really really close. However:
Scott Bailey is wrong about using a bitmap index if the table is under any sort of continuous DML load. That's exactly the wrong time to use a bitmap index.
Everyone else's answer about a PROCESSED CHAR(1) CHECK (PROCESSED IN ('Y','N')) column is right, but misses how to index it; you should use a function-based index like this:
CREATE INDEX MY_UNPROCESSED_ROWS_IDX ON MY_TABLE
(CASE WHEN PROCESSED_FLAG = 'N' THEN 'N' ELSE NULL END);
You'd then query it using the same expression:
SELECT * FROM MY_TABLE
WHERE (CASE WHEN PROCESSED_FLAG = 'N' THEN 'N' ELSE NULL END) = 'N';
The reason to use the function-based index is that Oracle doesn't write index entries when the indexed expression is entirely NULL, so the index above will only contain the rows with PROCESSED_FLAG = 'N'. As you update rows to PROCESSED_FLAG = 'Y', they "fall out" of the index.
Well, if you can add a new column, you could create a PROCESSED column to indicate processed records, and create an index on this column for performance.
The query should then select only those rows that have been newly added and not yet processed.
This should be easy to do with plain SQL queries.
Ah, I really hate to add another answer when the others have come so close to nailing it. But:
As Ponies points out, Oracle does have a pseudocolumn (ORA_ROWSCN, the System Change Number) that can pinpoint when each row was modified. Unfortunately, by default it gets that information from the block instead of storing it with each row, and changing that behaviour (table-level ROWDEPENDENCIES) would require you to rebuild a really large table. So while this answer is good for quieting the SQL Server fella, I'd not recommend it.
Astander is right, but needs a few caveats. Add a new column needs_processed CHAR(1) DEFAULT 'Y' and add a bitmap index. For low-cardinality columns ('Y'/'N') the bitmap index will be faster. Once you have that, the rest is pretty easy. But you've got to be careful not to select the new rows, process them, and then mark all unprocessed rows as processed in one step. Otherwise, rows inserted while you were processing would get marked processed even though they have not been.
The easiest way would be to use PL/SQL to open a cursor that selects unprocessed rows, processes them, and then updates each row as processed. If you have an aversion to walking cursors, you could collect the PKs or rowids into a nested table, process them, and then update using the nested table.
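A minimal PL/SQL sketch of that cursor approach, using the needs_processed flag from above; the table name my_table is an assumption, and the actual row processing is elided:
BEGIN
  FOR r IN (SELECT t.rowid AS rid
              FROM my_table t
             WHERE t.needs_processed = 'Y'
               FOR UPDATE) LOOP
    -- ... process the row here (e.g. write it to the extract file) ...
    UPDATE my_table
       SET needs_processed = 'N'
     WHERE rowid = r.rid;
  END LOOP;
  COMMIT;
END;
/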
In MS SQL Server world where I work, we have a 'version' column of type 'timestamp' on our tables.
So, to answer #1, I would add a new column.
To answer #2, I would do it in plsql for performance.
Mark
"astander" pretty much did the work for you. You need to ALTER your table to add one more column (let's say PROCESSED).
You can also consider creating an index on PROCESSED (a bitmap index may be of some advantage, as the only possible values are 'y' and 'n', but test it) so that your query will use the index.
Also, since you query only every 5 minutes, check whether you can add another column of TIMESTAMP type and partition the table by it (not sure; test this as well).
I would also think about writing the extract with a job using UTL_FILE and exposing it to the front end if possible.
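A hedged sketch of the UTL_FILE idea; DATA_DIR is an assumed directory object, and my_table/PROCESSED are the assumed table and flag from above:
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('DATA_DIR', 'new_rows.txt', 'w');
  FOR r IN (SELECT id FROM my_table WHERE processed = 'n') LOOP
    UTL_FILE.PUT_LINE(f, r.id);  -- one id per line in the extract
  END LOOP;
  UTL_FILE.FCLOSE(f);
END;
/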
If performance is really a problem and you want to create your file asynchronously, you might want to use Oracle Streams, which actually gets modification data from your redo log without affecting performance of the main database. You may not even need a separate job, as you can configure Oracle Streams to do asynchronous replication of the changes, through which you can trigger the file creation.
Why not create an extra table that holds two columns: the ID and a processed flag. Have an insert trigger on the original table place its ID in this new table. Your logging process can then select records from this new table and mark them as processed. Finally, delete the processed records from this table.
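A sketch of that trigger arrangement; all object names here are hypothetical:
CREATE TABLE new_rows_queue (
  id        VARCHAR2(100) NOT NULL,
  processed CHAR(1) DEFAULT 'N' NOT NULL
);

CREATE OR REPLACE TRIGGER trg_queue_new_rows
AFTER INSERT ON my_table
FOR EACH ROW
BEGIN
  INSERT INTO new_rows_queue (id) VALUES (:NEW.id);
END;
/
-- the logging process then selects from new_rows_queue, marks the rows
-- processed, and finally deletes them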
I'm pretty much in agreement with Adam's answer. But I'd want to do some serious testing compared to an alternative.
The issue I see is that you need to not only select the rows but also update them. While that should be pretty fast, I'd like to avoid the update, and to avoid having any large transactions hanging around (see below).
The alternative would be to add a CREATE_DATE column, DATE DEFAULT SYSDATE. Index it. Then select the records where create_date >= (the start date/time of your previous select).
But I don't have enough data on the relative costs of setting SYSDATE as a default vs. setting a value of 'Y', of maintaining the function-based index vs. the date index, or of a range select on the date vs. an equality select on the single value 'Y'. You'll probably want to preserve the stats or hint the query to use the index on the Y/N column, and you'll definitely want a hint on the date column: the stats on the date column will almost certainly be stale.
If data is also being added to the table continuously, including while your query is running, you need to watch out for transaction control. After all, you don't want to read 100,000 records that have the flag = Y, then run your update against 120,000, including the 20,000 that arrived while your query was running.
In the flag case, there are two easy ways: SET TRANSACTION before your select and commit after your update, or start by updating the flag from Y to Q, then select the rows that are Q, and then update them to N. Oracle's read consistency is wonderful but needs to be handled with care.
For the date-column version, if you don't mind the risk of processing a few rows more than once, just update the table that holds the last-processed date/time immediately before you do your select.
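A sketch of that bookkeeping, with a hypothetical one-row control table run_control(prev_run_date, last_run_date):
-- advance the mark first: a few rows may be read twice, but none are lost
UPDATE run_control
   SET prev_run_date = last_run_date,
       last_run_date = SYSDATE;
COMMIT;

SELECT *
  FROM my_table
 WHERE create_date >= (SELECT prev_run_date FROM run_control);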
If there's not much information in the table, consider making it index-organized.
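For reference, an index-organized version would be declared like this sketch (columns assumed):
CREATE TABLE my_table_iot (
  id          VARCHAR2(100) PRIMARY KEY,
  payload     VARCHAR2(400),
  create_date DATE DEFAULT SYSDATE
) ORGANIZATION INDEX;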
What about using Materialized view logs? You have a lot of options to play with:
SQL> create table test (id_test number primary key, dummy varchar2(1000));
Table created
SQL> create materialized view log on test;
Materialized view log created
SQL> insert into test values (1, 'hello');
1 row inserted
SQL> insert into test values (2, 'bye');
1 row inserted
SQL> select * from mlog$_test;
ID_TEST SNAPTIME$$ DMLTYPE$$ OLD_NEW$$ CHANGE_VECTOR$$
---------- ----------- --------- --------- ---------------------
1 01/01/4000 I N FE
2 01/01/4000 I N FE
SQL> delete from mlog$_test where id_test in (1,2);
2 rows deleted
SQL> insert into test values (3, 'hello');
1 row inserted
SQL> insert into test values (4, 'bye');
1 row inserted
SQL> select * from mlog$_test;
ID_TEST SNAPTIME$$ DMLTYPE$$ OLD_NEW$$ CHANGE_VECTOR$$
---------- ----------- --------- --------- ---------------
3 01/01/4000 I N FE
4 01/01/4000 I N FE
I think this solution should work.
You need to do the following steps.
For the first run, you will have to copy all records, and you need to execute the following query:
insert into new_table(max_rowid) select max(rowid) from yourtable;
The next time you want to get only the newly inserted values, you can do it by executing the following command:
Select * from yourtable where rowid > (select max_rowid from new_table);
Once you are done processing the above query, simply truncate new_table and insert max(rowid) from yourtable again.
I think this should work, and it would be the fastest solution.
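Spelled out, the per-run bookkeeping would look like this (same assumed table names as above):
truncate table new_table;
insert into new_table(max_rowid) select max(rowid) from yourtable;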

Need index with sys_connect_by_path function? How to emulate it?

I have a self referencing table in Oracle 9i, and a view that gets data from it:
CREATE OR REPLACE VIEW config AS
SELECT c.node_id,
c.parent_node_id,
c.config_key,
c.config_value,
(SELECT c2.config_key
FROM vera.config_tab c2
WHERE c2.node_id = c.parent_node_id) AS parent_config_key,
sys_connect_by_path(config_key, '.') path,
sys_connect_by_path(config_key, '->') php_notation
FROM config_tab c
CONNECT BY c.parent_node_id = PRIOR c.node_id
START WITH c.parent_node_id IS NULL
ORDER BY LEVEL DESC
The table stores configuration for a PHP application. Now I need to use the same config in an Oracle view.
I would like to select some values from the view by path, but unfortunately this takes 0.15 s, which is an unacceptable cost.
SELECT * FROM some_table
WHERE some_column IN (
SELECT config_value FROM config_tab WHERE path = 'a.path.to.config'
)
At first I thought of a function-based index on sys_connect_by_path, but that is impossible, as the expression also needs the CONNECT BY clause.
Any suggestions on how I can emulate an index on the path column of the config view?
If your data doesn't change frequently in the config_tab, you could use a materialized view with the same query as your view. You could then index the path column of your materialized view.
CREATE MATERIALIZED VIEW config
REFRESH COMPLETE ON DEMAND
AS <your_query>;
CREATE INDEX ix_config_path ON config (path);
Since this is a complex query, you would need to do a full refresh of your materialized view every time the base table is updated so that the data in the MV doesn't become stale.
Update
Your path column will be defined as a VARCHAR2(4000). You could limit the size of this column in order to index it: in your query, replace sys_connect_by_path(...) with SUBSTR(sys_connect_by_path(...), 1, 1000), for example.
You won't be able to use REFRESH ON COMMIT on a complex MV, and a simple trigger won't work. You would have to modify the code that updates your base table to include a refresh somehow; I don't know whether that is practical in your environment.
You could also use a trigger that submits a job to refresh the MV. The job will execute once you commit (this is a feature of dbms_job). This is more complex, since you will have to make sure you only submit the job once per transaction (using a package variable, for example). Again, this is only practical if you don't update the base table frequently.
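A minimal sketch of that trigger-plus-job idea; the once-per-transaction guard described above is omitted for brevity:
CREATE OR REPLACE TRIGGER trg_config_tab_mv
AFTER INSERT OR UPDATE OR DELETE ON config_tab
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  -- dbms_job submissions are transactional: the job runs only after COMMIT
  DBMS_JOB.SUBMIT(
    job  => l_job,
    what => 'DBMS_MVIEW.REFRESH(''CONFIG'', ''C'');');
END;
/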

ORA-30926: unable to get a stable set of rows in the source tables

I am getting
ORA-30926: unable to get a stable set of rows in the source tables
in the following query:
MERGE INTO table_1 a
USING
(SELECT a.ROWID row_id, 'Y'
FROM table_1 a ,table_2 b ,table_3 c
WHERE a.mbr = c.mbr
AND b.head = c.head
AND b.type_of_action <> '6') src
ON ( a.ROWID = src.row_id )
WHEN MATCHED THEN UPDATE SET in_correct = 'Y';
I've checked: table_1 has data, and I've also run the inner query (src) on its own, which returns data as well.
Why does this error occur, and how can it be resolved?
This is usually caused by duplicates in the query specified in the USING clause. Here it probably means that TABLE_1 is a parent table, so the same ROWID is returned several times.
You could quickly solve the problem by using a DISTINCT in your query (in fact, since 'Y' is a constant value, you don't even need to put it in the query).
Assuming your query is correct (don't know your tables) you could do something like this:
MERGE INTO table_1 a
USING
(SELECT distinct a.ROWID row_id
FROM table_1 a ,table_2 b ,table_3 c
WHERE a.mbr = c.mbr
AND b.head = c.head
AND b.type_of_action <> '6') src
ON ( a.ROWID = src.row_id )
WHEN MATCHED THEN UPDATE SET in_correct = 'Y';
You're probably trying to update the same row of the target table multiple times. I just encountered the same problem in a MERGE statement I developed. Make sure your update does not touch the same record more than once during the execution of the MERGE.
A further clarification to the use of DISTINCT to resolve error ORA-30926 in the general case:
You need to ensure that the set of data specified by the USING() clause has no duplicate values of the join columns, i.e. the columns in the ON() clause.
In the OP's example, where the USING clause selects only a key, it was sufficient to add DISTINCT to it. In the general case, however, the USING clause may select a combination of key columns to match on and attribute columns to be used in the UPDATE ... SET clause. Adding DISTINCT to the USING clause will then still allow different update rows for the same keys, in which case you will still get the ORA-30926 error.
This is an elaboration of DCookie's answer and point 3.1 in Tagar's answer, which from my experience may not be immediately obvious.
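A small demo of that general case: with two different new values for the same key, DISTINCT cannot help and the MERGE still raises ORA-30926 (table names are hypothetical):
CREATE TABLE t_target  (id NUMBER PRIMARY KEY, val VARCHAR2(10));
CREATE TABLE t_staging (id NUMBER, val VARCHAR2(10));

INSERT INTO t_target  VALUES (1, 'old');
INSERT INTO t_staging VALUES (1, 'A');
INSERT INTO t_staging VALUES (1, 'B');

-- (1,'A') and (1,'B') are distinct rows, so DISTINCT keeps both and the
-- same target row would be updated twice, raising ORA-30926
MERGE INTO t_target t
USING (SELECT DISTINCT id, val FROM t_staging) s
ON (t.id = s.id)
WHEN MATCHED THEN UPDATE SET t.val = s.val;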
How to Troubleshoot ORA-30926 Errors? (Doc ID 471956.1)
1) Identify the failing statement
alter session set events '30926 trace name errorstack level 3';
or
alter system set events '30926 trace name errorstack off';
and watch for .trc files in UDUMP when it occurs.
2) Having found the SQL statement, check if it is correct (perhaps using explain plan or tkprof to check the query execution plan) and analyze or compute statistics on the tables concerned if this has not recently been done. Rebuilding (or dropping/recreating) indexes may help too.
3.1) Is the SQL statement a MERGE?
Evaluate the data returned by the USING clause to ensure that there are no duplicate values in the join. Modify the merge statement to include a deterministic WHERE clause.
3.2) Is this an UPDATE statement via a view?
If so, try populating the view result into a table and try updating the table directly.
3.3) Is there a trigger on the table? Try disabling it to see if it still fails.
3.4) Does the statement contain a non-mergeable view in an 'IN-Subquery'? This can result in duplicate rows being returned if the query has a "FOR UPDATE" clause. See Bug 2681037
3.5) Does the table have unused columns? Dropping these may prevent the error.
4) If modifying the SQL does not cure the error, the issue may be with the table, especially if there are chained rows.
4.1) Run the 'ANALYZE TABLE VALIDATE STRUCTURE CASCADE' statement on all tables used in the SQL to see if there are any corruptions in the table or its indexes.
4.2) Check for, and eliminate, any CHAINED or migrated ROWS on the table. There are ways to minimize this, such as the correct setting of PCTFREE.
Use Note 122020.1 - Row Chaining and Migration
4.3) If the table is additionally Index Organized, see:
Note 102932.1 - Monitoring Chained Rows on IOTs
I had this error today on a 12c instance, and none of the existing answers fit (no duplicates, no non-deterministic expressions in the WHERE clause). My case was related to the other possible cause of the error, according to Oracle's message text (emphasis below):
ORA-30926: unable to get a stable set of rows in the source tables
Cause: A stable set of rows could not be got because of large dml activity or a non-deterministic where clause.
The merge was part of a larger batch, and was executed on a live database with many concurrent users. There was no need to change the statement. I just committed the transaction before the merge, then ran the merge separately, and committed again. So the solution was found in the suggested action of the message:
Action: Remove any non-deterministic where clauses and reissue the dml.
SQL Error: ORA-30926: unable to get a stable set of rows in the source tables
30926. 00000 - "unable to get a stable set of rows in the source tables"
*Cause: A stable set of rows could not be got because of large dml
activity or a non-deterministic where clause.
*Action: Remove any non-deterministic where clauses and reissue the dml.
This error occurred for me because of duplicate records (16K of them).
When I deduplicated the rows, it worked, but when I tried the merge without deduplication, the same problem occurred again.
The second time it was due to a missing commit: if no commit is done after the MERGE, the same error is shown.
Without deduplication, the query works if a commit is issued after each merge operation.
I was not able to resolve this after several hours. Eventually I just did a select with the two tables joined, created an extract, and created individual SQL UPDATE statements for the 500 rows in the table. Ugly, but it beats spending hours trying to get a query to work.
As someone explained earlier, your MERGE statement probably tries to update the same row more than once, and that does not work (it could cause ambiguity).
Here is one simple example: a MERGE that tries to mark some products as found when they match the given search patterns:
CREATE TABLE patterns(search_pattern VARCHAR2(20));
INSERT INTO patterns(search_pattern) VALUES('Basic%');
INSERT INTO patterns(search_pattern) VALUES('%thing');
CREATE TABLE products (id NUMBER,name VARCHAR2(20),found NUMBER);
INSERT INTO products(id,name,found) VALUES(1,'Basic instinct',0);
INSERT INTO products(id,name,found) VALUES(2,'Basic thing',0);
INSERT INTO products(id,name,found) VALUES(3,'Super thing',0);
INSERT INTO products(id,name,found) VALUES(4,'Hyper instinct',0);
MERGE INTO products p USING
(
SELECT search_pattern FROM patterns
) o
ON (p.name LIKE o.search_pattern)
WHEN MATCHED THEN UPDATE SET p.found=1;
SELECT * FROM products;
If the patterns table contains the Basic% and Super% patterns, then the MERGE works and the first three products will be updated (found). But if the patterns table contains the Basic% and %thing patterns, then the MERGE does NOT work, because it would try to update the second product twice, and that causes the problem. MERGE does not work if some records would be updated more than once. You might ask: why not just update it twice!?
In the first scenario, the first update and the second update happen to write the same value (1), but only by accident. Now look at this scenario:
CREATE TABLE patterns(code CHAR(1),search_pattern VARCHAR2(20));
INSERT INTO patterns(code,search_pattern) VALUES('B','Basic%');
INSERT INTO patterns(code,search_pattern) VALUES('T','%thing');
CREATE TABLE products (id NUMBER,name VARCHAR2(20),found CHAR(1));
INSERT INTO products(id,name,found) VALUES(1,'Basic instinct',NULL);
INSERT INTO products(id,name,found) VALUES(2,'Basic thing',NULL);
INSERT INTO products(id,name,found) VALUES(3,'Super thing',NULL);
INSERT INTO products(id,name,found) VALUES(4,'Hyper instinct',NULL);
MERGE INTO products p USING
(
SELECT code,search_pattern FROM patterns
) s
ON (p.name LIKE s.search_pattern)
WHEN MATCHED THEN UPDATE SET p.found=s.code;
SELECT * FROM products;
Now the first product name matches the Basic% pattern, so it will be updated with code B, but the second product matches both patterns and cannot be updated with both codes B and T at the same time (ambiguity)!
That's why the DB engine complains. Don't blame it! It knows what it is doing! ;-)
