Transaction isolation level for subqueries - Oracle

I want to remove duplicate rows from the table and found this solution at SO (Removing duplicate rows from table in Oracle)
DELETE FROM your_table -- step 2
WHERE rowid not in
(SELECT MIN(rowid) -- step 1
FROM your_table
GROUP BY column1, column2, column3);
What will happen if rows are inserted after step 1 but before step 2? Will they be deleted as well? If so, what transaction isolation level should I use to avoid that?

From the concepts guide:
In the read committed isolation level, which is the default, every query executed by a transaction sees only data committed before the query—not the transaction—began.
...
A query in a read committed transaction avoids reading data that commits while the query is in progress.
...
A consistent result set is provided for every query, guaranteeing data consistency, with no action by the user. An implicit query, such as a query implied by a WHERE clause in an UPDATE statement, is guaranteed a consistent set of results. However, each statement in an implicit query does not see the changes made by the DML statement itself, but sees the data as it existed before changes were made.
The last paragraph applies to your case as well, just for DELETE rather than UPDATE. Your statement - the delete itself and the subquery are a single statement - is isolated, so it won't be affected by any changes made in other sessions or transactions. You won't 'see' any rows inserted elsewhere while your statement is executing, whether they are committed or not.
So you don't need to change from the default isolation level.
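For completeness, here is a sketch of how you would request transaction-level (rather than statement-level) read consistency if you ever needed it. It is not required for the single-statement delete above, and the table and column names are the placeholders from the question:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- must be the first statement of the transaction
DELETE FROM your_table
WHERE rowid NOT IN (SELECT MIN(rowid)
                    FROM your_table
                    GROUP BY column1, column2, column3);
COMMIT;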

Related

MERGE INTO Performance as table grows

This is a general question about the Oracle MERGE INTO statement with a particular scenario, on Oracle RDBMS 12c.
Daily data will be loaded to StagingTableA - about 10m rows.
This will be MERGEd INTO TableA.
TableA will vary between 0 and 10m rows (matching StagingTableA).
There may be times when TableA will be pruned/emptied and left with 0 rows.
Clearly, when TableA is empty, a straight INSERT will do the job, but the procedure has been written to use a MERGE INTO method to handle all scenarios.
The MERGE .. MATCH is on an indexed column.
My question concerns how the MERGE handles the MATCH in circumstances where TableA starts empty and then grows hugely during the MERGE execution. The MATCH on the indexed column will use a full table scan (FTS), as the stats will show the table has 0 rows.
At some point during the MERGE transaction, this will become inefficient.
Is the MERGE statement clever enough to detect this and change the execution plan, and start using the index instead of the FTS?
If this were done the old way with a CURSOR, UPDATE and INSERT, then we could potentially introduce an ANALYZE at an appropriate point (say after 50,000 rows processed) on TableA to switch to an optimal plan.
I haven't been able to find any documentation dealing with this specific question.
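For concreteness, the statement being discussed has roughly this shape (just a sketch; key_col and val_col are placeholder names, with key_col standing in for the indexed MATCH column):
MERGE INTO TableA t
USING StagingTableA s
   ON (t.key_col = s.key_col)          -- the indexed MATCH column
WHEN MATCHED THEN
  UPDATE SET t.val_col = s.val_col
WHEN NOT MATCHED THEN
  INSERT (key_col, val_col)
  VALUES (s.key_col, s.val_col);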
Hopefully you've got a UNIQUE index on that table that is based on the incoming data. If I were you, rather than using a simple MERGE I'd:
Mark all indexes on the table as UNUSABLE, except for the unique index.
INSERT all records
Catch the DUP_VAL_ON_INDEX exception at the time of the INSERT and issue the appropriate UPDATE.
DELETE processed rows from the staging table.
Commit every N records (1000? 10000? 100000? Your choice...), calling DBMS_STATS.GATHER_TABLE_STATS for the table you've inserted into after each COMMIT.
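A rough PL/SQL sketch of that loop, assuming placeholder names (key_col for the unique key column, val_col for the payload) and an arbitrary batch size:
DECLARE
  c_batch_size CONSTANT PLS_INTEGER := 10000;  -- commit interval; tune to taste
  l_processed  PLS_INTEGER := 0;
BEGIN
  FOR r IN (SELECT key_col, val_col FROM StagingTableA) LOOP
    BEGIN
      INSERT INTO TableA (key_col, val_col) VALUES (r.key_col, r.val_col);
    EXCEPTION
      WHEN DUP_VAL_ON_INDEX THEN
        UPDATE TableA SET val_col = r.val_col WHERE key_col = r.key_col;
    END;

    DELETE FROM StagingTableA WHERE key_col = r.key_col;

    l_processed := l_processed + 1;
    IF MOD(l_processed, c_batch_size) = 0 THEN
      COMMIT;
      -- refresh stats so the optimizer can move off the 0-row plan
      DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'TABLEA');
    END IF;
  END LOOP;
  COMMIT;
END;
/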
Best of luck.

Statement-level trigger in Oracle

I can't fully understand how a statement-level trigger works. It executes once per transaction, right? Say I have an AFTER INSERT trigger that updates one specific column when a condition is met (e.g. for the status column: UPDATE table_name SET status = 'Single' WHERE COLUMN IS NULL).
Is only the newly inserted data affected, or every row in the table that has a null value in the status column? I'd be glad to hear your thoughts on this.
A statement level trigger will fire once after the triggering statement has run, unlike a row level trigger which fires for each affected row.
After statement triggers are generally used to do processing of the set of data - e.g. logging into a table, or running some post-statement processing (usually a procedure).
If you want to update a value in every affected row, I would advise using a before row-level trigger instead. As written, the UPDATE statement in your question would affect all rows where the COLUMN column is null, not just the newly inserted ones.
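For example, a before-row trigger along these lines (the table and column names are assumed from your description) touches only the rows being inserted:
CREATE OR REPLACE TRIGGER trg_default_status
BEFORE INSERT ON table_name
FOR EACH ROW
BEGIN
  -- only the row currently being inserted is modified; existing rows are untouched
  IF :NEW.status IS NULL THEN
    :NEW.status := 'Single';
  END IF;
END;
/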
Whether a trigger is actually the right thing to use is debatable. However, I would recommend you look at the documentation and also this Oracle-base article to gain a better understanding of how triggers work and when you might use them.

Can this update cause a deadlock in Oracle 10g

I came across this update statement and was wondering how it works internally. It updates a column that is also used in the WHERE clause of the update.
Should this ideally be done in two steps, or does Oracle take care of it automatically?
UPDATE TBL1 SET DATE1 = DATE2 WHERE DATE2 > DATE1
Oracle takes care of it automatically. Effectively when it runs the update, Oracle performs the following steps:
Queries the table, i.e. evaluates the WHERE clause predicate for each row in the table.
For each row returned by step 1, updates it as per the SET clause. The column values used are those that were fetched in step 1.
For this reason, it is perfectly possible to run an update like this which swaps the values of columns:
UPDATE TBL1 SET DATE1=DATE2, DATE2=DATE1 WHERE DATE2 > DATE1;
The update might be blocked if another session tries to update or delete one of the same rows. Deadlocks are possible but Oracle automatically resolves these by rolling back one of the sessions and raising an exception.
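To illustrate how such a deadlock could arise, here is a purely hypothetical two-session sequence (the ID column is invented for the example):
-- Session 1:
UPDATE TBL1 SET DATE1 = DATE2 WHERE ID = 1;   -- locks row 1
-- Session 2:
UPDATE TBL1 SET DATE1 = DATE2 WHERE ID = 2;   -- locks row 2
-- Session 1:
UPDATE TBL1 SET DATE1 = DATE2 WHERE ID = 2;   -- blocks, waiting for session 2
-- Session 2:
UPDATE TBL1 SET DATE1 = DATE2 WHERE ID = 1;   -- deadlock: Oracle detects it and raises ORA-00060 in one of the sessions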

Oracle Global Temp table issue

I am using JdbcTemplate and an Oracle stored procedure. In the stored procedure I have a select query with an IN clause like 'IN (SELECT ID FROM GLOBAL_TEMP_TABLE)'.
And the definition of temp table is ON COMMIT PRESERVE ROWS.
However, when I call the stored procedure from Java it gives me more records than I expected; it seems the temp table is storing data from a previous session. Need your help.
Without looking at any code, it is hard to tell.
Still, the symptoms you describe might simply be caused by the fact that you are still accessing your data from the same session.
From Oracle-Base: Global Temporary Tables (GTT):
The ON COMMIT DELETE ROWS clause indicates that the data should be deleted at the end of the transaction.
The ON COMMIT PRESERVE ROWS clause indicates that rows should be preserved until the end of the session.
That is, in your case, you need to close the session to clear the data.
You cannot access data from a previous or other session when you select rows from a global temporary table.
There are two possibilities:
Your session is not new.
It's not a temporary table.
Keep in mind if you use ON COMMIT PRESERVE ROWS you have to delete the rows yourself. The data is kept until the session ends.
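A minimal sketch of the two variants, using the table name from your question plus an invented column and a second invented table name for contrast:
-- rows survive COMMIT and disappear only when the session ends
CREATE GLOBAL TEMPORARY TABLE global_temp_table (id NUMBER)
  ON COMMIT PRESERVE ROWS;

-- rows disappear at the end of every transaction
CREATE GLOBAL TEMPORARY TABLE global_temp_table_tx (id NUMBER)
  ON COMMIT DELETE ROWS;

-- with PRESERVE ROWS, clear leftovers yourself if the session is reused
DELETE FROM global_temp_table;
-- or: TRUNCATE TABLE global_temp_table;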
To find out whether your session is still the same, run:
select sid, serial#, logon_time from v$session
and write it to a log file.

Deleting duplicate rows in Oracle

Shouldn't the following query work fine for deleting duplicate rows in Oracle?
SQL> delete from sessions o
where 1<=(select count(*)
from sessions i
where i.id!=o.id
and o.data=i.data);
It seems to delete all the duplicated rows! (I wish to keep one, though.)
Your statement doesn't work because it deletes every member of each duplicate group, not just the extras. For each row o, the correlated subquery counts the other rows that share the same DATA but have a different ID. In a group of duplicates, every row has at least one such sibling, so every one of them satisfies the 1 <= COUNT(*) condition and is deleted; only rows whose DATA value is unique survive.
There's nothing in the statement to single out which duplicate to keep, as there is in the solution Ollie has linked to, for example.
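A sketch of a variant that does keep one row per DATA value, mirroring the MIN(rowid) approach from the question at the top of this page:
DELETE FROM sessions
WHERE rowid NOT IN (SELECT MIN(rowid)
                    FROM sessions
                    GROUP BY data);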
