Deadlock in SELECT FOR UPDATE query - oracle

I have a table, say
TAB1
ID, TARGET, STATE, NEXT
Column ID is the primary key.
The query that is showing the deadlock is similar to this:
SELECT *
FROM TAB1
WHERE NEXT = (SELECT MIN(NEXT) FROM TAB1 WHERE TARGET=? AND STATE=?) FOR UPDATE
I did an explain plan and I see something like this:
---------------------------------------------------------------------------------------
| Id  | Operation              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |      |     1 |  8095 |     6   (0)| 00:00:01 |
|   1 |  FOR UPDATE            |      |       |       |            |          |
|   2 |   BUFFER SORT          |      |       |       |            |          |
|*  3 |    TABLE ACCESS FULL   | TAB1 |     1 |  8095 |     3   (0)| 00:00:01 |
|   4 |     SORT AGGREGATE     |      |     1 |  2083 |            |          |
|*  5 |      TABLE ACCESS FULL | TAB1 |     1 |  2083 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Since the query is doing TABLE ACCESS FULL twice, I suspect that two sessions executing the same query will access the rows in different orders.
Can indexing the columns help prevent the deadlock? Say, creating an index on NEXT? Or by changing the primary key to a non-clustered key? Note: normally, the table will have at most 1000 rows.

Adding a non-clustered index on the NEXT column would indeed boost your performance and reduce your deadlock issues.
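A minimal sketch of that suggestion (the index names are assumptions; the answer above only specifies the NEXT column):

-- Index on NEXT, as suggested above.
CREATE INDEX tab1_next_ix ON tab1 (next);

-- A composite index covering the subquery's predicates is another option worth testing;
-- the column order here is an assumption, not part of the answer.
CREATE INDEX tab1_target_state_next_ix ON tab1 (target, state, next);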

Why are there both filter and access predicates on the same index in this execution plan?

Considering the execution plan for this query:
SQL_ID 1m5r644say02b, child number 0
-------------------------------------
select * from hr.employees where department_id = 80 intersect select *
from hr.employees where first_name like 'A%'
Plan hash value: 1738366820
------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 4 |00:00:00.01 | 8 | | | |
| 1 | INTERSECTION | | 1 | | 4 |00:00:00.01 | 8 | | | |
| 2 | SORT UNIQUE | | 1 | 34 | 34 |00:00:00.01 | 6 | 6144 | 6144 | 6144 (0)|
|* 3 | TABLE ACCESS FULL | EMPLOYEES | 1 | 34 | 34 |00:00:00.01 | 6 | | | |
| 4 | SORT UNIQUE | | 1 | 11 | 10 |00:00:00.01 | 2 | 2048 | 2048 | 2048 (0)|
| 5 | TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES | 1 | 11 | 10 |00:00:00.01 | 2 | | | |
|* 6 | INDEX SKIP SCAN | EMP_NAME_IX | 1 | 11 | 10 |00:00:00.01 | 1 | | | |
------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("DEPARTMENT_ID"=80)
6 - access("FIRST_NAME" LIKE 'A%')
filter("FIRST_NAME" LIKE 'A%')
The execution plan has both access and filter predicates with the same 'A%' predicate here on the EMP_NAME_IX index. But shouldn't the access predicate be enough here, as they both filter the same rows? Why did it perform the additional filter predicate?
Is there a general rule for when both access and filter are the same? Based on GV$SQL_PLAN, when an operation has either an access or a filter predicate, they are only equal about 1% of the time. And this situation only happens with operations and options like INDEX (FULL/RANGE/SKIP/UNIQUE) and SORT (JOIN/UNIQUE).
select *
from gv$sql_plan
where access_predicates = filter_predicates;
Presumably you have an index on hr.employees that includes the first_name column. But you are selecting * from hr.employees, so the rows obtained from the index have to be traced back (i.e. joined) to the table.
For conceptual understanding it helps to think of an index as a plain table with a foreign key to the original table's primary key. When using the index helps, these two tables are joined. The index is used alone when it contains all the needed columns.
In this case we assume a join is required, since you are selecting *. When accessing the hr.employees table for the second query of the intersect, because its WHERE clause filters on an indexed column, a join to the index is performed prior to filtering.
The first occurrence of "FIRST_NAME" LIKE 'A%' is the reason the use of the index is decided. The second occurrence is then the actual filtering. Filtering happens only once, not twice.
These are listed as distinct operations as deciding to use the index (and therefore perform the join) has its own costs.
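For contrast, a hedged sketch: when a query needs only the indexed column, the index can be used alone and no join back to the table is needed (the exact plan depends on version and statistics):

-- Only first_name is selected, so an index containing first_name can answer the
-- query by itself, without a TABLE ACCESS BY INDEX ROWID step.
SELECT first_name
FROM   hr.employees
WHERE  first_name LIKE 'A%';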

Limit rows examined in Oracle

My table has millions of records. In this query below, can I make Oracle 12c examine the first X rows only instead of doing a full table scan?
The value of X, I imagine, should be offset + fetch next, so in this case 15.
SELECT * FROM table OFFSET 5 ROWS FETCH NEXT 10 ROWS ONLY;
Thanks in advance
Edit 1
These are the tables involved, and this is the actual query.
Orders - This table has 113k records in my test DB (and over 8 million in the prod DB, as my original question mentioned)
--------------------------
| Id | SKUField1|SKUField2|
--------------------------
| 1 | Value1 | Value2 |
| 2 | Value2 | Value2 |
| 3 | Value1 | Value3 |
--------------------------
Products - This table has 2 million records in my test DB (the prod DB is similar)
---------------
| PId| SKU_NUM|
---------------
| 1 | Value1 |
| 2 | Value2 |
| 3 | Value3 |
---------------
Note that the values of Orders.SKUField1 and Orders.SKUField2 come from the Products.SKU_NUM values.
Actual Query:
SELECT /*+ gather_plan_statistics */ Id, PId, SKUField1, SKUField2, SKU_NUM
FROM Orders
LEFT JOIN (
    -- this inner query reduces the size of Products from 2 million rows down to 1462 rows
    select * from Products where SKU_NUM in (
        select SKUField1 from Orders
    )
) p1 ON SKUField1 = p1.SKU_NUM
LEFT JOIN (
    -- this inner query reduces the size of Products from 2 million rows down to 459 rows
    select * from Products where SKU_NUM in (
        select SKUField2 from Orders
    )
) p4 ON SKUField2 = p4.SKU_NUM
OFFSET 5 ROWS FETCH NEXT 10 ROWS ONLY
Execution Plan:
--------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Time | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 10 |00:00:00.06 | 8013 | | | |
|* 1 | VIEW | | 1 | 00:00:01 | 10 |00:00:00.06 | 8013 | | | |
|* 2 | WINDOW NOSORT STOPKEY | | 1 | 00:00:01 | 15 |00:00:00.06 | 8013 | 27M| 1904K| |
|* 3 | HASH JOIN RIGHT OUTER | | 1 | 00:00:01 | 15 |00:00:00.06 | 8013 | 1162K| 1162K| 1344K (0)|
| 4 | VIEW | | 1 | 00:00:01 | 1462 |00:00:00.04 | 6795 | | | |
| 5 | NESTED LOOPS | | 1 | 00:00:01 | 1462 |00:00:00.04 | 6795 | | | |
| 6 | NESTED LOOPS | | 1 | 00:00:01 | 1462 |00:00:00.04 | 5333 | | | |
| 7 | SORT UNIQUE | | 1 | 00:00:01 | 1469 |00:00:00.04 | 3010 | 80896 | 80896 |71680 (0)|
| 8 | TABLE ACCESS FULL | Orders | 1 | 00:00:01 | 113K|00:00:00.02 | 3010 | | | |
|* 9 | INDEX UNIQUE SCAN | UIX_Product_SKU_NUM | 1469 | 00:00:01 | 1462 |00:00:00.01 | 2323 | | | |
| 10 | TABLE ACCESS BY INDEX ROWID | Products | 1462 | 00:00:01 | 1462 |00:00:00.01 | 1462 | | | |
|* 11 | HASH JOIN RIGHT OUTER | | 1 | 00:00:01 | 15 |00:00:00.02 | 1218 | 1142K| 1142K| 1335K (0)|
| 12 | VIEW | | 1 | 00:00:01 | 459 |00:00:00.02 | 1213 | | | |
| 13 | NESTED LOOPS | | 1 | 00:00:01 | 459 |00:00:00.02 | 1213 | | | |
| 14 | NESTED LOOPS | | 1 | 00:00:01 | 459 |00:00:00.02 | 754 | | | |
| 15 | SORT UNIQUE | | 1 | 00:00:01 | 462 |00:00:00.02 | 377 | 24576 | 24576 |22528 (0)|
| 16 | INDEX FAST FULL SCAN | Orders_SKUField2_IDX6 | 1 | 00:00:01 | 113K|00:00:00.01 | 377 | | | |
|* 17 | INDEX UNIQUE SCAN | UIX_Product_SKU_NUM | 462 | 00:00:01 | 459 |00:00:00.01 | 377 | | | |
| 18 | TABLE ACCESS BY INDEX ROWID| Products | 459 | 00:00:01 | 459 |00:00:00.01 | 459 | | | |
| 19 | TABLE ACCESS FULL | Orders | 1 | 00:00:01 | 15 |00:00:00.01 | 5 | | | |
--------------------------------------------------------------------------------------------------------------------------------------------------
Hence, based on the "A-Rows" column values for row Ids 8 and 16 in the execution plan, it seems like there are full table scans on the Orders table (though row Id 16 at least seems to be using an index). So my question is: is it true that there is a full table scan on the Orders table even though I am using OFFSET/FETCH NEXT?
Although your FETCH clause may use a full table scan, Oracle will still only fetch the first X rows from the table.
In the following example, the "TABLE ACCESS FULL" operation does start to read the entire table, but it gets cut off part of the way through by the "WINDOW NOSORT STOPKEY" operation. Not all full table scans actually scan the full table. You would see similar behavior if your code ended with WHERE ROWNUM <= 50.
CREATE TABLE some_table AS SELECT * FROM all_objects;
EXPLAIN PLAN FOR SELECT * FROM some_table OFFSET 5 ROWS FETCH NEXT 10 ROWS ONLY;
SELECT * FROM TABLE(dbms_xplan.display);
Plan hash value: 2559837639
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 15 | 7410 | 2 (0)| 00:00:01 |
|* 1 | VIEW | | 15 | 7410 | 2 (0)| 00:00:01 |
|* 2 | WINDOW NOSORT STOPKEY| | 15 | 2010 | 2 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | SOME_TABLE | 15 | 2010 | 2 (0)| 00:00:01 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=15 AND
"from$_subquery$_002"."rowlimit_$$_rownumber">5)
2 - filter(ROW_NUMBER() OVER ( ORDER BY NULL )<=15)
The performance implications get more complicated if you also want to order the results. If that is the case, you may want to post the full query and execution plan.
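For example, a hedged sketch of the same fetch with an ORDER BY added (assuming there is no suitable index on the sort column, Oracle has to read every row before it can apply the limit, keeping only the top rows in the window sort):

-- OBJECT_NAME comes from the ALL_OBJECTS copy above; without an index on it,
-- the top-N sort must examine every row even though only 10 are returned.
SELECT * FROM some_table ORDER BY object_name OFFSET 5 ROWS FETCH NEXT 10 ROWS ONLY;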
(EDIT: 2022-09-25)
Yes, there is a full table scan on the ORDERS table happening on line 8 of the execution plan. As you mentioned, you can look at the "A-rows" column to tell what's really happening.
But the other full table scan of ORDERS, on line 19, is not a "full" full table scan. The operation "WINDOW NOSORT STOPKEY" stops that full table scan as soon as the 15 necessary rows are read. So the FETCH syntax is helping at least a little.
Applying a FETCH to a query does not mean that every single table will be limited. That said, in your query it does seem like there ought to be a way to reduce the full table scans. Perhaps an index on SKUField1 would help?
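A minimal sketch of that suggestion (the index name is assumed; whether it actually helps depends on how the optimizer unnests the IN subqueries):

CREATE INDEX orders_skufield1_idx ON orders (skufield1);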
Since Oracle, as far as I know, doesn't provide something like LIMIT or TOP, you can build it yourself like the following.
What is happening here: the inner query orders all the records, and the outer query keeps only the first 10 of them. You can still use other clauses such as WHERE inside the inner query.
SELECT * FROM (
    SELECT * FROM Customers ORDER BY CustomerID
)
WHERE ROWNUM <= 10;
A full article about this topic can be found here: Oracle-Fetch
I am using online Oracle, so you can try it from your end; please let me know if you still have a problem.

Oracle in-memory column store is not improving SELECT query?

I have a SELECT statement
SELECT MIN(C_PRICE), MAX(C_PRICE)
FROM CAR;
I ran the statement and generated an execution plan to look at the cost.
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 3 | 12150 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 3 | | |
| 2 | TABLE ACCESS FULL| CAR | 1800K| 5273K| 12150 (1)| 00:00:01 |
-------------------------------------------------------------------------------
I enabled in-memory for the CAR table after setting the in-memory size to 200M.
ALTER TABLE CAR INMEMORY;
The result
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 3 | 12150 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 3 | | |
| 2 | TABLE ACCESS INMEMORY FULL| CAR | 1800K| 5273K| 12150 (1)| 00:00:01 |
----------------------------------------------------------------------------------------
My question is: why isn't the query improved after altering the table to be in-memory? The plan clearly shows that it is accessing the table via in-memory, yet the cost is unchanged at 12150. I thought in-memory population would improve query processing and thus reduce the cost?
Two things you should look at. First, is the table actually populated in the IM column store (check v$im_segments)? Second, what was the time difference between the two queries, and where was the time spent? SQL Monitor active reports are excellent for determining this.
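A quick sketch of that first check (column list abbreviated; note the view only shows segments that have started populating):

-- If CAR is absent here, or BYTES_NOT_POPULATED is large, the table has not been
-- (fully) populated into the IM column store yet - e.g. it has not been scanned
-- since being marked INMEMORY - so the in-memory scan cannot be faster yet.
SELECT segment_name, populate_status, bytes_not_populated
FROM   v$im_segments
WHERE  segment_name = 'CAR';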

Materialized view is not showing up in plan table output from explain plan statement spool?

I have created a materialized view on SH2, but when I run my explain plan statement I can't see the materialized view being used in the plan table output. I'm not sure if it's because it's a more complex materialized view with additional key columns to join to other dimensions, so I'm a bit confused why the materialized view isn't being utilized, as I'm referring to the SH2 prefix in my select query.
CREATE MATERIALIZED VIEW fweek_pscat_sales_mv
PCTFREE 5
BUILD IMMEDIATE
REFRESH COMPLETE
ENABLE QUERY REWRITE
AS
SELECT t.week_ending_day
, p.prod_subcategory
, sum(s.amount_sold) AS Money
, s.channel_id
, s.promo_id
FROM sales s
, times t
, products p
WHERE s.time_id = t.time_id
AND s.prod_id = p.prod_id
GROUP BY t.week_ending_day
, p.prod_subcategory
, s.channel_id
, s.promo_id;
CREATE BITMAP INDEX FW_PSC_S_MV_SUBCAT_BIX
ON fweek_pscat_sales_mv(prod_subcategory);
CREATE BITMAP INDEX FW_PSC_S_MV_CHAN_BIX
ON fweek_pscat_sales_mv(channel_id);
CREATE BITMAP INDEX FW_PSC_S_MV_PROMO_BIX
ON fweek_pscat_sales_mv(promo_id);
CREATE BITMAP INDEX FW_PSC_S_MV_WD_BIX
ON fweek_pscat_sales_mv(week_ending_day);
spool &data_dir.EXP_query_on_SH2_2.txt
alter session set query_rewrite_integrity = TRUSTED;
alter session set query_rewrite_enabled = TRUE;
set timing on
EXPLAIN PLAN FOR
SELECT t.week_ending_day
, p.prod_subcategory
, sum(s.amount_sold) AS Money
, s.channel_id
, s.promo_id
FROM SH2.sales s
, SH2.times t
, SH2.products p
WHERE s.time_id = t.time_id
AND s.prod_id = p.prod_id
GROUP BY t.week_ending_day
, p.prod_subcategory
, s.channel_id
, s.promo_id;
REM Now Let us Display the Output of the Explain Plan
SET pagesize 9999
set linesize 250
set markup html preformat on
select * from table(dbms_xplan.display());
set linesize 80
spool off
----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1016K| 60M| | 17365 (1)| 00:00:01 | | |
| 1 | HASH GROUP BY | | 1016K| 60M| 70M| 17365 (1)| 00:00:01 | | |
|* 2 | HASH JOIN | | 1016K| 60M| | 2178 (1)| 00:00:01 | | |
| 3 | VIEW | index$_join$_003 | 10000 | 224K| | 74 (0)| 00:00:01 | | |
|* 4 | HASH JOIN | | | | | | | | |
| 5 | INDEX FAST FULL SCAN | PRODUCTS_PK | 10000 | 224K| | 41 (0)| 00:00:01 | | |
| 6 | INDEX FAST FULL SCAN | PRODUCTS_PROD_SUBCAT_IX | 10000 | 224K| | 51 (0)| 00:00:01 | | |
|* 7 | HASH JOIN | | 1016K| 37M| | 2101 (1)| 00:00:01 | | |
| 8 | PART JOIN FILTER CREATE | :BF0000 | 1016K| 37M| | 2101 (1)| 00:00:01 | | |
| 9 | TABLE ACCESS FULL | TIMES | 1461 | 23376 | | 13 (0)| 00:00:01 | | |
| 10 | PARTITION RANGE JOIN-FILTER| | 1016K| 22M| | 2086 (1)| 00:00:01 |:BF0000|:BF0000|
| 11 | TABLE ACCESS FULL | SALES | 1016K| 22M| | 2086 (1)| 00:00:01 |:BF0000|:BF0000|
----------------------------------------------------------------------------------------------------------------------------------
Why would you expect a reference to the MV from the baseline query? For that to happen, Oracle would have to compare the query to all MVs to find a match. Even further, it would require every query to be compared to every MV. MVs are typically created to avoid running the baseline query by accessing the MV directly (that is why MVs are the stored results of a query). If you want the MV, just select from it directly.
SELECT week_ending_day
, prod_subcategory
, Money
, channel_id
, promo_id
from fweek_pscat_sales_mv;
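If you do still want automatic query rewrite to kick in, a hedged diagnostic sketch using the standard DBMS_MVIEW.EXPLAIN_REWRITE procedure (assumptions: the REWRITE_TABLE exists, e.g. created via utlxrw.sql, and 'mv_rewrite_check' is just an arbitrary label):

-- Ask Oracle why the original query was not rewritten to use the MV.
BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query        => 'SELECT t.week_ending_day, p.prod_subcategory, SUM(s.amount_sold), s.channel_id, s.promo_id
                     FROM SH2.sales s, SH2.times t, SH2.products p
                     WHERE s.time_id = t.time_id AND s.prod_id = p.prod_id
                     GROUP BY t.week_ending_day, p.prod_subcategory, s.channel_id, s.promo_id',
    mv           => 'FWEEK_PSCAT_SALES_MV',
    statement_id => 'mv_rewrite_check');
END;
/

SELECT message FROM rewrite_table WHERE statement_id = 'mv_rewrite_check';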

Top N query performance when accessing a list of IDs

I have a top N query that is giving me problems.
First of all, I have a query like the following:
select /*+ gather_plan_statistics */ * from
(
select rowid
from payer_subscription ps
where ps.subscription_status = :i_subscription_status
and ps.merchant_id = :merchant_id2
order by transaction_date desc
) where rownum <= :i_rowcount;
This query works well. It can very efficiently find me the top 10 rows for a massive data set, using an index on merchant_id, subscription_status, transaction_date.
-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 10 |00:00:00.01 | 4 |
|* 1 | COUNT STOPKEY | | 1 | | 10 |00:00:00.01 | 4 |
| 2 | VIEW | | 1 | 11 | 10 |00:00:00.01 | 4 |
|* 3 | INDEX RANGE SCAN DESCENDING| SODTEST2_IX | 1 | 100 | 10 |00:00:00.01 | 4 |
-------------------------------------------------------------------------------------------------------
As you can see, the actual rows (A-Rows) at each stage are 10, which is correct.
Now, I have a requirement to get the top N records for a set of merchant_Ids, so if I change the query to include two merchant_ids, the performance tanks:
select /*+ gather_plan_statistics */ * from
(
select rowid
from payer_subscription ps
where ps.subscription_status = :i_subscription_status
and (ps.merchant_id = :merchant_id or
ps.merchant_id = :merchant_id2 )
order by transaction_date desc
) where rownum <= :i_rowcount;
----------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 10 |00:00:00.17 | 178 | | | |
|* 1 | COUNT STOPKEY | | 1 | | 10 |00:00:00.17 | 178 | | | |
| 2 | VIEW | | 1 | 200 | 10 |00:00:00.17 | 178 | | | |
|* 3 | SORT ORDER BY STOPKEY| | 1 | 200 | 10 |00:00:00.17 | 178 | 2048 | 2048 | 2048 (0)|
| 4 | INLIST ITERATOR | | 1 | | 42385 |00:00:00.10 | 178 | | | |
|* 5 | INDEX RANGE SCAN | SODTEST2_IX | 2 | 200 | 42385 |00:00:00.06 | 178 | | | |
----------------------------------------------------------------------------------------------------------------------------
Notice that there are now 42K rows coming out of the two index range scans - Oracle is no longer stopping the index range scan when it reaches 10 rows. What I thought would happen is that Oracle would get at most 10 rows for each merchant_id, knowing that at most 10 rows are to be returned by the query. Then it would sort those 10 + 10 rows and output the top 10 based on the transaction date, but it refuses to do that.
Does anyone know how I can get the performance of the first query when I need to pass a list of merchants into the query? I could probably get the performance using a UNION ALL, but the list of merchants is variable and could be anywhere from 1 or 2 to several hundred.
You can use the --+ use_concat hint to make Oracle execute the query as if it were a UNION ALL.
From the documentation:
The USE_CONCAT hint instructs the optimizer to transform combined OR-conditions in the WHERE clause of a query into a compound query using the UNION ALL set operator. Without this hint, this transformation occurs only if the cost of the query using the concatenations is cheaper than the cost without them. The USE_CONCAT hint overrides the cost consideration.
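A hedged sketch of how the hint might be applied to the query above (the hint placement in the inner query block is an assumption; verify with the actual plan that concatenation occurs):

select /*+ gather_plan_statistics */ * from
(
    select /*+ use_concat */ rowid
    from payer_subscription ps
    where ps.subscription_status = :i_subscription_status
    and (ps.merchant_id = :merchant_id or ps.merchant_id = :merchant_id2)
    order by transaction_date desc
) where rownum <= :i_rowcount;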
There are many cases where use_concat is ignored.
See: MOS Note: USE_CONCAT hint on different versions (Doc ID 259741.1)
I have had success in 10.2.0.4, 11.2.0.1 with OR_EXPAND where USE_CONCAT will not work.
/*+ OR_EXPAND( alias column_name ) */
Documented here:
http://www.hellodba.com/reader.php?ID=199&lang=EN
I'm not sure if this helps, but you can try to replace the OR operator with IN:
and ps.merchant_id IN (:merchant_id, :merchant_id2)
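Applied to the inner query, that would look like this (a sketch of the same query with only the predicate changed):

select rowid
from payer_subscription ps
where ps.subscription_status = :i_subscription_status
and ps.merchant_id IN (:merchant_id, :merchant_id2)
order by transaction_date desc;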
