How to generate a deadlock scenario in Oracle?

I've been stuck on a lab question for the last four hours because I generally don't understand what it is asking for, even after extensive research and flipping through endless slides. EDIT: the prologue in question is a dbcreate.sql, which creates a series of tables, and then a dbload.sql, which inserts values into those tables.
The given question is:
Implement in PL/SQL the database transactions that operate on the sample database created in the Prologue step and such that their concurrent processing leads to a deadlock situation. Save the transactions in SQL scripts solution1-1.sql and solution1-2.sql.
I feel someone on this site could explain this in a way I can understand! Thank you for your help
EDIT: there's a second part to this question:
Simulate a concurrent processing of the transaction such that it will lead to a deadlock.
To simulate a concurrent processing of the database transactions use a PL/SQL procedure
SLEEP from the standard PL/SQL package DBMS_LOCK. By "simulation of concurrent
execution" we mean that the first transaction does a bit of work, then it is delayed for a
certain period of time and in the same period of time another transaction is processed.
Finally, after a delay the first transaction completes its job.

The simplest way (untested code):
CREATE OR REPLACE PROCEDURE doUpd ( id1 IN NUMBER, id2 IN NUMBER ) IS
BEGIN
   UPDATE tableA SET colA = 'upd1' WHERE id = id1;
   dbms_lock.sleep(20);  -- hold the first row lock while the other session gets going
   UPDATE tableA SET colA = 'upd2' WHERE id = id2;
END;
/
Then run in session 1:
execute doUpd( 21, 12 );
Immediately afterwards, in session 2:
execute doUpd( 12, 21 );
What we're doing is updating the same 2 rows in each session, but in a different order.
Normally the time between the updates would be small enough to avoid a deadlock. But since we want to simulate a deadlock, we add a delay so that we have time to fire off the updates in the other session.
In the example above, session 1 will update the row with id = 21, then wait for 20 seconds, then update the row with id = 12.
Session 2 will update the row with id = 12, then wait for 20 seconds, then update the row with id = 21. If session 2 starts whilst session 1 is 'sleeping', we should get a deadlock.
In time order, provided you are quick with starting the session 2 job, you should be aiming for this:
Session 1: UPDATE tableA set colA = 'upd1' where id = 21;
Session 1: sleep 20
Session 2: UPDATE tableA set colA = 'upd1' where id = 12;
Session 2: sleep 20
Session 1: UPDATE tableA set colA = 'upd2' where id = 12; -- blocked until session 2 commit/rollback
Session 2: UPDATE tableA set colA = 'upd2' where id = 21; -- blocked until session 1 commit/rollback
Session 1 and 2 are now deadlocked.
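To map this onto the lab deliverables, each script could simply contain one of the two transactions. A minimal sketch, assuming a tableA like the one above (adapt the table and column names to whatever your dbcreate.sql actually creates):
-- solution1-1.sql (run in session 1 first)
BEGIN
   UPDATE tableA SET colA = 's1' WHERE id = 21;  -- lock row 21
   DBMS_LOCK.SLEEP(20);                          -- give session 2 time to lock row 12
   UPDATE tableA SET colA = 's1' WHERE id = 12;  -- waits on session 2's lock
   COMMIT;
END;
/
-- solution1-2.sql (run in session 2 while session 1 sleeps)
BEGIN
   UPDATE tableA SET colA = 's2' WHERE id = 12;  -- lock row 12
   DBMS_LOCK.SLEEP(20);                          -- overlap the two transactions
   UPDATE tableA SET colA = 's2' WHERE id = 21;  -- closes the cycle: ORA-00060
   COMMIT;
END;
/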

For the first part of your question you can also use this example, without the DBMS_LOCK package:
CREATE TABLE T1 (c INTEGER, v INTEGER);
INSERT INTO T1 VALUES (1, 10);
INSERT INTO T1 VALUES (2, 10);
COMMIT;
Open session 1
Open session 2
In session 1 execute update t1 set v = v + 10 where c = 1;
In session 2 execute update t1 set v = v + 10 where c = 2;
In session 1 execute update t1 set v = v + 10 where c = 2;
In session 2 execute update t1 set v = v + 10 where c = 1;
Session 1 raises an ORA-00060: deadlock detected while waiting for resource. (Note that Oracle rolls back only the statement that hit the deadlock, not the whole transaction; the session still has to commit or roll back itself.)

Related

Pessimistic lock per page on 2 instances

I have a scheduler which runs on 2 instances; the purpose is to allow the two instances to process the task in parallel to gain time/performance.
I'm using a pessimistic lock on the Oracle DB, with SKIP LOCKED, in order not to block the parallel process and to allow locking per page, so that each instance processes a different page.
@Transactional
@Scheduled(cron = ...)
public void process() {
    Pageable pageRequest = PageRequest.of(0, 100);
    Page<Event> pages = eventRepository.findAll(pageRequest);
    while (!pages.isLast()) {
        pageRequest.next();
        pages.forEach(this::processEvent);
        pages = eventRepository.findAll(pageRequest);
    }
    pages.forEach(this::processEvent);
}
EventRepository
@Lock(LockModeType.PESSIMISTIC_WRITE)
@QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "-2")})
Page<Event> findAll(Pageable pageable);
What happens is that when the first instance takes the lock (on a page), the second instance can't see anything in the table, and the first continues to process all the pages.
I tried to initiate a new transaction in the service (propagation = REQUIRES_NEW), but in vain.
What is missing so that each instance locks one page in one transaction and, if there is an error, rolls back only the page which has been processed?
Thank you in advance
By default, a simple SKIP LOCKED query is going to grab everything it can based on the size of the fetch, e.g.
Session 1
SQL> create table t as
2 select rownum r from dual
3 connect by level <= 10;
Table created.
SQL>
SQL> select * from t
2 for update skip locked;
R
----------
1
2
3
4
5
6
7
8
9
10
10 rows selected.
Session 2
SQL> select * from t
2 for update skip locked;
no rows selected
Session 1 grabbed (and locked) everything that was fetched, leaving nothing for session 2. If you want concurrent access, then you need your code to limit the size of the fetch to something you would consider reasonable for your functional needs. For example (I'll use PL/SQL, but the same concept applies to any language):
Session 1
SQL> set serverout on
SQL> declare
2 rows_to_process sys.odcinumberlist;
3 cursor C is
4 select * from t
5 for update skip locked;
6 begin
7 open c;
8 fetch c bulk collect into rows_to_process limit 5 ;
9 for i in 1 .. rows_to_process.count loop
10 dbms_output.put_line(rows_to_process(i));
11 end loop;
12 close c;
13 end;
14 /
1
2
3
4
5
Session 2
SQL> declare
2 rows_to_process sys.odcinumberlist;
3 cursor C is
4 select * from t
5 for update skip locked;
6 begin
7 open c;
8 fetch c bulk collect into rows_to_process limit 5 ;
9 for i in 1 .. rows_to_process.count loop
10 dbms_output.put_line(rows_to_process(i));
11 end loop;
12 close c;
13 end;
14 /
6
7
8
9
10
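If each worker should keep consuming until the table is drained, the same idea extends to a loop. A sketch, assuming the table t from above (the DELETE stands in for whatever real per-row processing you need; note that the row locks are held until the COMMIT, not until the cursor is closed):
declare
   rows_to_process sys.odcinumberlist;
   cursor c is
      select r from t
      for update skip locked;
begin
   loop
      open c;
      fetch c bulk collect into rows_to_process limit 5;
      close c;                      -- locks persist until commit
      exit when rows_to_process.count = 0;
      for i in 1 .. rows_to_process.count loop
         delete from t where r = rows_to_process(i);  -- "process" the row
      end loop;
      commit;                       -- release the locks, publish the work
   end loop;
end;
/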

2 users running a stored procedure concurrently - what happens if DML statements in one execution affect tests/conditions in the parallel execution?

Let's say I have a PL/SQL stored procedure for creating sales orders as follows:
CREATE OR REPLACE PROCEDURE save_order
(
   o_output_status OUT BINARY_INTEGER,
   i_customer      IN VARCHAR2,
   i_product_code  IN VARCHAR2,
   i_quantity      IN BINARY_INTEGER
)
IS
   l_stock_available BINARY_INTEGER;
BEGIN
   o_output_status := -1;
   SELECT available
   INTO   l_stock_available
   FROM   stock
   WHERE  product = i_product_code;
   IF l_stock_available >= i_quantity THEN
      INSERT INTO sales_orders (order_number, customer, product, quantity)
      VALUES (order_seq.nextval, i_customer, i_product_code, i_quantity);
      -- hello
      UPDATE stock
      SET    available = available - i_quantity
      WHERE  product = i_product_code;
      o_output_status := 0;
   END IF;
END save_order;
I think that's all pretty straightforward. But what I'd like to know is what happens when 2 users run this stored procedure concurrently. Let's say there's only 1 unit of some product left. If user1 runs the stored procedure first, attempting to create an order for 1 unit, l_stock_available gets a value of 1, the IF condition evaluates to true and the INSERT and UPDATE get executed.
Then user2 runs the stored procedure a moment later, also trying to create an order for 1 unit. Let's say the SELECT INTO for user2 gets executed by Oracle at the instant that user1's execution has reached the comment hello. At this point, user2 will also get a value of 1 for l_stock_available, the IF condition will evaluate to true, and the INSERT and UPDATE will be executed, bringing the stock level down to -1.
Do I understand correctly? Is this what will happen? How can I avoid this scenario, where 2 orders can be created for the last item in stock?
Yes, you understand correctly that the code as written has a race condition.
The simplest fix, assuming that performance requirements permit pessimistic locking, is to add FOR UPDATE to the initial SELECT statement. That locks the particular row in the STOCK table, which causes the second session to block until the first session's transaction either commits or rolls back. The second session then sees that the stock on hand has been decreased to 0.
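A minimal sketch of that fix (only the SELECT changes; everything else in save_order stays the same):
SELECT available
INTO   l_stock_available
FROM   stock
WHERE  product = i_product_code
FOR UPDATE;  -- second caller blocks here until the first commits or rolls back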
If you want to implement DIY optimistic locking (using similar mechanisms to those e.g. JPA uses), add a VERSION column to your stock table with data type INT, initialized with zero.
The initial SELECT of your procedure returns not only the available quantity but also the current VERSION:
select available, version
into l_available,l_version
from stock
where product_id = i_product_code;
If the table has the required quantity, you first UPDATE it, but only if VERSION still has the same value as returned by the previous query.
update stock
set    available = available - i_quantity,
       version   = version + 1
where  product_id = i_product_code
       /* optimistic locking */
and    version    = l_version;
Note that the update will fail if the version does not match the value you selected.
If the update succeeds, it also increments the version, which blocks others from performing a concurrent UPDATE.
The last step is to check whether the update was successful and, if so, process the quantity you got.
rn := SQL%rowcount;
if rn = 1 then /* success */
   insert into sales_orders (thread_id, product_id, quantity, create_ts)
   values (i_thread_id, i_product_code, i_quantity, current_timestamp);
   commit;
end if;
If the UPDATE changed zero rows (i.e. it failed due to a changed VERSION), you should either retry or return a failure.
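A hedged sketch of such a retry loop (the product id, quantity, and retry bound of 3 are illustrative; assumes the stock table with the product_id and version columns used in the fragments above):
declare
   l_product   stock.product_id%type := 42;   -- illustrative product id
   l_quantity  pls_integer := 1;              -- illustrative order quantity
   l_available stock.available%type;
   l_version   stock.version%type;
begin
   for attempt in 1 .. 3 loop                 -- bounded retry on version clash
      select available, version
      into   l_available, l_version
      from   stock
      where  product_id = l_product;

      exit when l_available < l_quantity;     -- not enough stock: give up

      update stock
      set    available = available - l_quantity,
             version   = version + 1
      where  product_id = l_product
      and    version    = l_version;

      exit when sql%rowcount = 1;             -- success; otherwise another session won, retry
   end loop;
end;
/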

PL/SQL Oracle 12c deadlock: why is an SSX table lock acquired for independent deletes?

I have the following two queries resulting in a deadlock, but I do not know why Oracle takes an SSX table lock in this scenario.
All the test samples I tried in order to replicate the problem do only row locking.
------------Blocker(s)----------- ------------Waiter(s)------------
Resource Name process session holds waits serial process session holds waits serial
TM-000386AF-00000000-00000000-00000000 101 298 SX SSX 65474 27 646 SX SSX 21533
TM-000386AF-00000000-00000000-00000000 27 646 SX SSX 21533 101 298 SX SSX 65474
Query:
DELETE FROM VERSANDPALETTE V WHERE V.ID IN (SELECT COLUMN_VALUE FROM TABLE(:B1 ))
----- Information for the OTHER waiting sessions -----
Session 646:
DELETE FROM VERSANDPALETTE WHERE ID IN (SELECT * FROM TABLE(:B1 )) AND ID NOT IN (SELECT * FROM TABLE(:B2 ))
I would expect that the independent row sets are deleted and no table lock is taken.
Does someone have a hint on why that could happen?
EDIT 2: (Simplified Version of the question, 2 min to replicate)
Thanks for your help!
I used this code to test it further:
-- setup
create table p ( x int primary key );
create table c ( x references p );
insert into p select rownum from dual connect by level <= 10;
insert into c select * from p;
commit;
-- 2 session test
-- session 1
update c set x = 2 where x = 1;
-- session 2
update c set x = 4 where x = 3;
delete from p where x = 3;
-- session 1
delete from p where x = 1;
-- deadlock is happening now
-- rollback both sessions
This leads to a deadlock as expected, because there is no index on the child table's foreign key (as you pointed out to me).
What confuses me is that, when only one session is used, v$locked_object shows only locked_mode 3 entries. There should be a locked_mode 5 (SSX) row somewhere.
-- 1 session test
update c set x = 2 where x = 1;
update c set x = 4 where x = 3;
delete from p where x = 3;
delete from p where x = 1;
select
c.owner,
c.object_name,
c.object_type,
b.sid,
b.serial#,
b.status,
b.osuser,
b.machine,
a.locked_mode
from
v$locked_object a ,
v$session b,
dba_objects c
where
b.sid = a.session_id
and
a.object_id = c.object_id;
-- no locked_mode 5 entries...
-- rollback the session
Adding an index resolves the problem:
CREATE INDEX c_index ON c(x);
-- 2 session test
-- session 1
update c set x = 2 where x = 1;
-- session 2
update c set x = 4 where x = 3;
delete from p where x = 3;
-- session 1
delete from p where x = 1;
-- deadlock is not happening :)
So I guess there is some lock escalation going on? Because the single-session test does not acquire the same table lock.
As krokodilko says, do you have any dependent tables with a foreign key and the ON DELETE CASCADE option? The SSX lock is there to prevent inserts into the child table for a parent table that has had a row deleted.
See: https://asktom.oracle.com/pls/apex/asktom.search?tag=deadlock-on-two-delete-statements
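A rough sketch for spotting unindexed foreign keys in the current schema (single-column case only; multi-column keys need a more careful comparison of the column lists):
select c.table_name, cc.column_name, c.constraint_name
from user_constraints c
join user_cons_columns cc on cc.constraint_name = c.constraint_name
where c.constraint_type = 'R'          -- referential (foreign key) constraints
and not exists (
   select 1
   from user_ind_columns ic
   where ic.table_name      = cc.table_name
   and   ic.column_name     = cc.column_name
   and   ic.column_position = cc.position
);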

Sequential vs parallel solution

I will try to present my problem as simply as possible.
Assume that we have 3 tables in Oracle 11g.
Persons (person_id, name, surname, status, etc )
Actions (action_id, person_id, action_value, action_date, calculated_flag)
Calculations (calculation_id, person_id,computed_value,computed_date)
What I want is: for each person that meets certain criteria (let's say status = 3),
I should get the sum of action_values from the Actions table where calculated_flag = 0 (something like select sum(action_value) from Actions where calculated_flag = 0 and person_id = current_id).
Then I shall use that sum in some kind of formula and update the Calculations table for that specific person_id.
update Calculations set computed_value=newvalue, computed_date=sysdate
where person_id=current_id
After that, calculated_flag for the participating rows will be set to 1.
update Actions set calculated_flag=1
where calculated_flag=0 and person_id=current_id
Now this can easily be done sequentially, by creating a cursor that runs through the Persons table and then executes each action needed for the specific person.
(I don't provide the code for the sequential solution as the above is just an example that resembles my real-world setup.)
The problem is that we are talking about a quite big amount of data, and the sequential approach seems like a waste of computational time.
It seems to me that this task could be performed in parallel for a number of person_ids.
So the question is:
Can this kind of task be performed using parallelization in PL/SQL?
What would the solution look like? That is, what special packages (e.g. DBMS_PARALLEL_EXECUTE), keywords (e.g. bulk collect), methods should be used and in what manner?
Also, should I have any concerns about partial failure of parallel updates?
Note that I am not quite familiar with parallel programming with PL/SQL.
Thanks.
Edit 1.
Here is my pseudo code for the sequential solution:
procedure sequential_solution is
cursor persons_of_interest is
select person_id from persons
where status = 3;
tempvalue number;
newvalue number;
begin
for person in persons_of_interest
loop
begin
savepoint personsp;
--step 1
select sum(action_value) into tempvalue
from actions
where calculated_flag = 0
and person_id = person.person_id;
newvalue := dosomemorecalculations(tempvalue);
--step 2
update calculations set computed_value = newvalue, computed_date = sysdate
where person_id = person.person_id;
--step 3
update actions set calculated_flag = 1
where calculated_flag = 0 and person_id = person.person_id;
--step 4 (didn't mention this step before - sorry)
insert into actions
( person_id, action_value, action_date, calculated_flag )
values
( person.person_id, 100, sysdate, 0 );
exception
when others then
rollback to personsp;
-- this call is defined with pragma AUTONOMOUS_TRANSACTION:
log_failure(person.person_id);
end;
end loop;
end;
Now, how would I speed up the above, either with forall and bulk collect or with parallel programming, under the following constraints:
proper memory management (taking into consideration the large amount of data)
for a single person, if one part of the step sequence fails, all steps should be rolled back and the failure logged.
I can propose the following. Let's say you have 1,000,000 rows in the persons table and you want to process 10,000 persons per iteration. You can do it this way:
declare
id_from persons.person_id%type;
id_to persons.person_id%type;
calc_date date := sysdate;
begin
for i in 1 .. 100 loop
id_from := (i - 1) * 10000;
id_to := i * 10000;
-- Updating Calculations table, errors are logged into err$_calculations table
merge into Calculations c
using (select p.person_id, sum(action_value) newvalue
from Actions a join persons p on p.person_id = a.person_id
where a.calculated_flag = 0
and p.status = 3
and p.person_id between id_from and id_to
group by p.person_id) s
on (s.person_id = c.person_id)
when matched then update
set c.computed_value = s.newvalue,
c.computed_date = calc_date
log errors into err$_calculations reject limit unlimited;
-- updating actions table only for those person_id which had no errors:
merge into actions a
using (select distinct p.person_id
from persons p join Calculations c on p.person_id = c.person_id
where c.computed_date = calc_date
and p.person_id between id_from and id_to) s
on (a.person_id = s.person_id)
when matched then update
set a.calculated_flag = 1;
-- inserting list of persons for who calculations were successful
insert into actions (person_id, action_value, action_date, calculated_flag)
select distinct p.person_id, 100, calc_date, 0
from persons p join Calculations c on p.person_id = c.person_id
where c.computed_date = calc_date
and p.person_id between id_from and id_to;
commit;
end loop;
end;
How it works:
You split the data in the persons table into chunks of about 10,000 rows (this depends on gaps in the ID numbering; the maximum value of i * 10000 should be comfortably greater than the maximal person_id)
You make a calculation in the MERGE statement and update the Calculations table
The LOG ERRORS clause prevents exceptions from interrupting the run. If an error occurs, the offending row is not updated; instead it is inserted into an error-logging table and execution continues. To create this table, execute:
begin
DBMS_ERRLOG.CREATE_ERROR_LOG('CALCULATIONS');
end;
The table err$_calculations will be created. For more information about the DBMS_ERRLOG package, see the documentation.
The second MERGE statement sets calculated_flag = 1 only for the rows where no errors occurred, and the INSERT statement inserts these rows into the actions table; they can be identified simply by selecting from the Calculations table.
Also, I added the variables id_from and id_to to compute the ID range to update, and the variable calc_date to make sure that all rows updated in the first MERGE statement can be found later by date.
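Since the question explicitly mentions DBMS_PARALLEL_EXECUTE, here is a hedged sketch of the rowid-chunking approach. my_calc_proc is a hypothetical procedure that would perform steps 1-4 for the rows in one chunk; the chunk size and parallel level are illustrative:
begin
   dbms_parallel_execute.create_task('calc_task');
   -- split PERSONS into rowid ranges of roughly 10000 rows each
   dbms_parallel_execute.create_chunks_by_rowid(
      task_name   => 'calc_task',
      table_owner => user,
      table_name  => 'PERSONS',
      by_row      => true,
      chunk_size  => 10000);
   -- each chunk runs the statement with :start_id/:end_id bound to its rowid range
   dbms_parallel_execute.run_task(
      task_name      => 'calc_task',
      sql_stmt       => 'begin my_calc_proc(:start_id, :end_id); end;',  -- hypothetical worker
      language_flag  => dbms_sql.native,
      parallel_level => 4);
   dbms_parallel_execute.drop_task('calc_task');
end;
/
Each chunk commits independently, so a failure in one chunk does not roll back the others; per-person atomicity still has to be handled inside the worker procedure, e.g. with the savepoint pattern from the sequential version.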

Syntax Error In Oracle Function

I'm trying to make a function that does a simple insert into a table called poli. The purpose of this function:
returns 1 when it inserts the values into the table
in any other case it returns 0.
This is the code in Oracle that I wrote:
CREATE OR REPLACE FUNCTION ADDPOLI
( ID IN NUMBER, NAME IN VARCHAR2 , LON IN FLOAT , LAT IN FLOAT , STATUS OUT NUMBER )
return status
IS cursor poli_count is select count(id) from poli;
BEGIN
declare number_of_cities int;
fetch poli_c into number_of_cities;
if number_of_cities<= 15 and number_of_cities>=0 then
insert into poli values(id,name,lat,lon);
return 1;
else
return 0;
end if;
END ADDPOLI;
I have a syntax error here: fetch poli_c into number_of_cities;
How can I fix it?
Why are you using a cursor to achieve this? Try the below:
CREATE OR REPLACE FUNCTION ADDPOLI (
   p_id   IN NUMBER,
   p_name IN VARCHAR2,
   p_lon  IN FLOAT,
   p_lat  IN FLOAT
) RETURN NUMBER
IS
   number_of_cities PLS_INTEGER;
BEGIN
   SELECT COUNT(*) INTO number_of_cities FROM poli;
   IF number_of_cities BETWEEN 0 AND 15 THEN
      INSERT INTO poli VALUES (p_id, p_name, p_lat, p_lon);
      RETURN 1;
   ELSE
      RETURN 0;
   END IF;
END;
/
There is something of more fundamental concern here: what happens when you deploy this function in a multi-user environment (which is how most databases typically run)?
The logic of:
"Do I have fewer than 15 cities?"
"Yes, insert another row"
is more complex than it first appears, because if I have 10 sessions all currently running this function, you can end up with the following scenario:
I start with say 13 rows. Then this happens:
Session 1: Is there less than 15? Yes, do the insert.
Session 2: Is there less than 15? Yes, do the insert.
Session 3: Is there less than 15? Yes, do the insert.
Session 4: Is there less than 15? Yes, do the insert.
Session 5: Is there less than 15? Yes, do the insert.
...
and now Session 1 commits, and so forth for Session 2, 3, ....
And hence voila! You now have 18 rows in your table and everyone is befuddled as to how this happened.
Ultimately, what you are after is a means of enforcing a rule about the data ("max of 15 rows in table X"). There is a lengthy discussion about the complexities of doing that over at AskTOM
https://asktom.oracle.com/pls/asktom/asktom.search?tag=declarative-integrity
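One common workaround, sketched here under the assumption that inserts into poli are rare enough to tolerate full serialization, is to take a table lock before the check-then-insert (the lock is released when the caller commits or rolls back):
CREATE OR REPLACE FUNCTION addpoli_safe (
   p_id   IN NUMBER,
   p_name IN VARCHAR2,
   p_lon  IN FLOAT,
   p_lat  IN FLOAT
) RETURN NUMBER
IS
   number_of_cities PLS_INTEGER;
BEGIN
   LOCK TABLE poli IN EXCLUSIVE MODE;  -- serializes concurrent callers until commit
   SELECT COUNT(*) INTO number_of_cities FROM poli;
   IF number_of_cities < 15 THEN
      INSERT INTO poli VALUES (p_id, p_name, p_lat, p_lon);
      RETURN 1;
   END IF;
   RETURN 0;
END;
/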
