Pessimistic lock per page on 2 instances - spring

I have a scheduler which runs on 2 instances; the purpose is to let the two instances process the task in parallel to gain time/performance.
I'm using a pessimistic lock on an Oracle DB, with SKIP LOCKED so as not to block the parallel processing and to take the lock per page, so that each instance processes a different page.
@Transactional
@Scheduled(cron = ...)
public void process() {
    Pageable pageRequest = PageRequest.of(0, 100);
    Page<Event> pages = eventRepository.findAll(pageRequest);
    while (!pages.isLast()) {
        pageRequest = pageRequest.next();
        pages.forEach(this::processEvent);
        pages = eventRepository.findAll(pageRequest);
    }
    pages.forEach(this::processEvent);
}
EventRepository
@Lock(LockModeType.PESSIMISTIC_WRITE)
@QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "-2")})
Page<Event> findAll(Pageable pageable);
What happens is that when the first instance applies the lock (on a page), the second instance can't see anything in the table, and the first continues to process all the pages.
I tried to initiate a new transaction in the service (propagation = REQUIRES_NEW), but in vain.
What is missing so that each instance locks one page in one transaction, and, if there is an error, rolls back only the page that has been processed?
Thank you in advance.

By default, a simple SKIP LOCKED query is going to grab everything it can based on the size of the fetch, e.g.
Session 1
SQL> create table t as
2 select rownum r from dual
3 connect by level <= 10;
Table created.
SQL>
SQL> select * from t
2 for update skip locked;
R
----------
1
2
3
4
5
6
7
8
9
10
10 rows selected.
Session 2
SQL> select * from t
2 for update skip locked;
no rows selected
Session 1 grabbed (and locked) everything that was fetched, leaving nothing for session 2. If you want concurrent access, you need your code to limit the size of the fetch to something you would consider reasonable for your functionality needs. E.g. I'll use PL/SQL here, but the same concept applies in any language:
Session 1
SQL> set serverout on
SQL> declare
2 rows_to_process sys.odcinumberlist;
3 cursor C is
4 select * from t
5 for update skip locked;
6 begin
7 open c;
8 fetch c bulk collect into rows_to_process limit 5 ;
9 for i in 1 .. rows_to_process.count loop
10 dbms_output.put_line(rows_to_process(i));
11 end loop;
12 close c;
13 end;
14 /
1
2
3
4
5
Session 2
SQL> declare
2 rows_to_process sys.odcinumberlist;
3 cursor C is
4 select * from t
5 for update skip locked;
6 begin
7 open c;
8 fetch c bulk collect into rows_to_process limit 5 ;
9 for i in 1 .. rows_to_process.count loop
10 dbms_output.put_line(rows_to_process(i));
11 end loop;
12 close c;
13 end;
14 /
6
7
8
9
10
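Tying that back to the original question: if each "page" should be locked, processed and committed (or rolled back) on its own, the batch has to live in its own transaction. Below is a minimal PL/SQL sketch of that pattern; the table name events, its id column and the procedure process_event are assumptions for illustration, and process_event is expected to update or delete the row so the next iteration doesn't pick it up again.
declare
  type t_ids is table of events.id%type;                 -- "events"/"id" are assumed names
  rows_to_process t_ids;
  cursor c is
    select id
    from   events
    for update skip locked;
begin
  loop
    open c;
    fetch c bulk collect into rows_to_process limit 100;  -- one "page" per transaction
    if rows_to_process.count = 0 then
      close c;
      exit;                                               -- nothing left that isn't locked elsewhere
    end if;

    for i in 1 .. rows_to_process.count loop
      process_event(rows_to_process(i));                  -- hypothetical per-row processing
    end loop;

    close c;
    commit;  -- releases the locks for this page only; an error before this point
             -- rolls back at most the current page
  end loop;
end;
/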

Related

Warning: Trigger created with compilation error

I have two tables, seat_allocation and programme_code. I want to create a trigger that checks whether the number of entries in seat_allocation is equal to max_seats (a column in the programme_code table) for that prog_code (a column in the programme_code table). This is my code:
SQL> create or replace trigger seats_full
2 before insert on seat_allocation
3 for each row
4 declare
5 cnt programme_code.max_seats%type;
6 max programme_code.max_seats%type;
7 begin
8 select count(*) into cnt from seat_allocation where prog_code=(select prog_code from programme_code where prog_code = :NEW.prog_code);
9 select max_seats into max from programme_code where prog_code=(select prog_code from programme_code where prog_code = :NEW.prog_code);
10 if max=cnt then
11 RAISE_APPLICATION_ERROR(-21000,'No vacant seats available');
12 end if;
13 end;
14 /
It gives "Warning: Trigger created with compilation error". Can you please help me figure out what's wrong?
The variable name can't be MAX; it is reserved for the function of the same name. Change it to e.g. v_max_seats.
Apart from that, it seems that you're selecting from the seat_allocation table (line #8), while the trigger fires on insert on the same table. That will cause the mutating table error, so you'll have to do something about it. Nowadays, a compound trigger fixes that; if your database version doesn't support it, you'd use a package. There are examples on the Internet.
Also, why using a subquery? What's wrong with e.g.
select max_seats into v_max_seats from programme_code where prog_code = :new.prog_code
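Going back to the mutating table point, here is a minimal compound trigger sketch of that idea (table and column names are taken from the question, the rest is illustrative and untested; note also that RAISE_APPLICATION_ERROR only accepts error numbers between -20000 and -20999, so -21000 from the original code would fail at run time):
create or replace trigger seats_full
    for insert on seat_allocation
    compound trigger

    -- remember which programme codes this statement touched
    type t_codes is table of seat_allocation.prog_code%type index by pls_integer;
    g_codes t_codes;

    after each row is
    begin
        g_codes(g_codes.count + 1) := :new.prog_code;
    end after each row;

    -- at this point the rows are already inserted, so seat_allocation
    -- is no longer mutating and may be queried
    after statement is
        v_cnt       number;
        v_max_seats programme_code.max_seats%type;
    begin
        for i in 1 .. g_codes.count loop
            select count(*) into v_cnt
              from seat_allocation
             where prog_code = g_codes(i);

            select max_seats into v_max_seats
              from programme_code
             where prog_code = g_codes(i);

            if v_cnt > v_max_seats then
                -- raising here rolls the whole triggering insert back
                raise_application_error(-20001, 'No vacant seats available');
            end if;
        end loop;
    end after statement;
end seats_full;
/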

Access other values in a trigger before save Oracle

Is it possible to access the previous values that have not yet been stored in the database?
I have a table related to a particular module (MOD) which I will call table XA.
Multiple records can be inserted into XA simultaneously; I cannot change this fact.
For example, the following data is inserted in XA
ID | ParentId | Type | Name | Value
1 | 1 | 5 | Cost | 20000
2 | 1 | 9 | Risk | 10000
And in this case I need to insert/update a record in this same table: a calculated value.
At the moment the trigger executes, the value with the name Cost, for example, is inserted first, and then the value of Risk.
When evaluating the Risk, I must be able to know what the Cost value is in order to do the calculation and insert the calculated record.
I tried to create a Package to which I would feed the data, but I still have the same problem.
create or replace PACKAGE GLOBAL
IS
    PRAGMA SERIALLY_REUSABLE;
    TYPE arr IS TABLE OF VARCHAR2 (32)
        INDEX BY VARCHAR2 (50);
    NUMB arr;
END GLOBAL;
-- used in the trigger
GLOBAL.NUMB (:NEW.ID || '-' || :NEW.ParentId) := :NEW.Value;
BEGIN
    IF :NEW.Type = 9 AND GLOBAL.NUMB (5 || '-' || :NEW.ParentId) IS NOT NULL
    THEN
        -- calculate and insert record
    ELSIF :NEW.Type = 5 AND GLOBAL.NUMB (9 || '-' || :NEW.ParentId) IS NOT NULL
    THEN
        -- calculate and insert record
    END IF;
EXCEPTION
    WHEN NO_DATA_FOUND
    THEN
        -- there are not two inserts for the same record
END;
Values 5 and 9 are for reference.
Both records are not always inserted; one or more can be inserted, and even the calculated value can be entered manually, but it must then be replaced by the calculation.
And I can't create a view since there is an internal process that depends on this particular table.
Do you really have to store the calculated value in a table? That's usually not the best idea, as you have to maintain it in every possible case (inserts, updates, deletes).
Therefore, another suggestion: a view. Here's an example; my "calculation" is simple, I'm just subtracting cost - risk, as I don't know what you really do. If the calculation is very complex and has to be run every time on a very large data set, then yes, performance might suffer.
Anyway, here you go; see if it helps.
Sample data:
SQL> select * From xa order by parentid, name;
ID PARENTID TYPE NAME VALUE
---------- ---------- ---------- ---- ----------
1 1 5 Cost 20000
2 1 9 Risk 10000
5 4 5 Cost 4000
7 4 9 Risk 800
A view:
SQL> create or replace view v_xa as
2 select id,
3 parentid,
4 type,
5 name,
6 value
7 from xa
8 union all
9 select 0 id,
10 parentid,
11 99 type,
12 'Calc' name,
13 sum(case when type = 5 then value
14 when type = 9 then -value
15 end) value
16 from xa
17 group by parentid;
View created.
What does it contain?
SQL> select * from v_xa
2 order by parentid, type;
ID PARENTID TYPE NAME VALUE
---------- ---------- ---------- ---- ----------
1 1 5 Cost 20000
2 1 9 Risk 10000
0 1 99 Calc 10000
5 4 5 Cost 4000
7 4 9 Risk 800
0 4 99 Calc 3200
6 rows selected.
SQL>
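A quick, hedged illustration of why this beats a stored value (same sample data as above): when new rows arrive, the Calc row simply reflects them at query time, with no maintenance code at all.
-- add another Risk row for parentid = 4 ...
insert into xa (id, parentid, type, name, value)
values (9, 4, 9, 'Risk', 1000);

-- ... and the Calc row for parentid = 4 should now show 4000 - 800 - 1000 = 2200
select *
  from v_xa
 where parentid = 4
 order by type;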

How to return rows based on the database user and the table's contents?

I have a following table:
id name score
1 SYS 4
2 RHWTT 5
3 LEO 4
4 MOD3_ADMIN 5
5 VPD674 4
6 SCOTT 5
7 HR 4
8 OE 5
9 PM 4
10 IX 5
11 SH 4
12 BI 5
13 IXSNEAKY 4
14 DVF 5
I want to create a policy function in Oracle SQL that makes sure of the following things:
If a user (Leo) executes a select statement on this table, it only gets the row 3 LEO 4.
sys_dba gets all the results no matter what.
I have given select permissions to Leo on this table created by Scott.
I am getting stuck at writing this complex PL/SQL function. I tried the following and it states compilation errors. Also, I think it does not do what I intend to do:
CREATE FUNCTION no_show_all (
p_schema IN NUMBER(5),
p_object IN VARCHAR2
)
RETURN
AS
BEGIN
RETURN 'select avg(score) from scott.rating';
END;
/
Based on your previous question and the info you posted, here's how I understood the question: if you granted select on the whole table to any user, then that user is able to fetch all rows from it. You have to restrict the values further.
One option - as we're talking about the function - is to use case in where clause.
Here's an example.
Sample data:
SQL> create table rating as
2 select 1 id, 'sys' name, 4 score from dual union all
3 select 3, 'leo' , 3 from dual union all
4 select 6, 'scott' , 5 from dual union all
5 select 7, 'hr' , 2 from dual;
Table created.
Function:
it accepts the username as a parameter (mind the letter case! In my example, everything is lowercase; in yours, perhaps you'll have to use the UPPER function or something like that)
case says: if par_user is equal to sys, let it fetch all rows. Otherwise, fetch only rows whose name column's value is equal to par_user
return the result
So:
SQL> create or replace function f_rating (par_user in varchar2)
2 return number
3 is
4 retval number;
5 begin
6 select avg(score)
7 into retval
8 from rating
9 where name = case when par_user = 'sys' then name
10 else par_user
11 end;
12 return retval;
13 end;
14 /
Function created.
Let's try it:
SQL> select f_rating('sys') rating_sys,
2 f_rating('hr') rating_hr
3 from dual;
RATING_SYS RATING_HR
---------- ----------
3,5 2
SQL>
I suggest creating a view for each user, like so
create view THE_VIEW as select * from TABLE where NAME = user
Then grant access to the view only.
Now it doesn't matter what kind of query a user tries to perform on your table, she will only get one row back.
Of course, the DBA user can access all the table data.
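For completeness, since the question literally asks for a policy function: its usual shape (a hedged, untested sketch; scott.rating and the name column come from the question, everything else is illustrative) is a function that returns a predicate string, which is then registered with DBMS_RLS.ADD_POLICY. SYS is exempt from such policies by default, which covers the "sys gets everything no matter what" requirement.
-- a hedged sketch of a VPD policy function (not the only way to do this)
CREATE OR REPLACE FUNCTION no_show_all (
    p_schema IN VARCHAR2,
    p_object IN VARCHAR2
) RETURN VARCHAR2
AS
BEGIN
    -- every user sees only the rows whose NAME matches the logged-in user;
    -- SYS (and anyone with EXEMPT ACCESS POLICY) bypasses the policy anyway
    RETURN 'name = SYS_CONTEXT(''USERENV'', ''SESSION_USER'')';
END;
/

-- attach the function to the table (requires execute privilege on DBMS_RLS)
BEGIN
    DBMS_RLS.ADD_POLICY(
        object_schema   => 'SCOTT',
        object_name     => 'RATING',
        policy_name     => 'RATING_PER_USER',
        function_schema => 'SCOTT',
        policy_function => 'NO_SHOW_ALL',
        statement_types => 'SELECT');
END;
/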

how to generate a deadlock scenario in Oracle?

I've been stuck on a lab question for the last four hours because I generally don't understand what it wants, even with extensive research and flipping through endless slides. EDIT: the prologue in question is a dbcreate.sql which creates a series of tables, and then a dbload.sql which inserts values into the given tables.
The given question is
Implement in PL/SQL the database transactions that operate on the sample database created in the Prologue step and such that their concurrent processing leads to a deadlock situation. Save the transactions in SQL scripts solution1-1.sql and solution1-2.sql
I feel someone on this site could explain this in a way I can understand! Thank you for your help
EDIT: there's a second part to this question
Simulate a concurrent processing of the transaction such that it will lead to a deadlock.
To simulate a concurrent processing of the database transactions use a PL/SQL procedure
SLEEP from the standard PL/SQL package DBMS_LOCK. By "simulation of concurrent
execution" we mean that the first transaction does a bit of work, then it is delayed for a
certain period of time and in the same period of time another transaction is processed.
Finally, after a delay the first transaction completes its job.
The simplest way (untested code):
CREATE OR REPLACE PROCEDURE doUpd ( id1 IN NUMBER, id2 IN NUMBER ) IS
BEGIN
UPDATE tableA set colA = 'upd1' where id = id1;
dbms_lock.sleep (20);
UPDATE tableA set colA = 'upd2' where id = id2;
END;
/
Then run in session 1:
execute doUpd( 21, 12 );
Immediately after, in session 2:
execute doUpd( 12, 21 );
What we're doing is updating 2 rows, but in a different order.
Normally we would hope that the time between the updates is small enough to avoid a deadlock. But since we want to simulate a deadlock, we need to add a delay so that we can fire off the updates in another session.
In the example above, session 1 will update the row with id = 21, then wait 20 seconds, then update the row with id = 12.
Session 2 will update the row with id = 12, then wait 20 seconds, then update the row with id = 21. If session 2 starts whilst session 1 is 'sleeping', we should get a deadlock.
In time order, provided you are quick with starting the session 2 job, you should be aiming for this:
Session 1: UPDATE tableA set colA = 'upd1' where id = 21;
Session 1: sleep 20
Session 2: UPDATE tableA set colA = 'upd1' where id = 12;
Session 2: sleep 20
Session 1: UPDATE tableA set colA = 'upd2' where id = 12; -- blocked until session 2 commit/rollback
Session 2: UPDATE tableA set colA = 'upd2' where id = 21; -- blocked until session 1 commit/rollback
Session 1 and 2 are now deadlocked.
For the first part of your question you can use also this example without DBMS_LOCK package:
CREATE TABLE T1 (c INTEGER, v INTEGER);
INSERT INTO T1 VALUES (1, 10);
INSERT INTO T1 VALUES (2, 10);
COMMIT;
Open session 1
Open session 2
In session 1 execute update t1 set v = v + 10 where c = 1;
In session 2 execute update t1 set v = v + 10 where c = 2;
In session 1 execute update t1 set v = v + 10 where c = 2;
In session 2 execute update t1 set v = v + 10 where c = 1;
Session 1 raises an ORA-00060: deadlock detected while waiting for resource

Run parallel jobs in oracle

I have a table jobs which is as follows:
JOB_ID STATUS
1 N
2 N
3 N
4 N
5 N
6 N
7 N
8 N
9 N
10 N
11 N
12 N
What I have to do is select 4 job ids at once, set their status to 'Y', and as soon as 1 job is completed another should start running. At any point in time, there must be 4 jobs running.
I did research on how to achieve this, and most of the documents suggested using scheduler jobs. However, I could not figure out how to do it.
Here is a code sample of what I have done:
CREATE OR REPLACE PROCEDURE sp_processTask
(
JobID NUMBER
)
AS
vblSQL VARCHAR2(32767);
vJobID NUMBER;
BEGIN
vJobID:=JobID;
EXECUTE IMMEDIATE'insert into job_logs values('''||vJobID||''',sysdate,sysdate)';
vblSQL:='UPDATE jobs
SET status=''Y''
WHERE job_ID='||vJobID;
EXECUTE IMMEDIATE(vblSQL);
Dbms_Output.put_line(vblSQL);
END;
/
And then passing 4 different job ids at once as follows:
BEGIN
sp_processTask(1);
sp_processTask(2);
sp_processTask(3);
sp_processTask(4);
END;
What should I do to pass another job id as soon as one flag is set to 'Y'? I am using Oracle.
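One possible direction for the scheduler-jobs route the post mentions (a hedged, illustrative sketch only; the job name and argument handling below are assumptions, not from the post): each job id can be run as a one-off background job that calls sp_processTask.
-- a hedged sketch: submit one background job for a single JOB_ID
BEGIN
    DBMS_SCHEDULER.CREATE_JOB(
        job_name            => 'PROCESS_TASK_1',      -- illustrative name
        job_type            => 'STORED_PROCEDURE',
        job_action          => 'SP_PROCESSTASK',
        number_of_arguments => 1,
        auto_drop           => TRUE,
        enabled             => FALSE);                -- enable only after the argument is set

    DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(
        job_name            => 'PROCESS_TASK_1',
        argument_position   => 1,
        argument_value      => '1');                  -- the JOB_ID to process

    DBMS_SCHEDULER.ENABLE('PROCESS_TASK_1');          -- the job now runs in the background
END;
/
Submitting four such jobs gives four parallel runs; keeping exactly four alive at all times would still need a small coordinator (for example, a loop that polls USER_SCHEDULER_JOBS and submits the next id when a slot frees up) or a job class limited by the Resource Manager, which is beyond this sketch.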
