I have a procedure where I create loans for clients. A client can get only 10000 max in total.
I first read the amount requested by the client on old loans; if the client has already requested more than 10000 I raise an error, otherwise I insert a new loan.
create or replace NONEDITIONABLE PROCEDURE CREATE_LOAN
(
  p_client_id             INT,
  p_requested_loan_amount DECIMAL
)
IS
  v_available_amount DECIMAL DEFAULT 0;
BEGIN
  -- Get the available amount for the client (clients cannot get more than 10000 in total)
  SELECT 10000 - COALESCE(SUM(l.amount), 0) INTO v_available_amount
  FROM loans l
  WHERE l.client_id = p_client_id
  AND (l.state_id = 1 OR l.state_id = 2);
  -- If the client requests more than the remaining available amount, raise an error
  IF p_requested_loan_amount > v_available_amount
  THEN
    raise_application_error( -20001, 'Not enough available amount.' );
  END IF;
  -- Else insert the new loan
  INSERT INTO loans (amount, state_id, client_id)
  VALUES (p_requested_loan_amount, 1, p_client_id);
END;
How can I prevent two concurrent transactions from reading the old loans at the same time and both believing that they have available amount before they insert? If I were using SQL Server I could increase the isolation level and that would prevent the problem, but it does not work the same way in Oracle.
In your SELECT statement, use the FOR UPDATE clause. This will get a row lock and stop another transaction from locking that row too.
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:4530093713805
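A sketch of one way to apply that here: the aggregate SELECT itself cannot take a FOR UPDATE, so the lock is taken first on the client's row in a parent table (a CLIENTS table with a CLIENT_ID column is assumed to exist; adjust the names to your schema):
create or replace NONEDITIONABLE PROCEDURE CREATE_LOAN
(
  p_client_id             INT,
  p_requested_loan_amount DECIMAL
)
IS
  v_available_amount DECIMAL DEFAULT 0;
  v_lock_dummy       INT;
BEGIN
  -- Serialize per client: a concurrent call for the same client blocks here
  -- until the other transaction commits or rolls back.
  SELECT client_id INTO v_lock_dummy
  FROM clients
  WHERE client_id = p_client_id
  FOR UPDATE;
  SELECT 10000 - COALESCE(SUM(l.amount), 0) INTO v_available_amount
  FROM loans l
  WHERE l.client_id = p_client_id
  AND l.state_id IN (1, 2);
  IF p_requested_loan_amount > v_available_amount THEN
    raise_application_error( -20001, 'Not enough available amount.' );
  END IF;
  INSERT INTO loans (amount, state_id, client_id)
  VALUES (p_requested_loan_amount, 1, p_client_id);
END;
/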
Let's say I have a PL/SQL stored procedure for creating sales orders as follows:
CREATE OR REPLACE PROCEDURE save_order
(
o_output_status OUT BINARY_INTEGER,
i_customer IN VARCHAR2,
i_product_code IN VARCHAR2,
i_quantity IN BINARY_INTEGER
)
IS
l_stock_available BINARY_INTEGER;
BEGIN
o_output_status := -1;
SELECT available
INTO l_stock_available
FROM stock
WHERE product = i_product_code
;
IF l_stock_available >= i_quantity THEN
INSERT INTO sales_orders (order_number, customer, product, quantity)
VALUES (order_seq.nextval, i_customer, i_product_code, i_quantity)
;
-- hello
UPDATE stock
SET available = available - i_quantity
WHERE product = i_product_code
;
o_output_status := 0;
END IF;
END save_order;
I think that's all pretty straightforward. But what I'd like to know is what happens when 2 users run this stored procedure concurrently. Let's say there's only 1 unit of some product left. If user1 runs the stored procedure first, attempting to create an order for 1 unit, l_stock_available gets a value of 1, the IF condition evaluates to true and the INSERT and UPDATE get executed.
Then user2 runs the stored procedure a moment later, also trying to create an order for 1 unit. Let's say the SELECT INTO for user2 gets executed by Oracle at the instant that user1's execution has reached the comment hello. At this point, user2 will also get a value of 1 for l_stock_available, the IF condition will evaluate to true, and the INSERT and UPDATE will be executed, bringing the stock level down to -1.
Do I understand correctly? Is this what will happen? How can I avoid this scenario, where 2 orders can be created for the last item in stock?
Yes, you understand correctly that the code as written has a race condition.
The simplest fix, assuming that performance requirements permit pessimistic locking, is to add a FOR UPDATE to the initial SELECT statement. That locks the particular row in the STOCK table, which causes the second session to block until the first session's transaction either commits or rolls back. The second session then sees that the stock on hand has already been decreased to 0.
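For illustration, a sketch of that variant; only the SELECT in the question's procedure changes:
-- Pessimistic variant: lock the stock row before checking availability.
SELECT available
INTO l_stock_available
FROM stock
WHERE product = i_product_code
FOR UPDATE;  -- a second session blocks here until this transaction commits or rolls back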
If you want to implement DIY optimistic locking (using mechanisms similar to those JPA uses, for example), add a VERSION column to your stock table, with an integer data type, initialized to zero.
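For example (hypothetical DDL; any integer-like NUMBER type will do):
-- Add the version counter used for the optimistic check below.
ALTER TABLE stock ADD (version NUMBER(10) DEFAULT 0 NOT NULL);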
The initial SELECT of your procedure returns not only the available quantity but also the current VERSION:
select available, version
into l_available,l_version
from stock
where product = i_product_code;
If the table has the required quantity you first UPDATE it, but only if the VERSION has the same value as returned from the previous query.
update stock
set available = available - i_quantity,
version = version + 1
where product = i_product_code and
/* optimistic locking */
version = l_version;
Note that the UPDATE will fail if the VERSION does not match the value you selected.
If the update succeeds it also increments the VERSION, which blocks others from performing a concurrent UPDATE against the stale value.
The last step is to check whether the UPDATE was successful and, if so, process the quantity you got.
rn := SQL%rowcount;
if rn = 1 then /* success */
  insert into sales_orders (order_number, customer, product, quantity)
  values (order_seq.nextval, i_customer, i_product_code, i_quantity);
  commit;
end if;
If the UPDATE changed zero rows (it failed due to a changed VERSION), you should either retry or return a failure.
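Putting the pieces together, a sketch (not verbatim from the answer) of save_order rewritten with optimistic locking and a small bounded retry loop; it assumes the VERSION column added above and keeps the question's table, column and sequence names:
CREATE OR REPLACE PROCEDURE save_order_optimistic
(
  o_output_status OUT BINARY_INTEGER,
  i_customer      IN  VARCHAR2,
  i_product_code  IN  VARCHAR2,
  i_quantity      IN  BINARY_INTEGER
)
IS
  l_available BINARY_INTEGER;
  l_version   NUMBER;
BEGIN
  o_output_status := -1;
  FOR attempt IN 1 .. 3 LOOP                 -- arbitrary retry limit
    SELECT available, version
    INTO l_available, l_version
    FROM stock
    WHERE product = i_product_code;
    EXIT WHEN l_available < i_quantity;      -- not enough stock, give up
    UPDATE stock
    SET available = available - i_quantity,
        version   = version + 1
    WHERE product = i_product_code
      AND version = l_version;               -- optimistic check
    IF SQL%ROWCOUNT = 1 THEN                 -- nobody changed the row in between
      INSERT INTO sales_orders (order_number, customer, product, quantity)
      VALUES (order_seq.nextval, i_customer, i_product_code, i_quantity);
      o_output_status := 0;
      EXIT;
    END IF;
    -- otherwise another session won the race; loop and re-read the row
  END LOOP;
END save_order_optimistic;
/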
I have a question regarding the performance of the OF, IF and WHEN clauses in an Oracle trigger. Consider the trigger below:
CREATE OR REPLACE TRIGGER WeightChange
AFTER UPDATE ON Person
FOR EACH ROW
BEGIN
IF :new.Weight > 250 AND :new.Weight > :old.Weight THEN
LogWeightChange(:new.PersonId, :new.Weight, :old.Weight);
END IF;
END WeightChange;
Now, if I make the following changes:
- adding OF weight
- adding WHEN (new.Weight > 250 AND new.Weight > old.Weight)
- return Integer
Will any or all of the above significantly improve the trigger's performance?
The WHEN (condition) and OF column trigger features can significantly improve trigger performance. Much of the performance penalty of triggers is the SQL and PL/SQL context switching, and by moving more logic into SQL those switches are avoided.
For example, let's start with a simple schema:
--Sample schema with 100K simple PERSON rows.
create table person
(
id number not null primary key,
name varchar2(100),
weight number
);
insert into person
select level, level, 100 from dual connect by level <= 100000;
commit;
Enable or disable different trigger features:
--Create trigger that fires for all rows.
CREATE OR REPLACE TRIGGER WeightChange
AFTER UPDATE ON Person
FOR EACH ROW
BEGIN
IF :new.Weight > 250 AND :new.Weight > :old.Weight THEN
null;
END IF;
END WeightChange;
/
--Create trigger that only fires for relevant rows.
CREATE OR REPLACE TRIGGER WeightChange
AFTER UPDATE ON Person
FOR EACH ROW
WHEN (new.Weight > 250 AND new.Weight > old.Weight)
BEGIN
null;
END WeightChange;
/
--Create trigger that fires for all updates of WEIGHT.
CREATE OR REPLACE TRIGGER WeightChange
AFTER UPDATE OF weight ON person
FOR EACH ROW
WHEN (new.Weight > 250 AND new.Weight > old.Weight)
BEGIN
null;
END WeightChange;
/
--No trigger.
drop trigger WeightChange;
The WHEN (condition) and OF column features can make UPDATE statements run almost twice as fast in the best case.
--With no trigger - FAST:
--0.749, 0.670, 0.733
update person set weight = 100;
rollback;
--With normal trigger - SLOW:
--1.295, 1.279, 1.264 seconds
update person set weight = 100;
rollback;
--With WHEN condition trigger - FAST:
--0.687, 0.686, 0.687
update person set weight = 100;
rollback;
--With WHEN condition, using a value that satisfies conditions - SLOW:
--1.233, 1.232, 1.233
update person set weight = 500;
rollback;
--With normal trigger, update irrelevant column - SLOW:
--1.263, 1.248, 1.248
update person set name = name;
rollback;
--With OF column trigger, update irrelevant column - FAST:
--0.624, 0.624, 0.609
update person set name = name;
rollback;
I will try to present my problem as simplified as possible.
Assume that we have 3 tables in Oracle 11g.
Persons (person_id, name, surname, status, etc )
Actions (action_id, person_id, action_value, action_date, calculated_flag)
Calculations (calculation_id, person_id,computed_value,computed_date)
What I want is: for each person that meets certain criteria (let's say status=3),
I should get the sum of action_value from the Actions table where calculated_flag=0 (something like: select sum(action_value) from Actions where calculated_flag=0 and person_id=current_id).
Then I will use that sum in some kind of formula and update the Calculations table for that specific person_id.
update Calculations set computed_value=newvalue, computed_date=sysdate
where person_id=current_id
After that, calculated_flag for the participating rows will be set to 1.
update Actions set calculated_flag=1
where calculated_flag=0 and person_id=current_id
Now this can be easily done sequentially, by creating a cursor that will run through Persons table and then execute each action needed for the specific person.
(I don't provide the code for the sequential solution as the above is just an example that resembles my real-world setup.)
The problem is that we are talking about a quite big amount of data, and the sequential approach seems like a waste of computational time.
It seems to me that this task could be performed in parallel for a number of person_ids.
So the question is:
Can this kind of task be performed using parallelization in PL/SQL?
What would the solution look like? That is, what special packages (e.g. DBMS_PARALLEL_EXECUTE), keywords (e.g. bulk collect), methods should be used and in what manner?
Also, should I have any concerns about partial failure of parallel updates?
Note that I am not quite familiar with parallel programming with PL/SQL.
Thanks.
Edit 1.
Here is my pseudo code for my sequential solution:
procedure sequential_solution is
cursor persons_of_interest is
select person_id from persons
where status = 3;
tempvalue number;
newvalue number;
begin
for person in persons_of_interest
loop
begin
savepoint personsp;
--step 1
select sum(action_value) into tempvalue
from actions
where calculated_flag = 0
and person_id = person.person_id;
newvalue := dosomemorecalculations(tempvalue);
--step 2
update calculations set computed_value = newvalue, computed_date = sysdate
where person_id = person.person_id;
--step 3
update actions set calculated_flag = 1
where calculated_flag = 0 and person_id = person.person_id;
--step 4 (didn't mention this step before - sorry)
insert into actions
( person_id, action_value, action_date, calculated_flag )
values
( person.person_id, 100, sysdate, 0 );
exception
when others then
rollback to personsp;
-- this call is defined with pragma AUTONOMOUS_TRANSACTION:
log_failure(person.person_id);
end;
end loop;
end;
Now, how would I speed up the above, either with FORALL and BULK COLLECT or with parallel programming, under the following constraints:
proper memory management (taking into consideration the large amount of data)
For a single person, if one part of the step sequence fails, all steps should be rolled back and the failure logged.
I can propose the following. Let's say you have 1 000 000 rows in the persons table and you want to process 10 000 persons per iteration. You can do it this way:
declare
id_from persons.person_id%type;
id_to persons.person_id%type;
calc_date date := sysdate;
begin
for i in 1 .. 100 loop
id_from := (i - 1) * 10000;
id_to := i * 10000;
-- Updating Calculations table, errors are logged into err$_calculations table
merge into Calculations c
using (select p.person_id, sum(action_value) newvalue
from Actions a join persons p on p.person_id = a.person_id
where a.calculated_flag = 0
and p.status = 3
and p.person_id between id_from and id_to
group by p.person_id) s
on (s.person_id = c.person_id)
when matched then update
set c.computed_value = s.newvalue,
c.computed_date = calc_date
log errors into err$_calculations reject limit unlimited;
-- updating actions table only for those person_id which had no errors:
merge into actions a
using (select distinct p.person_id
from persons p join Calculations c on p.person_id = c.person_id
where c.computed_date = calc_date
and p.person_id between id_from and id_to) s
on (s.person_id = a.person_id)
when matched then update
set a.calculated_flag = 1;
-- inserting list of persons for who calculations were successful
insert into actions (person_id, action_value, action_date, calculated_flag)
select distinct p.person_id, 100, calc_date, 0
from persons p join Calculations c on p.person_id = c.person_id
where c.computed_date = calc_date
and p.person_id between id_from and id_to;
commit;
end loop;
end;
How it works:
You split the data in the persons table into chunks of about 10000 rows (this depends on gaps in the ID numbering; the maximum value of i * 10000 must be known to be larger than the maximum person_id).
You make a calculation in the MERGE statement and update the Calculations table
The LOG ERRORS clause prevents exceptions from aborting the statement. If an error occurs, the row with the error will not be updated, but it will be inserted into an error-logging table, and execution continues. To create this table, execute:
begin
DBMS_ERRLOG.CREATE_ERROR_LOG('CALCULATIONS');
end;
The table err$_calculations will be created. See the documentation for more information about the DBMS_ERRLOG package.
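After a run, the rejected rows can be inspected with an ordinary query against that log table (a quick sketch; the ORA_ERR_* columns are created automatically, and the log table also repeats the columns of CALCULATIONS):
select ora_err_number$, ora_err_mesg$, person_id
from err$_calculations;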
The second MERGE statement sets calculated_flag = 1 only for rows where no errors occurred. The INSERT statement then inserts these rows into the actions table; they can be found with a simple select from the Calculations table.
Also, I added the variables id_from and id_to to compute the ID range to update, and the variable calc_date to make sure that all rows updated in the first MERGE statement can be found later by date.
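The question also mentions DBMS_PARALLEL_EXECUTE. As a sketch of that route, the chunking by person_id can be delegated to the package and each chunk run in its own transaction; process_person_range below is a hypothetical procedure that wraps steps 1-4 for one ID range:
begin
  dbms_parallel_execute.create_task('process_persons');
  -- Split PERSONS into chunks of roughly 10 000 ids each.
  dbms_parallel_execute.create_chunks_by_number_col(
    task_name    => 'process_persons',
    table_owner  => user,
    table_name   => 'PERSONS',
    table_column => 'PERSON_ID',
    chunk_size   => 10000);
  -- Run the (hypothetical) range procedure in up to 4 parallel job slaves;
  -- :start_id and :end_id are bound by the package for every chunk.
  dbms_parallel_execute.run_task(
    task_name      => 'process_persons',
    sql_stmt       => 'begin process_person_range(:start_id, :end_id); end;',
    language_flag  => dbms_sql.native,
    parallel_level => 4);
  dbms_parallel_execute.drop_task('process_persons');
end;
/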
I am working on a data warehousing project and have therefore been implementing some ETL functions in packages. I first encountered a problem on my development laptop and thought it had something to do with my Oracle installation, but now it has "spread" to the production servers.
Two functions "sometimes" become incredibly slow. We have implemented a logging system that writes to a logging table every x rows. Where a function usually needs about 10 seconds per chunk, it "sometimes" needs up to 3 minutes. After rebuilding some indexes and restarting the function, it is as quick again as it used to be.
Unfortunately, I can't tell exactly which index it is, since restarting the function and building up the cursor it uses for its work takes some time, and we do not have the time to check each index on its own; so I just rebuild all indexes that are potentially used by the function and restart it.
The functions that have the problem use a cursor to select data from a table with about 50 million to 200 million entries, joined with a small table with about 50-500 entries. The join condition is a string comparison. We then use the primary key from the small table, obtained from the join, to update a foreign key on the main table. The update is done in a FORALL loop, which has proven to save a lot of time.
Here is a simplified version of the table structure of both tables:
CREATE TABLE "maintable"
( "pkmid" NUMBER(11,0) NOT NULL ENABLE,
"fkid" NUMBER(11,0),
"fkstring" NVARCHAR2(4) NOT NULL ENABLE,
CONSTRAINT "PK_MAINTABLE" PRIMARY KEY ("pkmid");
CREATE TABLE "smalltable"
( "pksid" NUMBER(11,0) NOT NULL ENABLE,
"pkstring" NVARCHAR2(4) NOT NULL ENABLE,
CONSTRAINT "PK_SMALLTABLE" PRIMARY KEY ("pksid");
Both tables have indexes on their string columns. Adding the primary keys, I therefore rebuild 4 indexes each time the problem happens.
We get our data in such a way that only the fkstring in the maintable is available and the fkid is set to null. In a first step, we populate the small table. This only takes minutes and is done the following way:
INSERT INTO smalltable (pksid, pkstring)
SELECT SEQ_SMALLTABLE.NEXTVAL, fkstring
FROM
(
SELECT DISTINCT mt.fkstring
FROM maintable mt
MINUS
SELECT st.pkstring
FROM smalltable st
);
commit;
This function never causes any trouble.
The following function does (it is a simplified version of the function - I have removed logging and exception handling and renamed some variables):
function f_set_fkid return varchar2 is
cursor lCursor_MAINTABLE is
SELECT MT.PKmID, st.pksid
FROM maintable mt
JOIN smalltable st ON (mt.fkstring = st.pkstring)
WHERE mt.fkid IS NULL;
lIndex number := 0;
lExitLoop boolean := false;
type lCursorType is table of lCursor_MAINTABLE%rowtype index by pls_integer;
lCurrentRow lCursor_MAINTABLE%rowtype;
lTempDataArray lCursorType;
lCommitEvery constant number := 1000;
begin
open lCursor_MAINTABLE;
loop
-- get next row, set exit condition
fetch lCursor_MAINTABLE into lCurrentRow;
if (lCursor_MAINTABLE%notfound) then
lExitLoop := true;
end if;
-- in case of cache being full, flush cache
if ((lTempDataArray.count > 0) AND (lIndex >= lCommitEvery OR lExitLoop)) then
forall lIndex2 in lTempDataArray.FIRST..lTempDataArray.LAST
UPDATE maintable mt
set fkid = lTempDataArray(lIndex2).pksid
WHERE mt.pkmid = lTempDataArray(lIndex2).pkmid;
commit;
lTempDataArray.delete;
lIndex := 0;
end if;
-- data handling, fill cache
if (lExitLoop = false) then
lIndex := lIndex + 1;
lTempDataArray(lIndex) := lCurrentRow;
end if;
exit when lExitLoop;
end loop;
close lCursor_MAINTABLE;
return null;
end;
I would be very thankful for any help.
P.S. I do know that BULK COLLECT INTO would speed up the function and probably also simplify the code a bit, but at the moment we are content with the speed the function usually has. Changing the function to use BULK COLLECT is on our plan for next year, but at the moment it is not an option (and I doubt it would solve this index problem).
If you have a table where the number of rows fluctuates wildly (as it does during ETL loads), I would use the statistics of the fully loaded table throughout the load process.
So, generate statistics when your table is fully loaded and then use those statistics for subsequent loads.
If you use statistics from when the table is half-loaded, the optimizer may be tricked into not using indexes, or into not using the fastest index. This is especially true if data is loaded in order, so that low value, high value and density are skewed.
In your case, the statistics for columns fkstring and fkid are extra important since those two columns are heavily involved in the procedure that has performance issues.
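A minimal sketch of that idea, assuming the load user owns the table: gather statistics once while maintable is fully loaded, then lock them so subsequent loads keep reusing them (the lowercase table name matches the quoted identifiers in the DDL above):
begin
  -- Gather statistics while the table holds a representative, fully loaded volume.
  dbms_stats.gather_table_stats(
    ownname => user,
    tabname => 'maintable',
    cascade => true);   -- include the indexes
  -- Freeze these statistics so later gathering does not overwrite them mid-load.
  dbms_stats.lock_table_stats(
    ownname => user,
    tabname => 'maintable');
end;
/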
function f_set_fkid return varchar2 is
cursor lCursor_MAINTABLE is
SELECT MT.PKmID, st.pksid
FROM maintable mt
JOIN smalltable st ON (mt.fkstring = st.pkstring)
WHERE mt.fkid IS NULL;
commit_every INTEGER := 1000000;
commit_counter INTEGER :=0;
begin
for c in lCursor_MAINTABLE
loop
UPDATE maintable mt
set fkid = c.pksid
WHERE mt.pkmid = c.pkmid;
commit_counter := commit_counter+1;
if mod(commit_counter, commit_every) = 0
then
commit;
commit_counter := 0;
end if;
end loop;
commit; -- commit whatever is left from the last partial batch
return null;
end;
CREATE OR REPLACE TRIGGER "DISC_CLIENT"
BEFORE INSERT ON "PURCHASE"
FOR EACH ROW
DECLARE
checkclient PURCHASE.CLIENTNO%TYPE;
BEGIN
SELECT Clientno INTO checkclient
FROM PURCHASE
GROUP BY ClientNo
HAVING SUM(Amount)=(SELECT MAX(SUM(Amount)) FROM PURCHASE GROUP BY Clientno);
IF :new.ClientNo = checkclient THEN
:new.Amount := (:new.Amount * 0.90);
END IF;
END;
/
I seem to be having a problem with this trigger. I know I can't use the WHEN() clause with subqueries, so I was hoping this would work, but it doesn't! Ideas anyone? :/
Basically I'm trying to get this trigger to apply a discount to the amount value before inserting, if the client matches the top client! : )
There's a non-pretty but easy way round this: create a view and update that. You can then explicitly state all the columns in your trigger and put them in the table. You'd also be much better off creating a 1 row, 2 column table, max_amount, and then inserting the maximum amount and clientno into that each time. You should also really have a discounted amount column in the purchase table, as you ought to know who you've given discounts to; the amount charged is then amount - discount. This gets around both the mutating table and being unable to update :new.amount, as well as making your queries much, much faster. As it stands you don't actually apply a discount if the current transaction is the highest, only if the client has placed the previous highest, so I've written it like that.
create or replace view purchase_view as
select *
from purchase;
CREATE OR REPLACE TRIGGER TR_PURCHASE_INSERT
INSTEAD OF INSERT ON PURCHASE_VIEW
FOR EACH ROW
DECLARE
checkclient max_amount.clientno%type;
checkamount max_amount.amount%type;
discount purchase.discount%type;
BEGIN
SELECT clientno, amount
INTO checkclient, checkamount
FROM max_amount;
IF :new.clientno = checkclient then
discount := 0.1 * :new.amount;
ELSIF :new.amount > checkamount then
update max_amount
set clientno = :new.clientno
, amount = :new.amount
;
END IF;
-- Don't specify columns so it breaks if you change
-- the table and not the trigger
insert into purchase
values ( :new.clientno
, :new.amount
, discount
, :new.other_column );
END TR_PURCHASE_INSERT;
/
As I remember, a trigger can't select from the table it's fired on.
Otherwise you'll get ORA-04091: table XXXX is mutating, trigger/function may not see it. Tom advises us not to put too much logic into triggers.
And if I understand your query, it should be like this:
SELECT Clientno INTO checkclient
FROM PURCHASE
GROUP BY ClientNo
HAVING SUM(Amount)=(select max (sum_amount) from (SELECT SUM(Amount) as sum_amount FROM PURCHASE GROUP BY Clientno));
This way it will return the client who spent the most money.
But I think it's better to do it this way:
select ClientNo
from (
select ClientNo, sum (Amount) as sum_amount
from PURCHASE
group by ClientNo
order by sum_amount desc)
where rownum = 1;