Oracle trigger subquery problem

CREATE OR REPLACE TRIGGER "DISC_CLIENT"
BEFORE INSERT ON "PURCHASE"
FOR EACH ROW
DECLARE
checkclient PURCHASE.CLIENTNO%TYPE;
BEGIN
SELECT Clientno INTO checkclient
FROM PURCHASE
GROUP BY ClientNo
HAVING SUM(Amount)=(SELECT MAX(SUM(Amount)) FROM PURCHASE GROUP BY Clientno);
IF :new.ClientNo = checkclient
new.Amount := (:old.Amount * 0.90);
END IF;
END;
/
I seem to be having a problem with this trigger. I know I can't use a subquery in the WHEN() clause, so I was hoping this would work, but it doesn't. Any ideas?
Basically, I'm trying to get this trigger to apply a discount to the amount value before inserting, if the client matches the top client.

There's a non-pretty but easy way around this: create a view and insert into that instead. You can then explicitly state all the columns in your trigger and insert them into the base table.
You'd also be much better off creating a one-row, two-column table, max_amount, and maintaining the maximum amount and clientno in that each time. You should also really have a discount column in the purchase table, as you ought to know who you've given discounts to; the amount charged is then amount - discount. This gets around both the mutating table and being unable to update :new.amount, as well as making your queries much, much faster.
As it stands, you don't actually apply a discount if the current transaction is the highest, only if the client has placed the previous highest, so I've written it like that.
create or replace view purchase_view as
select *
from purchase;

-- Triggers on views must be INSTEAD OF triggers
CREATE OR REPLACE TRIGGER TR_PURCHASE_INSERT
INSTEAD OF INSERT ON PURCHASE_VIEW
FOR EACH ROW
DECLARE
  checkclient max_amount.clientno%type;
  checkamount max_amount.amount%type;
  discount    purchase.discount%type;
BEGIN
  SELECT clientno, amount
  INTO checkclient, checkamount
  FROM max_amount;

  IF :new.clientno = checkclient THEN
    discount := 0.1 * :new.amount;
  ELSIF :new.amount > checkamount THEN
    UPDATE max_amount
    SET clientno = :new.clientno
      , amount   = :new.amount;
  END IF;

  -- Don't specify columns, so it breaks if you change
  -- the table and not the trigger
  INSERT INTO purchase
  VALUES ( :new.clientno
         , :new.amount
         , discount
         , :new.other_column );
END TR_PURCHASE_INSERT;
/
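The trigger above assumes the one-row max_amount table already exists. A minimal sketch of what that could look like; the column names follow the trigger, and the seeding query is an assumption:

-- Hypothetical one-row helper table assumed by the trigger above
create table max_amount (
  clientno number,
  amount   number
);

-- Seed it once from the existing data: the client with the highest total
insert into max_amount (clientno, amount)
select clientno, sum_amount
from (select clientno, sum(amount) as sum_amount
      from purchase
      group by clientno
      order by sum_amount desc)
where rownum = 1;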

As I remember, a row-level trigger can't select from the table it's fired on; otherwise you'll get ORA-04091: table XXXX is mutating, trigger/function may not see it. Tom Kyte advises us not to put too much logic into triggers.
And if I understand your query, it should be like this:
SELECT Clientno INTO checkclient
FROM PURCHASE
GROUP BY ClientNo
HAVING SUM(Amount) = (select max(sum_amount)
                      from (SELECT SUM(Amount) as sum_amount
                            FROM PURCHASE
                            GROUP BY ClientNo));
This way it will return the client who spent the most money.
But I think it's better to do it this way:
select ClientNo
from (
    select ClientNo, sum(Amount) as sum_amount
    from PURCHASE
    group by ClientNo
    order by sum_amount desc)
where rownum = 1;

Related

Cursor with iteration to grab data from last row PL/SQL

I have a test script that I'm beginning to play with, and I'm getting stuck on something that seems simple.
I want to iterate through the rows of a result set but fetch data from the last row only.
procedure e_test_send is
  cursor get_rec is
    select id,
           email_from,
           email_to,
           email_cc,
           email_subject,
           email_message
    from test_email_tab;
begin
  for rec_ in get_rec loop
    ifsapp.send_email_api.send_html_email(rec_.email_to, rec_.email_from,
                                          rec_.email_subject, rec_.email_message);
  end loop;
end e_test_send;
All I'm trying to do is send an email, with a message and to a person, from the last row only. This is a sample table that will grow in records. At the moment it has 2 rows of data; if I execute this procedure it sends 2 emails, which is not the desired action.
I hope this makes sense.
Thanks
Do you know which row is the last row? The one with the MAX(ID) value? If so, then you could base the cursor on a straightforward
SELECT id,
email_from,
email_to,
email_cc,
email_subject,
email_message
FROM test_email_tab
WHERE id = (SELECT MAX (id) FROM test_email_tab)
As it scans the same table twice, its performance will drop as the number of rows gets higher and higher. In that case, consider
WITH
temp
AS
(SELECT id,
email_from,
email_to,
email_cc,
email_subject,
email_message,
ROW_NUMBER () OVER (ORDER BY id DESC) rn
FROM test_email_tab)
SELECT t.id,
t.email_from,
t.email_to,
t.email_cc,
t.email_subject,
t.email_message
FROM temp t
WHERE t.rn = 1
which scans the table only once: it sorts rows by ID in descending order and returns the one that ranks highest (i.e. the last).
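Putting that back into the question's procedure, a minimal sketch; the signature and the ifsapp.send_email_api call are taken from the question, and only the cursor query changes:

procedure e_test_send is
  cursor get_rec is
    select id, email_from, email_to, email_cc,
           email_subject, email_message
    from test_email_tab
    where id = (select max(id) from test_email_tab);
begin
  -- the cursor now returns at most one row, so only one email is sent
  for rec_ in get_rec loop
    ifsapp.send_email_api.send_html_email(rec_.email_to, rec_.email_from,
                                          rec_.email_subject, rec_.email_message);
  end loop;
end e_test_send;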

oracle trigger exact fetch returns more than requested number of rows

I am trying to get the quantity from the Transaction table: the quantity of the sell and the quantity of the buy, using Portfolio_Number, Stock_Code and Buy_Sell to identify the rows.
Transaction Table (Portfolio_Number, Transaction_Date,
Stock_Code, Exchange_Code, Broker_Number, Buy_Sell, Quantity, Price_Per_Share)
create or replace trigger TR_Q5
before insert on Transaction
for each row
declare
  V_quantityB number(7,0);
  V_quantityS number(7,0);
begin
  if (:new.buy_sell = 'S') then
    select quantity
    into V_quantityS
    from transaction
    where :new.portfolio_number = portfolio_number
      and :new.stock_code = stock_code
      and buy_sell = 'S';
    if V_quantityS >= 1 then
      Raise_Application_Error(-20020, 'not S');
    end if;
  end if;
end;
/
Then I try to insert:
INSERT INTO Transaction
(Portfolio_Number, Transaction_Date, Stock_Code, Exchange_Code, Broker_Number, Buy_Sell, Quantity, Price_Per_Share)
values
(500, To_Date('09-Feb-2020 16:41:00', 'DD-Mon-YYYY HH24:MI:SS'), 'IBM', 'TSX', 4, 'S', 10000, 25.55 );
but it fails with the error:
ORA-01422: exact fetch returns more than requested number of rows
The error you mentioned is self-explanatory: the SELECT you wrote should return just one row, but it returns more than that. As you can't put several rows into a scalar number variable, you got the error.
What would fix it? For example, aggregation:
select sum(quantity)
into V_quantityS
...
or perhaps
select distinct quantity
or even
select quantity
...
where rownum = 1
However, beware: the trigger is created on the transaction table, and you are selecting from it at the same time, which leads to the mutating table error (ORA-04091). What to do about that? Use a compound trigger, as sketched below.
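A minimal sketch of that compound trigger, keeping the question's table and error message. Collecting the keys at row level and doing the check after the statement avoids ORA-04091; the exact business rule (at most one existing sell row per portfolio and stock) is an assumption:

create or replace trigger TR_Q5
for insert on Transaction
compound trigger
  -- remember the keys of inserted sell rows; we may not query
  -- TRANSACTION from the row-level timing point
  type t_key is record (
    portfolio_number Transaction.Portfolio_Number%type,
    stock_code       Transaction.Stock_Code%type
  );
  type t_keys is table of t_key index by pls_integer;
  g_keys t_keys;

  before each row is
    k t_key;
  begin
    if :new.Buy_Sell = 'S' then
      k.portfolio_number := :new.Portfolio_Number;
      k.stock_code       := :new.Stock_Code;
      g_keys(g_keys.count + 1) := k;
    end if;
  end before each row;

  after statement is
    v_cnt pls_integer;
  begin
    -- at statement level the table is no longer mutating
    for i in 1 .. g_keys.count loop
      select count(*) into v_cnt
      from Transaction
      where Portfolio_Number = g_keys(i).portfolio_number
        and Stock_Code = g_keys(i).stock_code
        and Buy_Sell = 'S';
      if v_cnt > 1 then  -- more sell rows than the one just inserted
        Raise_Application_Error(-20020, 'not S');
      end if;
    end loop;
    g_keys.delete;
  end after statement;
end TR_Q5;
/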

Sequential vs parallel solution

I will try to present my problem as simplified as possible.
Assume that we have 3 tables in Oracle 11g.
Persons (person_id, name, surname, status, etc )
Actions (action_id, person_id, action_value, action_date, calculated_flag)
Calculations (calculation_id, person_id,computed_value,computed_date)
What I want is: for each person that meets certain criteria (let's say status = 3),
I should get the sum of action_values from the Actions table where calculated_flag = 0 (something like select sum(action_value) from Actions where calculated_flag = 0 and person_id = current_id).
Then I shall use that sum in some kind of formula and update the Calculations table for that specific person_id:
update Calculations set computed_value=newvalue, computed_date=sysdate
where person_id=current_id
After that calculated_flag for participated rows will be set to 1.
update Actions set calculated_flag=1
where calculated_flag=0 and person_id=current_id
Now this can be easily done sequentially, by creating a cursor that will run through Persons table and then execute each action needed for the specific person.
(I don't provide the code for the sequential solution as the above is just an example that resembles my real-world setup.)
The problem is that we are talking about quite a big amount of data, and the sequential approach seems like a waste of computational time.
It seems to me that this task could be performed in parallel for a number of person_ids.
So the question is:
Can this kind of task be performed using parallelization in PL/SQL?
What would the solution look like? That is, what special packages (e.g. DBMS_PARALLEL_EXECUTE), keywords (e.g. bulk collect), methods should be used and in what manner?
Also, should I have any concerns about partial failure of parallel updates?
Note that I am not quite familiar with parallel programming with PL/SQL.
Thanks.
Edit 1.
Here is my pseudo code for my sequential solution:
procedure sequential_solution is
  cursor persons_of_interest is
    select person_id
    from persons
    where status = 3;
  tempvalue number;
  newvalue  number;
begin
  for person in persons_of_interest
  loop
    begin
      savepoint personsp;
      --step 1
      select sum(action_value) into tempvalue
      from actions
      where calculated_flag = 0
        and person_id = person.person_id;

      newvalue := dosomemorecalculations(tempvalue);

      --step 2
      update calculations set computed_value = newvalue, computed_date = sysdate
      where person_id = person.person_id;

      --step 3
      update actions set calculated_flag = 1
      where calculated_flag = 0 and person_id = person.person_id;

      --step 4 (didn't mention this step before - sorry)
      insert into actions
        ( person_id, action_value, action_date, calculated_flag )
      values
        ( person.person_id, 100, sysdate, 0 );
    exception
      when others then
        rollback to personsp;
        -- this call is defined with pragma AUTONOMOUS_TRANSACTION:
        log_failure(person.person_id);
    end;
  end loop;
end;
Now, how would I speed up the above, either with forall and bulk collect or with parallel programming, under the following constraints:
- proper memory management (taking into consideration the large amount of data);
- for a single person, if one part of the step sequence fails, all steps should be rolled back and the failure logged.
I can propose the following. Let's say you have 1,000,000 rows in the persons table, and you want to process 10,000 persons per iteration. You can do it this way:
declare
  id_from   persons.person_id%type;
  id_to     persons.person_id%type;
  calc_date date := sysdate;
begin
  for i in 1 .. 100 loop
    id_from := (i - 1) * 10000;
    id_to   := i * 10000;

    -- Updating the Calculations table; errors are logged into the err$_calculations table
    merge into Calculations c
    using (select p.person_id, sum(a.action_value) newvalue
           from Actions a join persons p on p.person_id = a.person_id
           where a.calculated_flag = 0
             and p.status = 3
             and p.person_id between id_from and id_to
           group by p.person_id) s
    on (s.person_id = c.person_id)
    when matched then update
      set c.computed_value = s.newvalue,
          c.computed_date  = calc_date
    log errors into err$_calculations reject limit unlimited;

    -- Updating the actions table only for those person_ids that had no errors
    merge into actions a
    using (select distinct p.person_id
           from persons p join Calculations c on p.person_id = c.person_id
           where c.computed_date = calc_date
             and p.person_id between id_from and id_to) s
    on (s.person_id = a.person_id)
    when matched then update
      set a.calculated_flag = 1;

    -- Inserting the list of persons for whom calculations were successful
    insert into actions (person_id, action_value, action_date, calculated_flag)
    select distinct p.person_id, 100, calc_date, 0
    from persons p join Calculations c on p.person_id = c.person_id
    where c.computed_date = calc_date
      and p.person_id between id_from and id_to;

    commit;
  end loop;
end;
How it works:
You split the data in the persons table into chunks of about 10,000 rows (this depends on gaps in the ID values; the maximum value of i * 10000 should be known to be larger than the maximal person_id).
You make a calculation in the MERGE statement and update the Calculations table
The LOG ERRORS clause prevents exceptions: if an error occurs, the row with the error is not updated, but it is inserted into an error-logging table, and execution is not interrupted. To create this table, execute:
begin
DBMS_ERRLOG.CREATE_ERROR_LOG('CALCULATIONS');
end;
The table err$_calculations will be created. More information about the DBMS_ERRLOG package can be found in the documentation.
The second MERGE statement sets calculated_flag = 1 only for rows where no errors occurred. The INSERT statement then inserts these rows into the actions table; they can be found simply by selecting from the Calculations table.
Also, I added the variables id_from and id_to to calculate the ID range to update, and the variable calc_date to make sure that all rows updated in the first MERGE statement can be found later by date.
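If you do want true parallelism rather than a serial loop, the chunking above maps almost directly onto DBMS_PARALLEL_EXECUTE, which the question mentions. A minimal sketch, where my_calc_proc is a hypothetical procedure wrapping one iteration of the loop body with id_from/id_to parameters:

begin
  dbms_parallel_execute.create_task('calc_task');

  -- chunk PERSONS by person_id, roughly 10000 ids per chunk
  dbms_parallel_execute.create_chunks_by_number_col(
    task_name    => 'calc_task',
    table_owner  => user,
    table_name   => 'PERSONS',
    table_column => 'PERSON_ID',
    chunk_size   => 10000);

  -- :start_id and :end_id are bound by the package for each chunk;
  -- my_calc_proc (hypothetical) runs the MERGE logic for that range
  -- and commits, so a failed chunk can be retried independently
  dbms_parallel_execute.run_task(
    task_name      => 'calc_task',
    sql_stmt       => 'begin my_calc_proc(:start_id, :end_id); end;',
    language_flag  => dbms_sql.native,
    parallel_level => 4);

  dbms_parallel_execute.drop_task('calc_task');
end;
/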

Trigger to calculate subtotal

I've been trying to implement this trigger for a while now and am making progress (I think!), but now I am getting a mutation error.
What I have here are three entities (that are relevant here): Customer_Order (total etc.), Order_Line (quantity, subtotal etc.) and Products (stock, price). Order_Line is a link entity, so a product can be in many order lines and a customer order can have many order lines, but an order line can only appear once in an order and can only contain one product. The purpose of the trigger is to take the subtotal from Order_Line (or, I now think, the price from Products) and the quantity from Order_Line, multiply them, and update the new order line's subtotal.
So if I insert an order line with my product foreign key, a quantity of 3 and a price of 4.00, the trigger multiplies the two to equal 12 and updates the subtotal. Now, I am thinking it's right to use price here instead of Order_Line's subtotal in order to fix the mutation error (which occurs because I am asking the trigger to query the table which is being accessed by the triggering statement, right?), but how do I fix the quantity issue? Quantity won't always be the same value as stock; it has to be less than or equal to stock. So does anyone know how I can fix this to select from Products and update Order_Line? Thanks.
CREATE OR REPLACE TRIGGER create_subtotal
BEFORE INSERT OR UPDATE ON Order_Line
FOR EACH ROW
DECLARE
  currentSubTotal order_line.subtotal%type;
  currentQuantity order_line.quantity%type;
BEGIN
  select order_line.subtotal, order_line.quantity
  into currentSubTotal, currentQuantity
  from order_line
  where product_no = :new.product_no;

  IF (currentQuantity > -1) then
    update order_line
    set subtotal = currentSubTotal * currentQuantity
    where line_no = :new.line_no;
  END IF;
END;
.
run
EDIT: I think I could use the :new syntax to use the quantity value from the triggering statement. I'll try this but I'd appreciate confirmation and help still, thanks.
It sounds like you want something like
CREATE OR REPLACE TRIGGER create_subtotal
BEFORE INSERT OR UPDATE ON order_line
FOR EACH ROW
DECLARE
  l_price products.price%type;
BEGIN
  SELECT price
  INTO l_price
  FROM products
  WHERE product_no = :new.product_no;

  IF ( :new.quantity > -1 )
  THEN
    :new.subtotal := :new.quantity * l_price;
  END IF;
END;
If this is something other than homework, however, it doesn't really make sense to pull the price from the PRODUCTS table in this trigger. Presumably, a product's price will change over time. But the price is fixed for a particular order when the order is placed. If the trigger was only defined on INSERT, it would probably be reasonable to just fetch the current price. But if you want to recalculate the subtotal of the line when a row is updated, you'd need to fetch the price as of the time the order was placed (and that assumes that you don't charge different customers different prices at the same time).
From a normalization standpoint, it also tends not to make sense to store calculated fields in the first place. It would make more sense to store the quantity and the price in the order_line table and then calculate the subtotal for the line in a view (or, if you're using 11g, as a virtual column in the table).
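To illustrate that last point, a minimal sketch, assuming (as suggested above) that order_line stores its own price column; the column names here are assumptions:

-- Option 1: compute the subtotal in a view (works on any version)
create or replace view order_line_v as
select line_no, product_no, quantity, price,
       quantity * price as subtotal
from order_line;

-- Option 2 (11g+): a virtual column on the table itself
-- (named subtotal_calc to avoid clashing with an existing subtotal column)
alter table order_line add (
  subtotal_calc generated always as (quantity * price) virtual
);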
The mutation error does not occur because you are updating the table; it occurs because you are querying from the table that is already being updated.
If I'm understanding correctly what you want to do:
CREATE OR REPLACE TRIGGER create_subtotal
BEFORE INSERT OR UPDATE ON Order_Line
FOR EACH ROW
DECLARE
  currentPrice products.price%TYPE;
BEGIN
  -- Get the current price for the product
  SELECT price
  INTO currentPrice
  FROM products
  WHERE product_no = :new.product_no;

  -- Set the new subtotal to the current price multiplied by the order quantity
  :new.subtotal := currentPrice * :new.quantity;
END;
/
(I'm unclear why you have a test for a quantity below 0, and what you want to occur in this case. If you want to set the subtotal to NULL or 0 in this case, it should be quite easy to modify the above.)

How to? Correct sql syntax for finding the next available identifier

I think I could use some help here from more experienced users...
I have an integer field in a table, let's call it SO_ID in a table SO, and for each new row I need to calculate a new SO_ID based on the following rules:
1) SO_ID consists of 6 digits, where the first 3 are an area code and the last three are the sequence number within this area, e.g.
309001
309002
309003
2) so the next new row will have a SO_ID of value
309004
3) if someone deletes the row with SO_ID value = 309002, then the next new row must recycle this value, so the next new row has got to have the SO_ID of value
309002
Can anyone please provide me with either a SQL function or a PL/SQL function (perhaps a trigger straight away?) that would return the next available SO_ID I need to use?
I reckon I could make use of the keyword rownum in my SQL, but the following just doesn't work properly:
select max(so_id), max(rownum) from (
  select so_id, rownum, cast(substr(cast(so_id as varchar(6)), 4, 3) as int)
  from SO
  where length(so_id) = 6
    and substr(cast(so_id as varchar(6)), 1, 3) = '309'
    and cast(substr(cast(so_id as varchar(6)), 4, 3) as int) = rownum
  order by so_id
);
thank you for all your help!
This kind of logic is fraught with peril. What if two sessions calculate the same "next" value, or both try to reuse the same "deleted" value? Since your column is an integer, you'd probably be better off querying "between 309001 and 309999", but that begs the question of what happens when you hit the thousandth item in area 309?
Is it possible to make SO_ID a foreign key to another table as well as a unique key? You could pre-populate the parent table with all valid IDs (or use a function to generate them as needed), and then it would be a simple matter to select the lowest one where a child record doesn't exist.
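A minimal sketch of that pre-populated ID pool idea; the pool table name is illustrative:

-- Pool of every valid SO_ID for area 309
create table so_id_pool (so_id number(6) primary key);

insert into so_id_pool
select 309000 + level from dual connect by level <= 999;

-- SO.SO_ID would be a foreign key to SO_ID_POOL; the next available
-- id is then simply the lowest pool id with no child row
select min(p.so_id)
from so_id_pool p
where not exists (select 1 from so s where s.so_id = p.so_id);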
Well, we came up with this; it sort of works. Concurrency is 'solved' via a unique constraint:
select min(lastnumber)
from
(
  select so_id,
         so_id - LAG(so_id, 1, so_id) OVER (ORDER BY so_id) AS diff,
         LAG(so_id, 1, so_id) OVER (ORDER BY so_id) AS lastnumber
  from so_miso
  where substr(cast(so_id as varchar(6)), 1, 3) = '309'
    and length(so_id) = 6
  order by so_id
) a
where diff > 1;
Do you really need to compute & store this value at the time a row is inserted? You would normally be better off storing the area code and a date in a table and computing the SO_ID in a view, i.e.
SELECT area_code ||
       LPAD( DENSE_RANK() OVER ( PARTITION BY area_code
                                 ORDER BY date_column ),
             3,
             '0' ) AS so_id,
       <<other columns>>
FROM your_table
or having a process that runs periodically (nightly, for example) to assign the SO_ID using similar logic.
If your application is not pure SQL, you could do this in application code (i.e. Java code); that would be more straightforward.
If you are recycling numbers when rows are deleted, your base table must be consulted when generating the next number. "Legacy" pre-relational schemes that attempt to encode information in numbers are a pain to make airtight when numbers must be recycled after deletes, as you say yours must.
If you want to avoid having to scan your table looking for gaps, an after-delete routine must write the deleted number to a "ReuseMe" column in a separate table. The insert routine then does this (see the sketch after this list):
- begins a transaction
- selects the next-number table for update
- uses a ReuseMe number if available, else uses the next number
- clears the ReuseMe number if applicable, or increments the next number in the next-number table
- commits the transaction
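A minimal sketch of that routine; the next_number table layout and the function name are assumptions, and the ReuseMe column is folded into the next-number table for brevity:

create table next_number (
  area_code number(3) primary key,
  next_seq  number(3) not null,
  reuse_me  number(6)            -- filled in by the after-delete routine
);

create or replace function next_so_id(p_area in number) return number is
  v_row next_number%rowtype;
begin
  -- lock the row so two sessions can't hand out the same value;
  -- the caller's commit or rollback releases the lock
  select * into v_row
  from next_number
  where area_code = p_area
  for update;

  if v_row.reuse_me is not null then
    -- recycle a deleted number and clear it
    update next_number set reuse_me = null where area_code = p_area;
    return v_row.reuse_me;
  else
    -- hand out the next fresh number and increment the counter
    update next_number set next_seq = next_seq + 1 where area_code = p_area;
    return p_area * 1000 + v_row.next_seq;
  end if;
end;
/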
Ignoring the issues about concurrency, the following should give a decent start.
If 'traffic' on the table is low enough, go with locking the table in exclusive mode for the duration of the transaction.
create table blah (soc_id number(6));
-- seed some sample data (user_tables is used just to generate a few rows)
insert into blah select 309000 + rownum from user_tables;
delete from blah where soc_id = 309003;
commit;

create or replace function get_next (i_soc in number) return number is
  v_min number := i_soc * 1000;
  v_max number := v_min + 999;
  v_gap number;
begin
  lock table blah in exclusive mode;
  select min(rn) into v_gap
  from
    (select rownum rn from dual connect by level <= 999
     minus
     select to_number(substr(soc_id, 4))
     from blah
     where soc_id between v_min and v_max);
  -- return the full SO_ID, not just the gap position
  return v_min + v_gap;
end;
/
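An example call; note that the exclusive lock taken inside get_next is held until the transaction ends, which is what serializes concurrent inserters:

declare
  v_id number;
begin
  v_id := get_next(309);   -- returns 309003 with the sample data above
  insert into blah (soc_id) values (v_id);
  commit;                  -- releases the exclusive table lock
end;
/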
