Trigger created with compilation errors for Oracle SQL - oracle

I tried to create code using Oracle trigger syntax, but I get a warning:
trigger created with compilation errors
Here is the code. I want to raise an error when the price in table itemType is set to over four times the minimum old price. How can I do that?
I tried to fix the previous problem: I created the trigger successfully, but it only works on insert; the "mutating table" error still occurs when I try to update the price on table itemType. How can I change my code so that it also works when updating?
CREATE OR REPLACE TRIGGER tr_price
  BEFORE INSERT OR UPDATE ON itemType
  FOR EACH ROW
DECLARE
  minimum float;
BEGIN
  SELECT MIN(price)
    INTO minimum
    FROM itemType;

  IF :new.price > 4 * minimum THEN
    raise_application_error(-20000, 'new price can not be over 4 times min old price');
  END IF;
END;
/

As clarified in the comments (or, if you follow my request, as edited into the post itself), you are running into the "mutating table" error, and you would like to know how to fix it.
Alas, this is not a programming error. It is a logical problem (which Oracle chose not to ignore, by raising the "mutating table" error).
Oracle is a multi-user environment, and SQL allows you to insert/alter/delete many rows in one transaction. Both of these things are GOOD, but they mean you can't do things like what you are trying to do.
What is the "old" minimum price? Suppose you had 20 rows already, and the minimum price was $55. You add one more row with the price $50. Now you try to add another row with price $210. What is "the old minimum"? $220? Why, because the row with the price of $50 was inserted, but not committed yet? Or is it $200? What if, in the same transaction, but later in your code, you DELETE the row with the $55 price and the next lowest is $60 - shouldn't you allow prices up to $240?
Then compound this problem: you insert a few rows, but you don't commit yet. Then someone else inserts (or updates or deletes) in the same table, and they commit their transaction. Now you want to commit. Shouldn't the "check" be performed again, at the end of the transaction, and not "for each row"?
The whole idea of tying the behavior of DML statements to the data in the table as it exists at one time or another - data that may change in the middle of your transaction, either through your own other activity or through the activity of other users at the same time - is something you must understand clearly and discuss with the "business managers" or "business users". They may not know or understand SQL, but they must understand this issue and come up with logically consistent requirements. Only allowing inserts where the price is "no more than four times the old minimum" is not internally consistent, and Oracle, wisely, does not allow you to do something that wouldn't make sense.
So - the short answer is "you can't fix the solution, because the solution is not broken, the problem itself is broken." You must fix the problem first.

Related

When does Oracle sql exclusively lock a row in an update statement?

I'm trying to see whether I can use a database lock to deal with race conditions. For example:
CREATE TABLE T1
(
  T1_ID      NUMBER PRIMARY KEY,
  AMT        NUMBER,
  STATUS1    CHAR(1),
  STATUS2    CHAR(1),
  UPDATED_BY VARCHAR2(25)
);
insert into T1 values (order_seq.nextval, 1, 'N', 'N', 'U0');
Later, two users can update the order record at the same time. The requirement is that only one can proceed while the other should NOT. We could certainly use a distributed lock manager (DLM) to do this, but I figure a database lock may be more efficient.
User 1:
update T1 set status1='Y', updated_by='U1' where status1='N';
User 2:
update T1 set status2='Y', updated_by='U2' where status1='N';
Two users are doing these at the same time. Ideally only one should be allowed to proceed. I played with this in SQL*Plus and also wrote a little Java test program letting two threads do these simultaneously. I got the same result. Let's say User 1 gets the DB row lock first. It returns "1 row updated". The second session is blocked waiting for the row lock until the 1st session commits or rolls back. The question is REALLY this:
An update with a where clause seems like two operations: first it does an implicit select based on the where clause to pick the row that will be updated. Since Oracle runs at the READ COMMITTED isolation level by default, I expected both UPDATE statements to pick the single record in the DB. As a result, I expected both UPDATE statements to eventually return "1 row updated", although one would wait until the other transaction commits. HOWEVER, that's not what I saw. The second UPDATE returns "0 rows updated" after the first commits. It seems that Oracle actually runs the where clause AGAIN after the first session commits, which produces the "0 rows updated" result.
This is strange to me. I thought I would run into the classical "lost update" phenomenon.
Can somebody please explain what's going on here? Thanks very much!
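For reference, here is the timeline being described, written out against the T1 table above; the comments reflect my reading of the behaviour the question reports (Oracle re-checking the WHERE clause after the lock wait), not anything from an accepted answer:
-- Session 1:
update T1 set status1 = 'Y', updated_by = 'U1' where status1 = 'N';
-- "1 row updated"; the row stays locked until session 1 commits or rolls back

-- Session 2 (while session 1 is still open):
update T1 set status2 = 'Y', updated_by = 'U2' where status1 = 'N';
-- blocks, waiting on the row lock held by session 1

-- Session 1:
commit;

-- Session 2 resumes: the WHERE clause is re-evaluated against the committed data,
-- status1 is now 'Y', so the statement reports "0 rows updated"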

Oracle Forms popup window appears multiple times

In Oracle Forms 10g, I have the following code in WHEN-VALIDATE-RECORD trigger.
if (some_condition > 0) then
  message('test');
  RAISE FORM_TRIGGER_FAILURE;
end if;
The problem is that message('test'); appears multiple times. How can I make sure it appears only once?
The WHEN-VALIDATE-RECORD trigger fires for each record that needs to be validated when you leave the record or press commit.
In your case I assume the message appears after a commit, and that all of your rows (or at least more than one) have been marked as changed, for example in the POST-QUERY trigger.
Because more than one row has changed, the trigger fires for each of those rows and you get the message multiple times.
To check this, query your records and then commit without changing anything.
It should say there is nothing changed to commit. If it instead commits, for example, 10 rows, then this is your problem.

Populate a column on update (create too?), and why "FOR EACH ROW"?

I have a table of people who belong to various sites. These sites can change, but they don't very often. So when we create an attendance record (a learner_session object) we don't store the site. But this has caused a problem in reporting how many training hours a site has, because some people have changed sites over the years. Not by much, but we'd like to get this right.
So I've added a site_at_the_time column to the learner_session table. I want to auto-populate this with the site the person was at when they attended the session. But I'm not sure how to reference this. For some reason (I'm guessing to speed development or something) the learner_id is allowed to be null. So I'm currently planning to do an update trigger. The learner_id shouldn't ever get updated, and if it ever did somehow, the entire record would be junk so I'm not worried about it overwriting it.
The trigger I have now is
create trigger set_site_at_the_time
after update of learner_id on lrn_session
begin
:new.site_at_the_time:= (select site_id from learner who where :new.learner_id = who.learner_id);
end;
which leads me to the following error:
ORA-04082: NEW or OLD references not allowed in table level triggers
Now, I've done some research and found I need to use a FOR EACH ROW - and I'm wondering what exactly this FOR EACH ROW does - is it every row captured by the trigger? Or is it every row in the table?
Also, will this trigger when I create a record too? So if I do insert into learner_session(id,learner_id,...) values(learner_session_id_seq.nextval,1234,...) will this capture that appropriately?
And while I'm here, I might as well see if there's something else I'm doing wrong with this trigger. But I'm mainly asking to figure out what the FOR EACH ROW is supposed to do and if it triggers properly. =)
FOR EACH ROW means that the trigger will fire once for each row that is updated by your SQL statement. Without this clause, the trigger will only fire once, no matter how many rows are affected. If you want to change values as they're being inserted, you have to use FOR EACH ROW, because otherwise the trigger can't know which :new and :old values to use.
As written, the trigger only fires on update. To make it also fire upon insert, you'd need to change the definition:
CREATE TRIGGER set_site_at_the_time
  BEFORE INSERT OR UPDATE OF learner_id
  ON lrn_session
  FOR EACH ROW
BEGIN
  SELECT site_id
    INTO :new.site_at_the_time
    FROM learner who
   WHERE :new.learner_id = who.learner_id;
END set_site_at_the_time;
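To address the "will this fire on insert too" part: with BEFORE INSERT OR UPDATE and FOR EACH ROW in place, an insert like the one in the question should populate the column as well. A quick check might look like this (the sequence name comes from the question; the exact column list and the existence of learner 1234 are assumptions):
insert into lrn_session (id, learner_id)
values (learner_session_id_seq.nextval, 1234);

-- the trigger has already filled in site_at_the_time for the new row
select id, learner_id, site_at_the_time
  from lrn_session
 where learner_id = 1234;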

Oracle: difference between max(id)+1 and sequence.nextval

I am using Oracle
What is the difference between creating an ID using max(id)+1 and using sequence.nextval? Where should each be used, and when?
Like:
insert into student (id, name) values ((select max(id)+1 from student), 'abc');
and
insert into student (id,name) values (SQ_STUDENT.nextval, 'abc');
SQ_STUDENT.nextval sometimes gives a duplicate record error...
Please help me with this doubt.
With the select max(id) + 1 approach, two sessions inserting simultaneously will see the same current max ID from the table, and both insert the same new ID value. The only way to use this safely is to lock the table before starting the transaction, which is painful and serialises the transactions. (And as Stijn points out, values can be reused if the highest record is deleted). Basically, never use this approach. (There may very occasionally be a compelling reason to do so, but I'm not sure I've ever seen one).
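For illustration, a minimal sketch of that "lock the table first" variant, using the student table from the question, just to show why it serialises everything (not a recommendation; every session that inserts must take the same lock first for this to work at all):
lock table student in exclusive mode;

insert into student (id, name)
select coalesce(max(id), 0) + 1, 'abc'
  from student;

commit;  -- the table lock is held until commit/rollback, blocking all other writers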
The sequence guarantees that the two sessions will get different values, and no serialisation is needed. It will perform better and be safer, easier to code and easier to maintain.
The only way you can get duplicate errors using the sequence is if records already exist in the table with IDs above the sequence value, or if something is still inserting records without using the sequence. So if you had an existing table with manually entered IDs, say 1 to 10, and you created a sequence with a default start-with value of 1, the first insert using the sequence would try to insert an ID of 1 - which already exists. After trying that 10 times the sequence would give you 11, which would work. If you then used the max-ID approach to do the next insert that would use 12, but the sequence would still be on 11 and would also give you 12 next time you called nextval.
The sequence and table are not related. The sequence is not automatically updated if a manually-generated ID value is inserted into the table, so the two approaches don't mix. (Among other things, the same sequence can be used to generate IDs for multiple tables, as mentioned in the docs).
If you're changing from a manual approach to a sequence approach, you need to make sure the sequence is created with a start-with value that is higher than all existing IDs in the table, and that everything that does an insert uses the sequence only in the future.
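A minimal sketch of that switch-over, using the names from the question and assuming the current MAX(id) in student is 10:
-- create the sequence so it starts above every manually entered ID
create sequence sq_student start with 11;

-- from now on, every insert must use the sequence
insert into student (id, name) values (sq_student.nextval, 'abc');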
Using a sequence works if you intend to have multiple users. Using a max does not.
If you do a max(id) + 1 and you allow multiple users, then multiple sessions that are both operating at the same time will regularly see the same max and, thus, will generate the same new key. Assuming you've configured your constraints correctly, that will generate an error that you'll have to handle. You'll handle it by retrying the INSERT which may fail again and again if other sessions block you before your session retries but that's a lot of extra code for every INSERT operation.
It will also serialize your code. If I insert a new row in my session and go off to lunch before I remember to commit (or my client application crashes before I can commit), every other user can be prevented from inserting a new row until I get back and commit, or until the DBA kills my session.
To add to the other answers, a couple of issues.
Your max(id)+1 syntax will also fail if there are no rows in the table already, so use:
Coalesce(Max(id),0) + 1
There's nothing wrong with this technique if you only have a single process that inserts into the table, as might be the case with a data warehouse load, and if max(id) is fast (which it probably is).
It also avoids the need for code to synchronise values between tables and sequences if you are moving or restoring data to a test system, for example.
You can extend this method to multirow insert by using:
Coalesce(max(id),0) + rownum
I expect that might serialise a parallel insert, though.
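For example, a sketch of that multi-row form, assuming the new rows come from a hypothetical staging table new_students:
insert into student (id, name)
select (select coalesce(max(id), 0) from student) + rownum,
       s.name
  from new_students s;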
Some techniques don't work well with these methods. They rely of course on being able to issue the select statement, so SQL*Loader might be ruled out. However SQL*Loader has support for this technique in general through the SEQUENCE parameter of the column specification: http://docs.oracle.com/cd/E11882_01/server.112/e22490/ldr_field_list.htm#i1008234
Assuming MAX(ID) is actually fast enough, wouldn't it be possible to:
1. First get MAX(ID)+1
2. Then get NEXTVAL
3. Compare those two and increase the sequence in case NEXTVAL is smaller than MAX(ID)+1
4. Use NEXTVAL in the INSERT statement
In that case I would have a fully stable procedure, and manual inserts would also be allowed without worrying about updating the sequence (as sketched below).
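Here is a rough PL/SQL sketch of those steps, reusing the student table and SQ_STUDENT sequence from the question. Note that, as the other answers point out, there is still a window between reading MAX(ID) and inserting in which another session can do the same thing, so this is not actually safe under concurrency:
declare
  v_manual_max number;
  v_next       number;
begin
  -- step 1: current MAX(ID) + 1
  select coalesce(max(id), 0) + 1 into v_manual_max from student;

  -- step 2: next sequence value
  select sq_student.nextval into v_next from dual;

  -- step 3: if the sequence has fallen behind manual inserts, pull values until it catches up
  while v_next < v_manual_max loop
    select sq_student.nextval into v_next from dual;
  end loop;

  -- step 4: use the sequence value in the insert
  insert into student (id, name) values (v_next, 'abc');
end;
/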

Oracle Apex - Updating a view with instead-of trigger

Apex beginner here. I have a view in my Oracle database of the form:
create or replace view vw_awkward_view as
select unique tab1.some_column1,
tab2.some_column1,
tab2.some_column2,
tab2.some_column3
from table_1 tab1,
table_2 tab2
WHERE ....
I need the 'unique' clause on 'tab1.some_column1' because it has many entries in its underlying table. I also need to include 'tab1.some_column1' in my view because the rest of the data doesn't make much sense without it.
In Apex, I want to create a report on this view with a form for editing it (update only). I do NOT need to edit tab1.some_column1. Only the other columns in the view need to be editable. I can normally achieve this using an 'instead-of' trigger, but this doesn't look possible when the view contains a 'distinct', 'unique' or 'group by' clause.
If I try to update a row on this view I get the following error:
ORA-02014: cannot select FOR UPDATE from view with DISTINCT, GROUP BY, etc.
How can I avoid this error? I want my 'instead-of' trigger to kick in and perform the update and I don't need to edit the column which has the 'unique' clause, so I think it should be possible to do this.
I think that you should be able to remove the "unique".
If tab2.some_column1, tab2.some_column2 and tab2.some_column3 are not unique, then how do you want to update them?
If they are unique, then the whole result (tab1.some_column1, tab2.some_column1, tab2.some_column2, tab2.some_column3) is unique as well.
When you state "unique" or "distinct" in a SQL query, it applies to all selected columns, not only to 'tab1.some_column1'.
Hope I'm in the correct direction of your question here ;)
Your query could be achieved by doing something like:
select a.some_column1, tab2.some_column1, tab2.some_column2, tab2.some_column3
from table_2 tab2
join (select distinct some_column1 from table_1) a
on tab2.column_in_tab1 = a.some_column1
You get the ORA-02014 error because of the automatically generated ApplyMRU process. This process attempts to lock the changed row(s):
begin
  for r in (select ...
              from vw_awkward_view
             where <your first defined PK column> = 'value for PK1'
               for update nowait)
  loop
    null;
  end loop;
end;
That's a bummer, and means you won't be able to use the generated process. You'll have to write your own process which does the updating.
For this, you'll have to use the F## arrays in apex_application.
If this sounds totally unfamiliar, take a look at:
Custom submit process, and on using the apex_application arrays.
Also, here is a how-to for apex from 2004 from Oracle itself. It still uses lots of htmldb references, but the gist of it is there.
(it might be a good idea to use the apex_item interface to build up your form, and have control over what is generated and what array it takes.)
What it comes down to is: loop over the array containing your items and do an UPDATE on your view with the submitted values.
Of course, you don't have locking this way, nor a way to prevent unnecessary updates.
Locking you can do yourself, for example by using the select ... for update method. You'd have to lock the correct rows in the table(s) you want to alter before you update them. If the locking fails, your process should fail.
As for the 'lost update' story: here you'd need to check the MD5 checksums. A checksum is generated from the editable columns in your form and put in the HTML code. On submit, this checksum is then compared to a newly generated checksum from those same columns, but with the values from the database at the time of submit. If the checksums differ, it means the record has changed between the page load and the page submit. Your process should then fail, because the record has been altered and you don't want those changes overwritten. (If you go the apex_item way, then don't forget to include an MD5_CHECKSUM call, or MD5_HIDDEN.)
Important note though: whether you use apex_item or the standard form functionality, the checksum is built up from a string that is then hashed. As you can see in apex_item.md5_hidden, checksums are generated using DBMS_OBFUSCATION_TOOLKIT.MD5.
You can get the checksum of the values in the DB in two ways: wwv_flow_item.md5, or by using dbms_obfuscation_toolkit directly.
However, what the documentation fails to mention is this: OTN Apex discussion on MD5 checksums. Pipes are added in the generated checksums! Don't forget this, or it'll blow up in your face and you'll be left wondering for days what the hell is wrong with it.
Example:
select utl_raw.cast_to_raw(dbms_obfuscation_toolkit.md5(input_string=>
"COLUMN1" ||'|'||
"COLUMN2" ||'|'||
"COLUMN5" ||'|'||
"COLUMN7" ||'|'||
"COLUMN10" ||'|'||
"COLUMN12" ||'|'||
"COLUMN14" ||
'|||||||||||||||||||||||||||||||||||||||||||'
)) md5
from some_table
To get the checksum of a row of the some_table table, where columns 1,2,5,7,10,12,14 are editable!
In the end, this is how it should be structured:
1. Loop over the array.
2. Generate a checksum for the current value of the editable columns from the database.
3. Compare this checksum with the submitted checksum (apex_application.g_fcs if generated). If the checksums match, proceed with the update. If not, fail the process here.
4. Lock the correct records for updating. Specify nowait, and if locking fails, fail the process.
5. Update your view with the submitted values; your instead-of trigger will fire (a sketch follows below). Be sure you use the correct key values in your update statement so that only this one record is updated.
6. Don't commit in between. It's either all or nothing.
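To make step 5 concrete, here is a heavily simplified sketch of such a custom submit process. The f01/f02/f03 array positions and the column names are assumptions based on the example view, and the checksum comparison and locking steps are omitted:
begin
  for i in 1 .. apex_application.g_f01.count loop
    -- g_f01 holds the key column, g_f02/g_f03 the submitted editable values
    update vw_awkward_view
       set some_column2 = apex_application.g_f02(i),
           some_column3 = apex_application.g_f03(i)
     where some_column1 = apex_application.g_f01(i);  -- the instead-of trigger fires here
  end loop;
end;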
I almost feel like I went overboard, and it might feel like it is all a bit much, but when you know the pitfalls it's actually not so hard to pull this custom process off! It was very educational for me to play with it :p
The answer by Tom is a correct way of dealing with this issue, but I think it is overkill for your requirements, if I understand them correctly.
The easiest way may be to create a form on the table you want to edit, and have the report's edit link take the user to this form, which will only update the needed columns from that one table. If you need the value of the column from the other table displayed, it is simple to pass this value to the form when you create the link; the form can contain a display-only item to show it.
