Oracle SQL: PLS-00049 Bad Bind variable when selecting NEXTVAL from a sequence

So, I'm trying to use JDBC to access my Oracle DB, and I found out that, for the functions in JDBC to return results correctly, I need to make an iterator for my tables. So, after searching around and figuring out what that means, I came up with the following code snippet to get that done:
--create a sequence for use in the trigger
CREATE SEQUENCE accounts_seq;
--make the trigger on insert or update
CREATE OR REPLACE TRIGGER account_pk_trig
BEFORE INSERT OR UPDATE ON accounts
FOR EACH ROW
BEGIN
IF inserting THEN
SELECT : accounts_seq.NEXTVAL INTO : NEW.accountnumber FROM dual;
ELSE IF updating THEN
SELECT : OLD.accountnumber INTO : NEW.accountnumber FROM dual;
END IF;
END IF;
END;
/
And not only is Oracle SQL Developer putting the dreaded red underline of doom in the space after the semicolon that follows END, but also on the forward slash that ends the code block. As far as I can tell, this matches the Oracle examples of trigger definitions that I've seen, and I'm not sure whether it's because Oracle SQL Developer doesn't recognize NEXTVAL as a keyword, since it isn't highlighted like the others are.
After some fiddling around, I realized that the "ELSE IF" opened a new IF statement that I hadn't closed. But I'm still getting the Bad Bind variable error.
For those of you who would want to make sure that the "accountnumber" field exists in the table "accounts", here's my definition for the "accounts" table.
CREATE TABLE accounts (
accountnumber NUMBER NOT NULL,
routingnumber NUMBER NOT NULL,
acctype VARCHAR2(20),
balance NUMBER (*,2),
ownerid NUMBER,
CONSTRAINT accountnumber_pk PRIMARY KEY (accountnumber)
);

You have two major errors in your PL/SQL code:
First the select : is wrong. You can't just throw in a colon like that. The NEW and OLD records do need a colon, but without a space. :new, not : new.
To store the result of a query in a variable you need:
select accounts_seq.NEXTVAL
INTO :NEW.accountnumber
FROM dual;
But you don't need a SELECT for that, you can use a simple variable assignment:
:NEW.accountnumber := accounts_seq.NEXTVAL;
You also have two END IFs although you only have a single IF.
And, as documented in the manual, it needs to be ELSIF, not ELSE IF.
Putting all that together, your trigger should be:
CREATE OR REPLACE TRIGGER account_pk_trig
BEFORE INSERT OR UPDATE ON accounts
FOR EACH ROW
BEGIN
IF inserting THEN
:NEW.accountnumber := accounts_seq.NEXTVAL;
ELSIF updating THEN
:NEW.accountnumber := :OLD.accountnumber;
END IF;
END;
/
As the trigger is declared as BEFORE INSERT OR UPDATE, the ELSIF is actually unnecessary, because the trigger can only be inserting or updating, nothing else. So instead of ELSIF updating THEN you could simply write ELSE.
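That simplified version would look like this (same table, sequence and column names as above):
CREATE OR REPLACE TRIGGER account_pk_trig
BEFORE INSERT OR UPDATE ON accounts
FOR EACH ROW
BEGIN
  IF inserting THEN
    :NEW.accountnumber := accounts_seq.NEXTVAL;
  ELSE
    -- the trigger only fires on INSERT or UPDATE, so this branch is the UPDATE case
    :NEW.accountnumber := :OLD.accountnumber;
  END IF;
END;
/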

Related

Trigger on a table that uses Merge Insert Update as Incremental strategy

The trigger in question is on a table loaded by an ETL in ODI, but users also have the option to edit certain columns
if they want to adjust them. This is done using APEX.
The trigger is used to set two columns: Changed_by and Changed_on.
Both indicate changes made on the APEX page only.
The issue comes when the ODI load runs as a MERGE (insert/update): the trigger thinks it is updating and sets the two columns above to NULL, because the update was done by ODI and not on the APEX page.
Solution
For each editable column there could be logic that checks :NEW <> :OLD, but as I have 15 columns that means writing a lot of code.
Is there another way to achieve this?
create or replace TRIGGER DW.TRG BEFORE
UPDATE ON DW.TABLE
REFERENCING
NEW AS new
OLD AS old
FOR EACH ROW
BEGIN
IF updating THEN
SELECT
SYSDATE,
v('APP_USER')
INTO
:new.changed_on_dt,
:new.changed_by
FROM
dual;
END IF;
END;
Check if an APEX session exists for the current database session and only execute the assignments when that is the case.
create or replace TRIGGER DW.TRG BEFORE
UPDATE ON DW.TABLE
REFERENCING
NEW AS new
OLD AS old
FOR EACH ROW
BEGIN
IF SYS_CONTEXT('APEX$SESSION','APP_SESSION') IS NOT NULL AND updating THEN
:new.changed_on_dt := SYSDATE;
:new.changed_by := SYS_CONTEXT('APEX$SESSION','APP_USER');
END IF;
END;
Notes
Avoid the SELECT FROM DUAL; you can just assign the values in the trigger.
The "V" function is pretty slow. For a while now there have been SYS_CONTEXT attributes that store the APEX session and user data, and those are a lot faster than a call to the "V" function.
You could make it so that it never overwrites a non-null value with a null one:
IF v('APP_USER') IS NOT NULL
THEN
:new.changed_by := v('APP_USER');
:new.changed_on_dt := SYSDATE;
END IF;
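Combining the notes above, a minimal sketch that keeps the null protection but uses the faster SYS_CONTEXT call instead of the V function (same hypothetical DW.TABLE columns as in the question):
create or replace TRIGGER DW.TRG BEFORE
UPDATE ON DW.TABLE
FOR EACH ROW
BEGIN
  -- only stamp the row when an APEX user is present; never overwrite with NULL
  IF SYS_CONTEXT('APEX$SESSION', 'APP_USER') IS NOT NULL THEN
    :new.changed_by    := SYS_CONTEXT('APEX$SESSION', 'APP_USER');
    :new.changed_on_dt := SYSDATE;
  END IF;
END;
/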

Before-insert trigger gets 'too many rows' error

I have a trigger:
create or replace trigger trig
before insert on sistem
for each row
declare
v_orta number;
begin
SELECT v_orta INTO :new.orta_qiymet
FROM sistem;
v_orta:=(:new.riyaziyyat+:new.fizika)/2;
insert into sistem(orta_qiymet)
values(v_orta);
end trig;
When I insert a row:
insert into sistem(riyaziyyat,fizika) values(4,4)
I get the 'too many rows' error mentioned in the title.
Why am I getting that error?
This is fundamentally not understanding how triggers work. You can't generally select from the table the trigger is against, and a before-insert trigger shouldn't insert into the same table again, as that would just cause the trigger to fire again, infinitely (until Oracle notices and stops it). You aren't even currently using the v_orta value you're attempting to query.
I suspect you think the trigger runs instead of your original insert, and that really you want to set the orta_qiymet value in the newly inserted row automatically, based on the other two columns you have supplied. To do that you don't (and can't) select those values; instead you refer to the :NEW pseudorecord, as you are already doing, and set the third column's value in that same pseudorecord:
create or replace trigger trig
before insert on sistem
for each row
begin
:new.orta_qiymet := (:new.riyaziyyat + :new.fizika)/2;
end trig;
/
There is a lot of information in the documentation; this is similar to one of the examples.
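As a quick check, assuming the sistem table has the numeric columns riyaziyyat, fizika and orta_qiymet used in the question:
INSERT INTO sistem (riyaziyyat, fizika) VALUES (4, 4);

-- the trigger fills in orta_qiymet before the row is stored
SELECT riyaziyyat, fizika, orta_qiymet FROM sistem;
-- expected result: 4, 4, 4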

while creating a audit trigger throwing warning as Compilation error

I am trying to create an audit trigger, but it throws a compilation error.
Could you please help me create the trigger?
DROP TRIGGER DB.DAT_CAMPLE_REQ_Test;
CREATE OR REPLACE TRIGGER DB."DAT_CAMPLE_REQ_Test"
AFTER insert or update or delete on DAT_CAMPLE_REQ
FOR EACH ROW
declare
dmltype varchar2(6);
BEGIN
if deleting then
INSERT INTO h_dat_cample_req VALUES (
:Old.REQUEST_ID,
:Old.SAMPLE_ID,
:Old.CASSAY_ID,
:Old.CASCADE_ID,
:Old.STATUS_ID,
:Old.AUTHOR,
:Old.CRT_SAE,
:Old.SCREEN_SAE
);
else
if inserting then
dmltype := 'insert';
elsif updating then
dmltype := 'update';
end if;
INSERT INTO h_dat_cample_req VALUES
(
:New.REQUEST_ID,
:New.SAMPLE_ID,
:New.CASSAY_ID,
:New.CASCADE_ID,
:New.STATUS_ID,
:New.AUTHOR,
:New.CRT_SAE,
:New.SCREEN_SAE
);
end if;
END;
You haven't provided the exact error message nor the structure of the table h_dat_cample_req, so I'm afraid I'm going to have to guess.
I suspect the column names in your h_dat_cample_req are not in the order you expect, or there are other columns in the table that you haven't specified a value for in your INSERT statements.
You are using INSERT statements without listing the columns that each value should go in to. The problem with using this form of INSERT statement is that if the columns in the table aren't in exactly the order you think they are, or there are columns that have been added or removed, you'll get an error and it'll be difficult to track it down. Furthermore, if you don't get a compilation error there's still the chance that data will be inserted into the wrong columns. Naming the columns makes it clear which value goes in which column, makes it easier to identify columns that have been removed, and also means that you don't have to specify values for all of the columns in the table - any column not listed gets a NULL value.
I would strongly recommend always naming columns in INSERT statements. In other words, instead of writing
INSERT INTO some_table VALUES (value_1, value_2, ...);
write
INSERT INTO some_table (column_1, column_2, ...) VALUES (value_1, value_2, ...);
Incidentally, you're assigning a value to your variable dmltype but you're not using its value anywhere. This won't cause a compilation error, but it is a sign that your trigger might not be doing quite what you would expect it to. Perhaps your h_dat_cample_req table is a history table and has a column for the type of operation performed?
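To illustrate, here is a rough sketch of the trigger with the columns named explicitly and the dmltype value actually stored. The column list and the DML_TYPE history column are assumptions about your tables, not something taken from your post:
CREATE OR REPLACE TRIGGER DB."DAT_CAMPLE_REQ_Test"
AFTER INSERT OR UPDATE OR DELETE ON DAT_CAMPLE_REQ
FOR EACH ROW
DECLARE
  dmltype VARCHAR2(6);
BEGIN
  IF deleting THEN
    INSERT INTO h_dat_cample_req
      (request_id, sample_id, cassay_id, cascade_id,
       status_id, author, crt_sae, screen_sae, dml_type)   -- dml_type is hypothetical
    VALUES
      (:old.request_id, :old.sample_id, :old.cassay_id, :old.cascade_id,
       :old.status_id, :old.author, :old.crt_sae, :old.screen_sae, 'delete');
  ELSE
    dmltype := CASE WHEN inserting THEN 'insert' ELSE 'update' END;
    INSERT INTO h_dat_cample_req
      (request_id, sample_id, cassay_id, cascade_id,
       status_id, author, crt_sae, screen_sae, dml_type)
    VALUES
      (:new.request_id, :new.sample_id, :new.cassay_id, :new.cascade_id,
       :new.status_id, :new.author, :new.crt_sae, :new.screen_sae, dmltype);
  END IF;
END;
/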

PL/SQL Trigger - Dynamically reference :NEW or :OLD

Is it possible to dynamically reference the :NEW/:OLD pseudorecords, or copy them?
I'm writing an audit trigger for a very wide table, so I would like to avoid having separate triggers for insert/delete/update.
When updating/inserting I want to record the :NEW values in the audit table, when deleting I want to record the :OLD values.
create or replace trigger audit_tgr
before insert or update or delete on table_name
for each row
begin
if (inserting or updating) then
insert into audit_table (a,b,c) values(:new.a,:new.b,:new.c);
else
insert into audit_table (a,b,c) values(:old.a,:old.b,:old.c);
end if;
end;
You could try:
declare
l_deleting_ind varchar2(1) := case when DELETING then 'Y' end;
begin
insert into audit_table (col1, col2)
values
( CASE WHEN l_deleting_ind = 'Y' THEN :OLD.col1 ELSE :NEW.col1 END
, CASE WHEN l_deleting_ind = 'Y' THEN :OLD.col2 ELSE :NEW.col2 END
);
end;
I found that the variable was required - you can't access DELETING directly in the insert statement.
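For completeness, a sketch of that body inside a full trigger definition, assuming a hypothetical table_name with columns col1 and col2 and an audit_table with matching columns:
CREATE OR REPLACE TRIGGER audit_tgr
BEFORE INSERT OR UPDATE OR DELETE ON table_name
FOR EACH ROW
DECLARE
  l_deleting_ind VARCHAR2(1) := CASE WHEN deleting THEN 'Y' END;
BEGIN
  INSERT INTO audit_table (col1, col2)
  VALUES
    ( CASE WHEN l_deleting_ind = 'Y' THEN :old.col1 ELSE :new.col1 END
    , CASE WHEN l_deleting_ind = 'Y' THEN :old.col2 ELSE :new.col2 END
    );
END;
/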
WOW, You want to have only ONE insert in your trigger to avoid what?
"I have a single insert statement INSERT INTO HIST ( EMP_ID, NAME ) VALUES (:NEW.EMP_ID , :NEW.NAME ) ; when deleting though, I want to use :OLD , not not have a seperate insert statement for that. "
It's a wide table. So? It's not like there's no REPLACE in text editors; you're not going to write the insert again, just copy, paste, select, replace :NEW with :OLD.
Tony does have a solution, but I seriously doubt it performs better than two inserts would.
What's the big deal?
EDIT
the main thing I'm trying to avoid is having to manage 2 inserts when the table changes. – Matthew Watson
I battle this attitude all the time. Those who write Java or C++ or .Net have a built-in RBO... Do this, this is good. Don't do that, that's bad. They write code according to these rules and that's fine. The problem is when these rules are applied to databases. Databases don't behave the same way code does.
In the code world, having essentially the same code in two "places" is bad. We avoid it. One would abstract that code to a function and call it from the two places and thus avoid maintaining it twice, and possibly missing one, etc. We all know the drill.
In this case, while it's true that in the end I recommend two inserts, they are separated by an ELSE. You won't change one and forget the other one. IT'S Right There. It's not in a different package, or in some compiled code, or even somewhere else in the same trigger. They're right beside each other, there's an ELSE and the Insert is repeated with :NEW, instead of :OLD. Why am I so crazed about this? Does it really make a difference here? I know two inserts won't be worse than other ideas, and it could be better.
The real reason is being prepared for the times when it does matter. If you're avoiding two inserts just for the sake of maintenance, you're going to miss the times when this makes a HUGE difference.
INSERT INTO log
SELECT * FROM myTable
WHERE flag = 'TRUE'
ELSE -- IF condition omitted for clarity
INSERT INTO log
SELECT * FROM myTable
WHERE flag = 'FALSE'
Some, including Matthew, would say this is bad code, there are two inserts. I could easily replace 'TRUE' and 'FALSE' with a bind variable and flip it at will. And that's what most people would do. But if True is .1% of the values and 99.9% is False, you want two inserts, because you want two execution plans. One is better off with an index and the other an FTS. So, yes, you do have two Inserts to maintain. That's not always bad and in this case it's good and desirable.
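Spelled out with the omitted IF made explicit, the shape being described is simply this (some_condition is a hypothetical placeholder for whatever drives the choice):
IF some_condition THEN
  -- literal predicate: the optimizer can choose a plan suited to the rare 'TRUE' rows (e.g. use an index)
  INSERT INTO log
  SELECT * FROM myTable
  WHERE flag = 'TRUE';
ELSE
  -- separate statement, separate plan: a full scan may be better for the common 'FALSE' rows
  INSERT INTO log
  SELECT * FROM myTable
  WHERE flag = 'FALSE';
END IF;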
You can use a compound trigger and programmatically check whether it is an insert, update or delete.
Compound Triggers
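As a minimal sketch of that idea (11g or later, where compound triggers exist), using the same hypothetical table_name and audit_table as above:
CREATE OR REPLACE TRIGGER audit_compound_tgr
FOR INSERT OR UPDATE OR DELETE ON table_name
COMPOUND TRIGGER

  AFTER EACH ROW IS
  BEGIN
    IF deleting THEN
      INSERT INTO audit_table (col1, col2) VALUES (:old.col1, :old.col2);
    ELSE
      INSERT INTO audit_table (col1, col2) VALUES (:new.col1, :new.col2);
    END IF;
  END AFTER EACH ROW;

END audit_compound_tgr;
/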
Why don't you use Oracle's built in standard or fine-grained auditing?
Use a compound trigger, as others have suggested. Save the old or new values, as appropriate, to variables, and use the variables in your insert statement:
declare
v_col1 table_name.col1%type;
v_col2 table_name.col2%type;
begin
if deleting then
v_col1 := :old.col1;
v_col2 := :old.col2;
else
v_col1 := :new.col1;
v_col2 := :new.col2;
end if;
insert into audit_table(col1, col2)
values(v_col1, v_col2);
end;

ORA-04091: table [blah] is mutating, trigger/function may not see it

I recently started working on a large complex application, and I've just been assigned a bug due to this error:
ORA-04091: table SCMA.TBL1 is mutating, trigger/function may not see it
ORA-06512: at "SCMA.TRG_T1_TBL1_COL1", line 4
ORA-04088: error during execution of trigger 'SCMA.TRG_T1_TBL1_COL1'
The trigger in question looks like
create or replace TRIGGER TRG_T1_TBL1_COL1
BEFORE INSERT OR UPDATE OF t1_appnt_evnt_id ON TBL1
FOR EACH ROW
WHEN (NEW.t1_prnt_t1_pk is not null)
DECLARE
v_reassign_count number(20);
BEGIN
select count(t1_pk) INTO v_reassign_count from TBL1
where t1_appnt_evnt_id=:new.t1_appnt_evnt_id and t1_prnt_t1_pk is not null;
IF (v_reassign_count > 0) THEN
RAISE_APPLICATION_ERROR(-20013, 'Multiple reassignments not allowed');
END IF;
END;
The table has a primary key "t1_pk", an "appointment event id"
t1_appnt_evnt_id and another column "t1_prnt_t1_pk" which may or may
not contain another row's t1_pk.
It appears the trigger is trying to make sure that no other row with the same t1_appnt_evnt_id already has a referral to another row, if this row is referring to another row.
The comment on the bug report from the DBA says "remove the trigger, and perform the check in the code", but unfortunately they have a proprietary code generation framework layered on top of Hibernate, so I can't even figure out where it actually gets written out, so I'm hoping that there is a way to make this trigger work. Is there?
I think I disagree with your description of what the trigger is trying to
do. It looks to me like it is meant to enforce this business rule: For a
given value of t1_appnt_event, only one row can have a non-NULL value of
t1_prnt_t1_pk at a time. (It doesn't matter if they have the same value in the second column or not.)
Interestingly, it is defined for UPDATE OF t1_appnt_event but not for the other column, so I think someone could break the rule by updating the second column, unless there is a separate trigger for that column.
There might be a way you could create a function-based index that enforces this rule so you can get rid of the trigger entirely. I came up with one way but it requires some assumptions:
The table has a numeric primary key
The primary key and the t1_prnt_t1_pk are both always positive numbers
If these assumptions are true, you could create a function like this:
create or replace function f( a number, b number ) return number deterministic as
begin
if a is null then return 0-b; else return a; end if;
end;
/
and an index like this:
CREATE UNIQUE INDEX my_index ON my_table
( t1_appnt_event, f( t1_prnt_t1_pk, primary_key_column) );
So rows where the t1_prnt_t1_pk column is NULL would appear in the index with the inverse of the primary key as the second value, so they would never conflict with each other. Rows where it is not NULL would use the actual (positive) value of the column. The only way you could get a constraint violation would be if two rows had the same non-NULL values in both columns.
This is perhaps overly "clever", but it might help you get around your problem.
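A small illustration of how the function-based index behaves, using made-up values and the my_table / primary_key_column names from the index definition above:
-- first non-NULL t1_prnt_t1_pk for event 123: f(77, 1) = 77, index entry (123, 77)
INSERT INTO my_table (primary_key_column, t1_appnt_event, t1_prnt_t1_pk) VALUES (1, 123, 77);

-- NULL t1_prnt_t1_pk rows never conflict: f(NULL, 2) = -2, index entry (123, -2)
INSERT INTO my_table (primary_key_column, t1_appnt_event, t1_prnt_t1_pk) VALUES (2, 123, NULL);

-- same event and same non-NULL parent again: f(77, 3) = 77 -> ORA-00001 unique constraint violated
INSERT INTO my_table (primary_key_column, t1_appnt_event, t1_prnt_t1_pk) VALUES (3, 123, 77);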
Update from Paul Tomblin: I went with the updated version of the original idea that igor put in the comments:
CREATE UNIQUE INDEX cappec_ccip_uniq_idx
ON tbl1 (t1_appnt_event,
CASE WHEN t1_prnt_t1_pk IS NOT NULL THEN 1 ELSE t1_pk END);
I agree with Dave that the desired result probably can and should be achieved using built-in constraints such as unique indexes (or unique constraints).
If you really need to get around the mutating table error, the usual way to do it is to create a package which contains a package-scoped variable that is a table of something that can be used to identify the changed rows (I think ROWID is possible, otherwise you have to use the PK; I don't use Oracle currently so I can't test it). The FOR EACH ROW trigger then fills this variable with all rows that are modified by the statement, and then there is an AFTER statement trigger that reads the rows and validates them.
Something like (syntax is probably wrong, I haven't worked with Oracle for a few years)
CREATE OR REPLACE PACKAGE trigger_pkg AS
PROCEDURE before_stmt_trigger;
PROCEDURE for_each_row_trigger(p_rowid IN ROWID);
PROCEDURE after_stmt_trigger;
END trigger_pkg;
/
CREATE OR REPLACE PACKAGE BODY trigger_pkg AS
TYPE rowid_tbl IS TABLE OF ROWID INDEX BY PLS_INTEGER;
modified_rows rowid_tbl;

PROCEDURE before_stmt_trigger IS
BEGIN
-- reset the collection at the start of each triggering statement
modified_rows.DELETE;
END before_stmt_trigger;

PROCEDURE for_each_row_trigger(p_rowid IN ROWID) IS
BEGIN
modified_rows(modified_rows.COUNT + 1) := p_rowid;
END for_each_row_trigger;

PROCEDURE after_stmt_trigger IS
BEGIN
FOR i IN 1 .. modified_rows.COUNT LOOP
SELECT ... INTO ... FROM the_table WHERE rowid = modified_rows(i);
-- do whatever you want to
END LOOP;
END after_stmt_trigger;
END trigger_pkg;
/
CREATE OR REPLACE TRIGGER before_stmt_trigger BEFORE INSERT OR UPDATE ON mytable
BEGIN
trigger_pkg.before_stmt_trigger;
END;
/
CREATE OR REPLACE TRIGGER after_stmt_trigger AFTER INSERT OR UPDATE ON mytable
BEGIN
trigger_pkg.after_stmt_trigger;
END;
/
-- AFTER row trigger so that :new.rowid is already populated for inserted rows
CREATE OR REPLACE TRIGGER for_each_row_trigger
AFTER INSERT OR UPDATE ON mytable
FOR EACH ROW
WHEN (new.mycolumn IS NOT NULL)
BEGIN
trigger_pkg.for_each_row_trigger(:new.rowid);
END;
/
With any trigger-based (or application code-based) solution you need to
put in locking to prevent data corruption in a multi-user environment.
Even if your trigger worked, or was re-written to avoid the mutating table
issue, it would not prevent 2 users from simultaneously updating
t1_appnt_evnt_id to the same value on rows where t1_prnt_t1_pk is not
null. Assume there are currently no rows where t1_appnt_evnt_id=123 and
t1_prnt_t1_pk is not null:
Session 1> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =456;
/* OK, trigger sees count of 0 */
Session 2> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =789;
/* OK, trigger sees count of 0 because
session 1 hasn't committed yet */
Session 1> commit;
Session 2> commit;
You now have a corrupted database!
The way to avoid this (in trigger or application code) would be to lock
the parent row in the table referenced by t1_appnt_evnt_id=123 before performing the check:
select appe_id
into v_app_id
from parent_table
where appe_id = :new.t1_appnt_evnt_id
for update;
Now session 2's trigger must wait for session 1 to commit or rollback before it performs the check.
It would be much simpler and safer to implement Dave Costa's index!
Finally, I'm glad no one has suggested adding PRAGMA AUTONOMOUS_TRANSACTION to your trigger: this is often suggested on forums and works inasmuch as the mutating table issue goes away, but it makes the data integrity problem even worse! So just don't...
I had a similar error with Hibernate, and flushing the session by using
getHibernateTemplate().saveOrUpdate(o);
getHibernateTemplate().flush();
solved the problem for me. (I'm not posting my code block, as I was sure that everything was written properly and should work, but it did not until I added the flush() call above.) Maybe this can help someone.
