I have an Oracle (10g) PL/SQL row-level trigger which is responsible for three independent tasks. As the trigger is rather cluttered that way, I want to extract these three tasks into three stored procedures.
I was thinking of using a my_table%ROWTYPE parameter or maybe a collection type for the procedures, but my main concern is how to fill these parameters.
Is there a way to put the whole :NEW row of a trigger into a single variable easily?
So far the only way I have found is assigning each field separately to the variable, which is not very satisfying with regard to code maintenance.
Something like
SELECT :NEW.* INTO <variable> FROM dual;
would be preferable. (I haven't actually tried that, but I suppose it wouldn't work.)
In the vast majority of cases, the only way to assign the new values in the row to a %ROWTYPE variable would be to explicitly assign each column. Something like
CREATE OR REPLACE TRIGGER some_trigger_name
  BEFORE INSERT OR UPDATE ON some_table
  FOR EACH ROW
DECLARE
  l_row some_table%rowtype;
BEGIN
  l_row.column1 := :NEW.column1;
  l_row.column2 := :NEW.column2;
  ...
  l_row.columnN := :NEW.columnN;
  procedure1( l_row );
  procedure2( l_row );
  procedure3( l_row );
END;
If your table happens to be declared based on an object, :NEW will be an object of that type. So if you have a table like
CREATE OR REPLACE TYPE obj_foo
  AS OBJECT (
    column1 NUMBER,
    column2 NUMBER,
    ...
    columnN NUMBER );

CREATE TABLE foo OF obj_foo;
then you could declare procedures that accept input parameters of type OBJ_FOO and call those directly from your trigger.
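A minimal sketch of what that could look like, assuming for brevity that OBJ_FOO has just the attributes column1 and column2, and that procedure1 is one of the three extracted procedures (the procedure name and its body are purely illustrative):

CREATE OR REPLACE PROCEDURE procedure1( p_row IN obj_foo )
AS
BEGIN
  -- illustrative body: just show which row was handed in
  dbms_output.put_line( 'procedure1 got column1 = ' || p_row.column1 );
END;
/

CREATE OR REPLACE TRIGGER trg_foo_tasks
  BEFORE INSERT OR UPDATE ON foo
  FOR EACH ROW
BEGIN
  -- build an OBJ_FOO from the :NEW values and pass the whole row along
  procedure1( obj_foo( :NEW.column1, :NEW.column2 ) );
END;
/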
The suggestion in the other thread about selecting the row from the table in an AFTER INSERT/UPDATE trigger, unfortunately, does not generally work: it will generally lead to a mutating table exception.
1 create table foo (
2 col1 number,
3 col2 number
4* )
SQL> /
Table created.
SQL> create procedure foo_proc( p_foo in foo%rowtype )
2 as
3 begin
4 dbms_output.put_line( 'In foo_proc' );
5 end;
6 /
Procedure created.
SQL> create or replace trigger trg_foo
2 after insert or update on foo
3 for each row
4 declare
5 l_row foo%rowtype;
6 begin
7 select *
8 into l_row
9 from foo
10 where col1 = :new.col1;
11 foo_proc( l_row );
12 end;
13 /
Trigger created.
SQL> insert into foo values( 1, 2 );
insert into foo values( 1, 2 )
*
ERROR at line 1:
ORA-04091: table SCOTT.FOO is mutating, trigger/function may not see it
ORA-06512: at "SCOTT.TRG_FOO", line 4
ORA-04088: error during execution of trigger 'SCOTT.TRG_FOO'
It's not possible that way.
Maybe my answer to another question can help.
Use SQL to generate the SQL:
select ' row_field.' || COLUMN_NAME || ' := :new.' || COLUMN_NAME || ';'
  from ALL_TAB_COLUMNS cols
 where cols.TABLE_NAME = 'YOURTABLENAME'   -- table names are stored in upper case in the dictionary
 order by cols.COLUMN_NAME;
Then copy and paste the output.
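For example, assuming a table with just two columns COL1 and COL2 (hypothetical names), the generated output would look like this, ready to paste into the trigger body:

 row_field.COL1 := :new.COL1;
 row_field.COL2 := :new.COL2;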
This is similar to Justin's solution, but a little bit shorter (no need to type the left-hand side of each assignment):
-- use instead of the assignments in Justin's example:
select :new.column1,
       :new.column2,
       ...
       :new.columnN
  into l_row
  from dual;
Related
I am facing an issue where, if I insert the same record from two different transactions at the same time (even within the same second) through JMeter, duplicate records appear in the table temp_tab, even though we have a trigger deployed to prevent duplicate records from being inserted into temp_tab. Due to a design limitation we cannot use constraints on this table.
Need your valuable suggestion on this issue.
Below is the trigger code
SELECT COUNT(1) INTO row_c
FROM temp_tab
WHERE offer_id = oiv_pkg.trig_tab(idx).offer_id
AND view_id != oiv_pkg.trig_tab(idx).view_id
AND offer_inst_id != oiv_pkg.trig_tab(idx).offer_inst_id
AND subscr_no = oiv_pkg.trig_tab(idx).subscr_no
AND subscr_no_resets = oiv_pkg.trig_tab(idx).subscr_no_resets
AND view_status IN (view_types.cPENDING, view_types.cCURRENT)
AND disconnect_reason IS NULL
AND ((oiv_pkg.trig_tab(idx).active_dt >= active_dt AND
(oiv_pkg.trig_tab(idx).active_dt < inactive_dt OR inactive_dt IS NULL)) OR
(oiv_pkg.trig_tab(idx).active_dt < active_dt AND
(oiv_pkg.trig_tab(idx).inactive_dt IS NULL OR
oiv_pkg.trig_tab(idx).inactive_dt > active_dt)));
IF row_c > 0 THEN
oiv_pkg.trig_tab.DELETE;
raise_application_error (-20001, '269901, TRIG: INSERT Failed: OID: ' || oiv_pkg.trig_tab(idx).offer_inst_id);
END IF;
If you really want to prevent duplicates without using the proper solution, a constraint, you'd need to implement some sort of locking mechanism. In this example, I'll create a table foo with a single column col1 and create a couple of triggers that ensure that the data in col1 is unique. In order to do this, I'm introducing a new table that exists just to have its single row locked, providing a serialization mechanism. Note that I'm only handling insert operations and ignoring updates that create duplicates. I'm also simplifying the problem by not bothering to track which rows are inserted in row-level triggers, which would make the final check more efficient. Of course, serializing insert operations on your table will absolutely crush your application's scalability.
SQL> create table foo( col1 number );
Table created.
SQL> create table make_application_slow(
2 dummy varchar2(1)
3 );
Table created.
SQL> insert into make_application_slow values( 'A' );
1 row created.
SQL> ed
Wrote file afiedt.buf
1 create or replace trigger trg_foo_before_stmt
2 before insert on foo
3 declare
4 l_dummy varchar2(1);
5 begin
6 -- Ensure that only one session can ever be inserting data
7 -- at any time. This is a great way to turn a beefy multi-core
8 -- server into a highly overloaded server with one effective
9 -- core.
10 select dummy
11 into l_dummy
12 from make_application_slow
13 for update;
14* end;
SQL> /
Trigger created.
SQL> create or replace trigger trg_foo_after_stmt
2 after insert on foo
3 declare
4 l_cnt pls_integer;
5 begin
6 select count(*)
7 into l_cnt
8 from( select col1, count(*)
9 from foo
10 group by col1
11 having count(*) > 1 );
12
13 if( l_cnt > 0 )
14 then
15 raise_application_error( -20001, 'Duplicate data in foo is not allowed.' );
16 end if;
17 end;
18 /
Now, if you try to insert data with the same col1 value in two different sessions, the second session will block indefinitely waiting for the first session to commit (or rollback). That prevents duplicates but it is generally hideously inefficient. And if there is any possibility that a user would be able to walk away from an active transaction, your DBA will curse you for building an application that forces them to constantly kill sessions when someone locks up the entire application because they went to lunch without committing their work.
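To make the blocking behaviour concrete, here is a sketch of what the two sessions would see (using the foo table and triggers created above):

-- Session 1
insert into foo values ( 1 );   -- takes the row lock on make_application_slow

-- Session 2
insert into foo values ( 1 );   -- blocks in the before-statement trigger,
                                -- waiting on the SELECT ... FOR UPDATE

-- Session 1
commit;                         -- releases the lock

-- Session 2 now resumes; its after-statement trigger sees the committed
-- duplicate plus its own row, so it raises ORA-20001 and the insert fails.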
I am using Toad. I have a declaration of a table in a package as follows:
TYPE MyRecordType IS RECORD
(ID MyTable.ID%TYPE
,FIELD1 MyTable.FIELD1%TYPE
,FIELD2 MyTable.FIELD2%TYPE
,FIELD3 MyTable.FIELD3%TYPE
,ANOTHERFIELD VARCHAR2(80)
);
TYPE MyTableType IS TABLE OF MyRecordType INDEX BY BINARY_INTEGER;
There is a procedure (let's say MyProcedure) that uses an object of this table type as input/output. I want to run the procedure and see the results (how the table is filled). So I was thinking I would select the results from the table:
declare
IO_table MyPackage.MyTableType;
begin
MyPackage.MyProcedure (IO_table
,parameter1
,parameter2
,parameter3);
select * from IO_table;
end;
I get the message:
Table or view does not exist (for IO_table). If I remove the select line, the procedure runs successfully, but I cannot see its results. How can I see the contents of IO_table after I call the procedure?
You cannot see the results of a PL/SQL table by using Select * from IO_table.
You will need to loop through the collection in the anonymous block.
Do something like the pseudo-code given below:
declare
  IO_table MyPackage.MyTableType;
  l_index  BINARY_INTEGER;
begin
  MyPackage.MyProcedure (IO_table
                        ,parameter1
                        ,parameter2
                        ,parameter3);

  l_index := IO_table.first;
  while l_index is not null
  loop
    dbms_output.put_line (IO_table(l_index).id);
    ...
    l_index := IO_table.next(l_index);
  end loop;
end;
You have to do it like this:
select * from TABLE(IO_table);
and, of course, you missed the INTO or BULK COLLECT INTO clause.
1) You cannot use associative arrays in a SELECT statement, just nested tables or varrays declared globally.
2) You should use the TABLE() expression in the SELECT statement.
3) You can't simply use SELECT in PL/SQL code - a cursor FOR LOOP, a REF CURSOR, BULK COLLECT INTO, or INTO must be used.
4) Last but not least - please study the manual:
http://docs.oracle.com/cd/B28359_01/appdev.111/b28371/adobjcol.htm#ADOBJ00204
Just an example:
SQL> create type t_obj as object( id int, name varchar2(10));
2 /
SQL> create type t_obj_tab as table of t_obj;
2 /
SQL> var rc refcursor
SQL> declare
2 t_var t_obj_tab := t_obj_tab();
3 begin
4 t_var.extend(2);
5 t_var(1) := t_obj(1,'A');
6 t_var(2) := t_obj(2,'B');
7 open :rc for select * from table(t_var);
8 end;
9 /
SQL> print rc
ID NAME
---------- ----------
1 A
2 B
I am trying to create a trigger with the following code.
CREATE OR REPLACE TRIGGER MYTABLE_TRG
BEFORE INSERT ON MYTABLE
FOR EACH ROW
BEGIN
select MYTABLE_SEQ.nextval into :new.id from dual;
END;
I am getting the error
Error(2,52): PLS-00049: bad bind variable 'NEW.ID'
Any ideas? Thanks.
It seems like the error code is telling you there's no such column ID in your table...
Somehow your environment is treating your code as SQL instead of a DDL statement. This works for me (running in sqlplus.exe from a command prompt):
SQL> create sequence mytable_seq;
Sequence created.
SQL> create table mytable (id number);
Table created.
SQL> CREATE OR REPLACE TRIGGER MYTABLE_TRG
2 BEFORE INSERT ON MYTABLE
3 FOR EACH ROW
4 BEGIN
5 select MYTABLE_SEQ.nextval into :new.id from dual;
6 END;
7 /
Trigger created.
Note the trailing "/" - this might be important in the application you are compiling this with.
If one used a proper naming convention, spotting this type of error would be much easier (where 'proper' means using pre- and postfixes for generic object names that hint at their purpose). Something like the following would have pointed at the correct answer:
--START -- CREATE A SEQUENCE
/*
create table "TBL_NAME" (
"TBL_NAME_ID" number(19,0) NOT NULL
, ...
*/
--------------------------------------------------------
-- drop the sequence if it exists
-- select * from user_sequences ;
--------------------------------------------------------
declare
c int;
begin
select count(*) into c from user_sequences
where SEQUENCE_NAME = upper('SEQ_TBL_NAME');
if c = 1 then
execute immediate 'DROP SEQUENCE SEQ_TBL_NAME';
end if;
end;
/
CREATE SEQUENCE "SEQ_TBL_NAME"
MINVALUE 1 MAXVALUE 999999999999999999999999999
INCREMENT BY 1 START WITH 1
CACHE 20 NOORDER NOCYCLE ;
-- CREATE
CREATE OR REPLACE TRIGGER "TRG_TBL_NAME"
BEFORE INSERT
ON "TBL_NAME"
REFERENCING NEW AS New OLD AS Old
FOR EACH ROW
DECLARE
tmpVar NUMBER;
BEGIN
tmpVar := 1 ;
SELECT SEQ_TBL_NAME.NEXTVAL INTO tmpVar FROM dual;
:NEW.TBL_NAME_ID := tmpVar;
END TRG_TBL_NAME;
/
ALTER TRIGGER "TRG_TBL_NAME" ENABLE;
-- STOP -- CREATE THE TRIGGER
If you're like me and your code should be working, try dropping the trigger explicitly before you re-create it. Stupid Oracle.
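In other words, something along these lines (using the trigger, table and sequence names from the question):

DROP TRIGGER mytable_trg;   -- raises an error if the trigger does not exist yet

CREATE OR REPLACE TRIGGER mytable_trg
BEFORE INSERT ON mytable
FOR EACH ROW
BEGIN
  select mytable_seq.nextval into :new.id from dual;
END;
/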
I have a table called TBL_CAS. In it, FLD_ID is an auto-increment column and there is another column called FLD_CAS_CODE. Now I need to add CAS- as a prefix to FLD_ID and insert the result into FLD_CAS_CODE, and I need to do this in a trigger. I tried the code below, but the data is not being inserted. What is the problem?
CREATE OR REPLACE TRIGGER TBL_CAS_TRG
BEFORE INSERT ON TBL_CAS
FOR EACH ROW
BEGIN
:NEW.FLD_CAS_CODE := TO_CHAR ('CAS')||'-'||:NEW.FLD_ID;
END;
I mean "cas-" + "fld_id" = "cas-fld_id".
You don't need to put TO_CHAR() around things which are already character datatypes. But you should cast the numeric identifier explicitly (rather than relying on implicit conversion):
:NEW.FLD_CAS_CODE := 'CAS-'||TRIM(TO_CHAR (:NEW.FLD_ID));
Which part isn't working, exactly? Your trigger seems to work just fine.
SQL> create table TBL_CAS( FLD_ID number, FLD_CAS_CODE varchar2(20));
Table created.
SQL> CREATE OR REPLACE TRIGGER TBL_CAS_TRG
2 BEFORE INSERT ON TBL_CAS
3 FOR EACH ROW
4 BEGIN
5 :NEW.FLD_CAS_CODE := TO_CHAR ('CAS')||'-'||:NEW.FLD_ID;
6 END;
7 /
Trigger created.
SQL> insert into TBL_CAS (fld_id) values (1001);
1 row created.
SQL> select * From TBL_CAS;
FLD_ID FLD_CAS_CODE
---------- --------------------
1001 CAS-1001
SQL>
This will also work fine:
CREATE OR REPLACE TRIGGER TBL_AREA_CODES_TRG
BEFORE INSERT ON TBL_AREA_CODES
FOR EACH ROW
BEGIN
:NEW.OBRM_AREA_CODE := :NEW.STATE_CODE ||'-'||:NEW.DIST_CODE ||'-'||:NEW.CITY_CODE ||'-'||:NEW.AREA_CODE ;
END;
I am trying to select data into a pl/sql associative array in one query. I know I can do this with a hardcoded key, but I wanted to see if there was some way I could reference another column (the key column) instead.
DECLARE
TYPE VarAssoc IS TABLE OF varchar2(2) INDEX BY varchar2(3);
vars VarAssoc;
BEGIN
SELECT foo, bar INTO vars(foo) FROM schema.table;
END;
I get an error saying foo must be declared when I do this. Is there some way to populate my associative array in a single query, or do I need to fall back on a FOR loop?
Just read your comment on APC's answer; it sounds like you figured this out on your own. But I figured I'd put the answer in anyway for future searchers.
This is simpler code, but does not have the speed advantage of using BULK COLLECT. Just loop through the rows returned by the query and set the elements in the associative array individually.
DECLARE
TYPE VarAssoc IS TABLE OF varchar2(200) INDEX BY varchar2(30);
vars VarAssoc;
BEGIN
FOR r IN (SELECT table_name,tablespace_name FROM user_tables) LOOP
vars(r.table_name) := r.tablespace_name;
END LOOP;
dbms_output.put_line( vars('JAVA$OPTIONS') );
END;
It would be neat if it were possible, but there isn't a straightforward way of achieving this.
What we can do is load the data into a regular PL/SQL collection and then load that into an associative array. Whether this is faster than just looping round the table is a matter of taste: it probably doesn't matter unless we're dealing with loads of data.
Given this test data ...
SQL> select * from t23
2 order by c1
3 /
C1 C2
-- ---
AA ABC
BB BED
CC CAR
DD DYE
EE EYE
ZZ ZOO
6 rows selected.
SQL>
...we can populate an associative array in two steps:
SQL> set serveroutput on
SQL>
SQL> declare
2 type varassoc is table of varchar2(3) index by varchar2(2);
3 vars varassoc;
4
5 type nt is table of t23%rowtype;
6 loc_nt nt;
7
8 begin
9 select * bulk collect into loc_nt from t23;
10 dbms_output.put_line('no of recs = '||sql%rowcount);
11
12 for i in loc_nt.first()..loc_nt.last()
13 loop
14 vars(loc_nt(i).c1) := loc_nt(i).c2;
15 end loop;
16
17 dbms_output.put_line('no of vars = '||vars.count());
18
19 dbms_output.put_line('ZZ = '||vars('ZZ'));
20
21 end;
22 /
no of recs = 6
no of vars = 6
ZZ = ZOO
PL/SQL procedure successfully completed.
SQL>
The real question is probably whether populating an associative array performs better than just selecting rows in the table. Certainly if you have 11g Enterprise edition you should consider result set caching instead.
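As a rough illustration of that last point (11g only, and purely a sketch using the t23 demo table above), a lookup function using the PL/SQL function result cache might look like this:

CREATE OR REPLACE FUNCTION get_c2( p_c1 IN t23.c1%TYPE )
  RETURN t23.c2%TYPE
  RESULT_CACHE RELIES_ON ( t23 )
AS
  l_c2 t23.c2%TYPE;
BEGIN
  -- the result is cached per p_c1 value and invalidated when t23 changes
  SELECT c2 INTO l_c2 FROM t23 WHERE c1 = p_c1;
  RETURN l_c2;
END;
/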
Are you absolutely married to associative arrays? I assume that you are doing this because you want to be able to do a lookup against the array using a character key.
If so, have you considered implementing this as a collection type instead?
e.g.
CREATE OR REPLACE TYPE VAR_ASSOC as OBJECT(
KEYID VARCHAR2(3),
DATAVAL VARCHAR2(2)
)
/
CREATE OR REPLACE TYPE VAR_ASSOC_TBL AS TABLE OF VAR_ASSOC
/
CREATE OR REPLACE PROCEDURE USE_VAR_ASSOC_TBL
AS
vars Var_Assoc_tbl;
-- other variables...
BEGIN
select cast ( multiset (
select foo as keyid,
bar as dataval
from schema.table
) as var_Assoc_tbl
)
into vars
from dual;
-- and later, when you want to do your lookups
select ot.newfoo
,myvars.dataval
,ot.otherval
into ....
from schema.other_Table ot
join table(vars) myvars
on ot.newfoo = myvars.keyid;
end;
/
This gives you the lookup by character key value and lets you do everything in bulk.