Trigger not handling transactions initiated in parallel through JMeter - Oracle

I am facing an issue where, through JMeter, if I try to insert the same record from two different transactions at the same time (even within the same second), duplicate records appear in the table temp_tab, even though we have a trigger deployed to prevent duplicate records from being inserted into temp_tab. Due to a design limitation we cannot use constraints on this table.
Need your valuable suggestions on this issue.
Below is the trigger code:
SELECT COUNT(1) INTO row_c
  FROM temp_tab
 WHERE offer_id = oiv_pkg.trig_tab(idx).offer_id
   AND view_id != oiv_pkg.trig_tab(idx).view_id
   AND offer_inst_id != oiv_pkg.trig_tab(idx).offer_inst_id
   AND subscr_no = oiv_pkg.trig_tab(idx).subscr_no
   AND subscr_no_resets = oiv_pkg.trig_tab(idx).subscr_no_resets
   AND view_status IN (view_types.cPENDING, view_types.cCURRENT)
   AND disconnect_reason IS NULL
   AND ((oiv_pkg.trig_tab(idx).active_dt >= active_dt AND
         (oiv_pkg.trig_tab(idx).active_dt < inactive_dt OR inactive_dt IS NULL)) OR
        (oiv_pkg.trig_tab(idx).active_dt < active_dt AND
         (oiv_pkg.trig_tab(idx).inactive_dt IS NULL OR
          oiv_pkg.trig_tab(idx).inactive_dt > active_dt)));
IF row_c > 0 THEN
    oiv_pkg.trig_tab.DELETE;
    raise_application_error(-20001, '269901, TRIG: INSERT Failed: OID: ' || oiv_pkg.trig_tab(idx).offer_inst_id);
END IF;

If you really want to prevent duplicates without using the proper solution, a constraint, you'd need to implement some sort of locking mechanism. In this example, I'll create a table foo with a single column col1 and create a couple of triggers that ensure that the data in col1 is unique. In order to do this, I'm introducing a new table that exists just to have its single row locked, to provide a serialization mechanism. Note that I'm only handling insert operations; I'm ignoring updates that create duplicates. I'm also simplifying the problem by not bothering to track which rows are inserted in the row-level triggers in order to make the final check more efficient. Of course, serializing insert operations on your table will absolutely crush your application's scalability.
SQL> create table foo( col1 number );
Table created.
SQL> create table make_application_slow(
2 dummy varchar2(1)
3 );
Table created.
SQL> insert into make_application_slow values( 'A' );
1 row created.
SQL> ed
Wrote file afiedt.buf
1 create or replace trigger trg_foo_before_stmt
2 before insert on foo
3 declare
4 l_dummy varchar2(1);
5 begin
6 -- Ensure that only one session can ever be inserting data
7 -- at any time. This is a great way to turn a beefy multi-core
8 -- server into a highly overloaded server with one effective
9 -- core.
10 select dummy
11 into l_dummy
12 from make_application_slow
13 for update;
14* end;
SQL> /
Trigger created.
SQL> create or replace trigger trg_foo_after_stmt
2 after insert on foo
3 declare
4 l_cnt pls_integer;
5 begin
6 select count(*)
7 into l_cnt
8 from( select col1, count(*)
9 from foo
10 group by col1
11 having count(*) > 1 );
12
13 if( l_cnt > 0 )
14 then
15 raise_application_error( -20001, 'Duplicate data in foo is not allowed.' );
16 end if;
17 end;
18 /
Now, if you try to insert data with the same col1 value in two different sessions, the second session will block indefinitely, waiting for the first session to commit (or roll back). That prevents duplicates but it is generally hideously inefficient. And if there is any possibility that a user could walk away from an active transaction, your DBA will curse you for building an application that forces them to constantly kill sessions when someone locks up the entire application because they went to lunch without committing their work.
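If, despite all that, you want to apply the same pattern to your temp_tab, a minimal sketch might look like the following. This is only an illustration: the lock table name is invented, and your existing duplicate check stays where it is.
create table temp_tab_lock( dummy varchar2(1) );
insert into temp_tab_lock values( 'A' );
commit;

create or replace trigger trg_temp_tab_before_stmt
before insert on temp_tab
declare
    l_dummy temp_tab_lock.dummy%type;
begin
    -- Only one session at a time can get past this point. A second session
    -- blocks here until the first commits or rolls back, so by the time its
    -- own duplicate check runs it can see the first session's committed rows.
    select dummy
      into l_dummy
      from temp_tab_lock
       for update;
end;
/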

Related

How to write triggers to enforce business rules?

I want to create triggers for practicing PL/SQL and I sort of got stuck with these two, which I'm sure are simple, but I can't get a hold of this code.
The first trigger forbids an employee from having a salary higher than 80% of their boss's salary (the code is incomplete because I don't know how to continue):
CREATE OR REPLACE TRIGGER MAX_SALARY
BEFORE INSERT ON EMP
FOR EACH ROW
P.BOSS EMP.JOB%TYPE := 'BOSS'
P.SALARY EMP.SAL%TYPE
BEGIN
SELECT SAL FROM EMP
WHERE
JOB != P.BOSS
...
And for the second one, there must not be less than two employees per department:
CREATE TRIGGER MIN_LIMIT
AFTER DELETE OR UPDATE EMPNO
EMPLOYEES NUMBER(2,0);
BEGIN
SELECT COUNT(EMPNO)INTO EMPLOYEES FROM EMP
WHERE DEPTNO = DEPT.DEPTNO;
IF EMPLOYEES < 2 THEN
DBMS_OUTPUT.PUT_LINE('There cannot be less than two employees per department');
END IF;
END;
I really don't know if I'm actually getting closer or further away from it altogether...
which I'm sure are simple
Actually these tasks are not simple for triggers. The business logic is simple, and the SQL to execute the business logic is simple, but implementing it in triggers is hard. To understand why you need to understand how triggers work.
Triggers fire as part of a transaction, which means they are applied to the outcome of a SQL statement such as an insert or an update. There are two types of triggers: row-level and statement-level triggers.
Row-level triggers fire once for every row in the result set. We can reference values in the current row, which is useful for evaluating row-level rules. But we cannot query or execute DML against the owning table: Oracle hurls an ORA-04091 mutating table exception, because such actions violate transactional integrity.
Statement-level triggers fire exactly once per statement. Consequently they are useful for enforcing table-level rules, but crucially they have no access to the result set, which means they don't know which records have been affected by the DML.
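For reference, the difference is just the FOR EACH ROW clause. A bare-bones sketch of each kind, assuming a hypothetical table t with a column col1:
-- Row-level trigger: fires once per affected row and can see :NEW / :OLD values.
create or replace trigger t_row_trg
before insert or update on t
for each row
begin
    dbms_output.put_line( 'row-level: new value = ' || :new.col1 );
end;
/

-- Statement-level trigger: fires once per statement and has no :NEW / :OLD.
create or replace trigger t_stmt_trg
after insert or update on t
begin
    dbms_output.put_line( 'statement-level: fired once for the whole statement' );
end;
/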
Both your business rules are table level rules, as they require the evaluation of more than one EMP record. So, can we enforce them through triggers? Let’s start with the second rule:
there must not be less than two employees per department
We could implement this with an AFTER statement trigger like this:
CREATE or replace TRIGGER MIN_LIMIT
AFTER DELETE OR UPDATE on EMP
declare
    EMPLOYEES pls_integer;
BEGIN
    for i in ( select * from dept ) loop
        SELECT COUNT(EMPNO) INTO EMPLOYEES
        FROM EMP
        where i.DEPTNO = EMP.DEPTNO;
        IF EMPLOYEES < 2 THEN
            raise_application_error(-20042, 'problem with dept #' || i.DEPTNO || '. There cannot be less than two employees per department');
        END IF;
    end loop;
END;
/
Note this trigger uses RAISE_APPLICATION_ERROR() instead of DBMS_OUTPUT.PUT_LINE(). Raising an actual exception is always the best approach: messages can be ignored but exceptions must be handled.
The problem with this approach is that it will fail any update or delete of any employee, because the classic SCOTT.DEPT table has a record DEPTNO=40 which has no child records in EMP. So maybe we can be cool with departments which have zero employees but not with those which have just one?
CREATE or replace TRIGGER MIN_LIMIT
AFTER DELETE OR UPDATE on EMP
declare
    EMPLOYEES pls_integer;
BEGIN
    for i in ( select deptno, count(*) as emp_cnt
                 from emp
                group by deptno having count(*) < 2 ) loop
        raise_application_error(-20042, 'problem with dept #' || i.DEPTNO || '. There cannot be less than two employees per department');
    end loop;
END;
/
This will enforce the rule. Unless of course somebody tries to insert one employee into department 40:
insert into emp
values( 2323, 'APC', 'DEVELOPER', 7839, sysdate, 4200, null, 40 )
/
We can commit this. It will succeed because our trigger doesn’t fire on insert. But some other user’s update will subsequently fail. Which is obviously bobbins. So we need to include INSERT in the trigger actions.
CREATE or replace TRIGGER MIN_LIMIT
AFTER INSERT or DELETE OR UPDATE on EMP
declare
    EMPLOYEES pls_integer;
BEGIN
    for i in ( select deptno, count(*) as emp_cnt
                 from emp
                group by deptno having count(*) < 2 ) loop
        raise_application_error(-20042, 'problem with dept #' || i.DEPTNO || '. There cannot be less than two employees per department');
    end loop;
END;
/
Unfortunately now we cannot insert one employee in department 40:
ORA-20042: problem with dept #40. There cannot be less than two employees per department
ORA-06512: at "APC.MIN_LIMIT", line 10
ORA-06512: at "SYS.DBMS_SQL", line 1721
We need to insert two employees in a single statement:
insert into emp
select 2323, 'APC', 'DEVELOPER', 7839, sysdate, 4200, null, 40 from dual union all
select 2324, 'ANGEL', 'DEVELOPER', 7839, sysdate, 4200, null, 40 from dual
/
Note that switching existing employees to a new department has the same limitation: we have to update at least two employees in the same statement.
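For example (using empnos from the classic SCOTT.EMP data), moving two employees in a single statement keeps the trigger happy, whereas moving them one at a time would not:
update emp
   set deptno = 40
 where empno in (7369, 7566)
/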
The other problem is that the trigger may perform badly, because we have to query the whole table after every statement. Perhaps we can do better? Yes. A compound trigger (Oracle 11g and later) allows us to track the affected records for use in a statement level AFTER trigger. Let’s see how we can use one to implement the first rule
No employee can have a salary higher than the 80% of their boss
Compound triggers are pretty neat. They allow us to share program constructs across all the events of the trigger. This means we can store values from the row-level events in a collection and use them to drive some SQL in the statement-level AFTER code.
So this trigger fires on three events. Before the SQL statement is processed we initialise a collection whose elements use the projection of the EMP table. The BEFORE EACH ROW code stashes the pertinent values from the current row, if the employee has a manager (obviously the rule cannot apply to President King, who has no boss). The AFTER STATEMENT code loops through the stashed values, looks up the salary of the pertinent manager and evaluates the employee's new salary against their boss's salary.
CREATE OR REPLACE TRIGGER MAX_SALARY
FOR INSERT OR UPDATE ON EMP
COMPOUND TRIGGER

    type emp_array is table of emp%rowtype index by pls_integer;
    emps_nt emp_array;
    v_idx   pls_integer := 0;

    BEFORE STATEMENT IS
    BEGIN
        -- start each statement with an empty collection
        emps_nt.delete;
    END BEFORE STATEMENT;

    BEFORE EACH ROW IS
    BEGIN
        -- only stash rows which have a manager, keeping the collection dense
        if :new.mgr is not null then
            v_idx := v_idx + 1;
            emps_nt(v_idx).empno := :new.empno;
            emps_nt(v_idx).mgr   := :new.mgr;
            emps_nt(v_idx).sal   := :new.sal;
        end if;
    END BEFORE EACH ROW;

    AFTER EACH ROW IS
    BEGIN
        null;
    END AFTER EACH ROW;

    AFTER STATEMENT IS
        mgr_sal emp.sal%type;
    BEGIN
        for i in 1 .. emps_nt.count() loop
            select sal into mgr_sal
              from emp
             where emp.empno = emps_nt(i).mgr;
            if emps_nt(i).sal > (mgr_sal * 0.8) then
                raise_application_error(-20024, 'salary of empno ' || emps_nt(i).empno || ' is too high!');
            end if;
        end loop;
    END AFTER STATEMENT;
END;
/
This code will check every employee if the update is universal, for instance when everybody gets a 20% pay rise...
update emp
set sal = sal * 1.2
/
But if we only update a subset of the EMP table it only checks the boss records it needs to:
update emp set sal = sal * 1.2
where deptno = 20
/
This makes it more efficient than the previous trigger. We could re-write trigger MIN_LIMIT as a compound trigger; that is left as an exercise for the reader :)
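For anyone who wants a starting point, here is one possible sketch of that rewrite. It is an illustration only, and it checks just the departments touched by the statement:
CREATE OR REPLACE TRIGGER MIN_LIMIT
FOR INSERT OR DELETE OR UPDATE ON EMP
COMPOUND TRIGGER

    type deptno_tab is table of emp.deptno%type index by pls_integer;
    affected_depts deptno_tab;

    BEFORE EACH ROW IS
    BEGIN
        -- remember every department touched by this statement
        if inserting or updating then
            affected_depts(affected_depts.count + 1) := :new.deptno;
        end if;
        if updating or deleting then
            affected_depts(affected_depts.count + 1) := :old.deptno;
        end if;
    END BEFORE EACH ROW;

    AFTER STATEMENT IS
        employees pls_integer;
    BEGIN
        for i in 1 .. affected_depts.count loop
            select count(*) into employees
              from emp
             where deptno = affected_depts(i);
            -- zero employees is still tolerated, exactly one is not
            if employees = 1 then
                raise_application_error(-20042, 'problem with dept #' || affected_depts(i) || '. There cannot be less than two employees per department');
            end if;
        end loop;
    END AFTER STATEMENT;
END;
/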
Note that each trigger fails as soon as a single violating row is found:
ORA-20024: salary of empno 7902 is too high!
ORA-06512: at "APC.MAX_SALARY", line 36
It would be possible to evaluate all affected rows, stash the violating row(s) in another collection then display all the rows in the collection. Another exercise for the reader.
Finally, note that having two triggers fire on the same event on the same table is not good practice. It's generally better (more efficient, easier to debug) to have one trigger which does everything.
An afterthought: what happens to Rule #1 if one session increases the salary of an employee whilst simultaneously another session decreases the salary of the boss? The trigger will pass both updates but we can end up with a violation of the rule. This is an inevitable consequence of the way triggers work with Oracle's read-committed transaction isolation. There is no way to avoid it except by employing a pessimistic locking strategy and pre-emptively locking all the rows which might be affected by a change. That may not scale and is definitely hard to implement using pure SQL: it needs stored procedures. This is another reason why triggers are not good for enforcing business rules.
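For illustration only, a pessimistic sketch in a stored procedure might look like this. It assumes salary changes go through the procedure and that the employee actually has a boss; it is not a complete solution:
create or replace procedure set_salary( p_empno   in emp.empno%type,
                                        p_new_sal in emp.sal%type )
as
    l_mgr_sal emp.sal%type;
begin
    -- lock both the employee row and the boss row, so a concurrent change
    -- to the boss's salary has to wait until this transaction ends
    select boss.sal
      into l_mgr_sal
      from emp e
      join emp boss on boss.empno = e.mgr
     where e.empno = p_empno
       for update;

    if p_new_sal > l_mgr_sal * 0.8 then
        raise_application_error(-20024, 'salary of empno ' || p_empno || ' is too high!');
    end if;

    update emp set sal = p_new_sal where empno = p_empno;
end set_salary;
/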
I'm using Oracle 10g
That is unfortunate. Oracle 10g has been obsolete for almost a decade now. Even 11g is deprecated. However, if you really have no option but to stick with 10g you have a couple of options.
The first is to grind through the whole table, doing the lookups of each boss for every employee. This is just about bearable for a toy table such as EMP but likely to be a performance disaster in real life.
The better option is to fake compound triggers using the same workaround we all used to apply: write a package. We rely on global variables - collections - to maintain state across calls to the packaged procedures, and have different triggers make those calls. Basically you need one procedure call for each trigger and one trigger for each step in the compound trigger. Justin Cave posted an example of how to do this on another question; it should be simple to translate my code above to his template.
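For reference, a bare-bones template of that workaround might look like the following (names are invented; treat it as a sketch of the pattern rather than tested code):
CREATE OR REPLACE PACKAGE max_salary_pkg AS
    TYPE emp_tab IS TABLE OF emp%ROWTYPE INDEX BY PLS_INTEGER;
    g_emps emp_tab;                  -- state shared by the three triggers below
    PROCEDURE check_salaries;
END max_salary_pkg;
/
CREATE OR REPLACE PACKAGE BODY max_salary_pkg AS
    PROCEDURE check_salaries IS
        mgr_sal emp.sal%TYPE;
    BEGIN
        FOR i IN 1 .. g_emps.COUNT LOOP
            SELECT sal INTO mgr_sal FROM emp WHERE empno = g_emps(i).mgr;
            IF g_emps(i).sal > mgr_sal * 0.8 THEN
                raise_application_error(-20024, 'salary of empno ' || g_emps(i).empno || ' is too high!');
            END IF;
        END LOOP;
        g_emps.DELETE;
    END check_salaries;
END max_salary_pkg;
/
-- one trigger per step of the compound trigger
CREATE OR REPLACE TRIGGER max_salary_before_stmt
BEFORE INSERT OR UPDATE ON emp
BEGIN
    max_salary_pkg.g_emps.DELETE;    -- reset the package state for each statement
END;
/
CREATE OR REPLACE TRIGGER max_salary_row
BEFORE INSERT OR UPDATE ON emp
FOR EACH ROW
DECLARE
    r emp%ROWTYPE;
BEGIN
    IF :new.mgr IS NOT NULL THEN
        r.empno := :new.empno;
        r.mgr   := :new.mgr;
        r.sal   := :new.sal;
        max_salary_pkg.g_emps( max_salary_pkg.g_emps.COUNT + 1 ) := r;
    END IF;
END;
/
CREATE OR REPLACE TRIGGER max_salary_after_stmt
AFTER INSERT OR UPDATE ON emp
BEGIN
    max_salary_pkg.check_salaries;
END;
/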
Please handle these kinds of validations/business logic at the application level or at the DB level using procedures/functions, rather than with triggers, which most of the time slow down the DML statements on which they are based.
If you handle the business logic at the application or procedure level, the DB server only has to execute the DML statements; it does not have to run the trigger as well. Executing a trigger is extra work (including any exception handling inside it), and the locks the DML statement takes on the table are held for longer while the trigger runs.

Insert all records bar those in a banned list

I have a BEFORE INSERT trigger on table1. If some data (ID) is not allowed, an application error is raised.
But when I use, for example, insert into table1 select id from table2 where id in (1,2,3) and only ID 3 is not allowed, the other IDs (1 and 2) are not inserted either.
How can I overcome this? The trigger code is similar to:
CREATE OR REPLACE TRIGGER t1_before_insert BEFORE INSERT
ON table1
FOR EACH ROW
DECLARE
    xx number(20);
BEGIN
    select id into xx from blocked_id where id = :new.id;
    if :new.id = xx then
        raise_application_error(-20001, '--');
    end if;
END;
Okay, two points. Firstly, you're risking a NO_DATA_FOUND exception with your SELECT INTO ...; raising this exception will kill your entire insert. Secondly, you're raising an exception yourself, which will also stop your entire insert.
You need to ignore those IDs that are in your blocked table rather than raise an exception. To follow your original idea, one method would be to utilise the NO_DATA_FOUND exception to only insert if nothing is found. I'd create a view on your table and define an INSTEAD OF trigger on this.
I would not use this method though (see below)
If we set up a test environment:
SQL> create table tmp_test ( id number );
Table created.
SQL> create table tmp_blocked ( id number );
Table created.
SQL> insert into tmp_blocked values (3);
1 row created.
Then you can use the following:
SQL> create or replace view v_tmp_test as select * from tmp_test;
View created.
SQL> create or replace trigger tr_test
2 instead of insert on v_tmp_test
3 for each row
4
5 declare
6
7 l_id tmp_test.id%type;
8
9 begin
10
11 select id into l_id
12 from tmp_blocked
13 where id = :new.id;
14
15 exception when no_data_found then
16 insert into tmp_test values (:new.id);
17 end;
18 /
Trigger created.
SQL> show error
No errors.
SQL> insert into v_tmp_test
2 select level
3 from dual
4 connect by level <= 3;
3 rows created.
SQL> select * from tmp_test;
ID
----------
1
2
As I said, I would not use triggers; a more efficient way of doing it would be to use MERGE. Using the same set-up as above.
SQL> merge into tmp_test o
2 using ( select a.id
3 from ( select level as id
4 from dual
5 connect by level <= 3 ) a
6 left outer join tmp_blocked b
7 on a.id = b.id
8 where b.id is null
9 ) n
10 on ( o.id = n.id )
11 when not matched then
12 insert values (n.id);
2 rows merged.
SQL>
SQL> select * from tmp_test;
ID
----------
1
2
An even easier alternative would be to just use a MINUS:
insert into tmp_test
select level
  from dual
connect by level <= 3
minus
select id
  from tmp_blocked
I really don't like the use of a trigger and an error in this way -- data integrity is not really what triggers are for. This seems to me to be a part of the application that should be included in the application code, perhaps in a procedure that acts as an API for inserts into the table.
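For illustration, a minimal sketch of such an API procedure, reusing the tmp_test and tmp_blocked tables from the set-up above (the procedure name is made up):
create or replace procedure insert_tmp_test( p_id in tmp_test.id%type )
as
begin
    -- silently skip ids that appear in the banned list
    insert into tmp_test ( id )
    select p_id
      from dual
     where not exists ( select null
                          from tmp_blocked b
                         where b.id = p_id );
end insert_tmp_test;
/
Callers then go through insert_tmp_test rather than straight at the table, and banned ids are simply skipped.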

Find out if a collection was populated by bulk collect

I created an oracle Object Type like this:
CREATE OR REPLACE TYPE DFBOWNER."RPT_WIRE_IMPORT_ROWTYPE" AS OBJECT
(
REC_VALUE_DATE DATE
)
/
And then a collection based on this type:
CREATE OR REPLACE TYPE DFBOWNER."RPT_WIRE_IMPORT_TABLETYPE" IS TABLE OF RPT_WIRE_IMPORT_RowType;
/
Now I populate the collection using Oracle's BULK COLLECT INTO syntax inside a procedure.
So now I want to test whether the collection actually got populated, and I am not sure how to do it.
I tried looking it up:
http://docs.oracle.com/cd/B28359_01/appdev.111/b28371/adobjcol.htm#autoId17 but I am not able to find what I need.
I also have another question. When the procedure bulk collects data into collections, does the data in the collection become permanent, as in a table? Or is it semi-permanent, i.e. does it only live for the session, as in a temp table?
I suspect you are looking for the COUNT method, i.e.
DECLARE
    l_local_collection dfbowner.rpt_wire_import_tabletype;
BEGIN
    SELECT dfbowner.rpt_wire_import_rowtype( sysdate + level )
      BULK COLLECT INTO l_local_collection
      FROM dual
    CONNECT BY level <= 10;

    dbms_output.put_line( 'l_local_collection contains ' ||
                          l_local_collection.count ||
                          ' elements.' );
END;
Like any local variable, l_local_collection will have the scope of the block in which it is declared. The data is stored in the PGA for the session. The data in a collection is not permanent.
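If you do need the data to outlive a single block, for the rest of the session, much as a temporary table's contents would, one hedged option is to declare the collection in a package specification instead of locally (the package name here is invented):
create or replace package rpt_wire_import_cache as
    -- lives in the session's PGA until the session ends or package state is reset;
    -- it is still private to the session, not shared like a real table
    g_rows dfbowner.rpt_wire_import_tabletype;
end rpt_wire_import_cache;
/
Your procedure can then BULK COLLECT INTO rpt_wire_import_cache.g_rows and other code in the same session can read it later.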
You can select from the local collection
SQL> create type some_object as object (
2 rec_value_date date
3 );
4 /
Type created.
SQL> create type some_coll
2 as table of some_object;
3 /
Type created.
SQL> ed
Wrote file afiedt.buf
1 declare
2 l_local_collection some_coll;
3 begin
4 select some_object( sysdate + numtodsinterval( level, 'day' ) )
5 bulk collect into l_local_collection
6 from dual
7 connect by level <= 10;
8 for x in (select * from table( l_local_collection ))
9 loop
10 dbms_output.put_line( x.rec_value_date );
11 end loop;
12* end;
SQL> /
20-AUG-12
21-AUG-12
22-AUG-12
23-AUG-12
24-AUG-12
25-AUG-12
26-AUG-12
27-AUG-12
28-AUG-12
29-AUG-12
PL/SQL procedure successfully completed.
but it generally doesn't make sense to go through the effort of pulling all the data from the SQL VM into the PL/SQL VM only to then pass all of the data back to the SQL VM in order to issue the SELECT statement. It would generally make more sense to just keep the data in SQL or to define a pipelined table function to return the data.
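For reference, a rough sketch of that pipelined alternative, reusing the some_object and some_coll types created above (a sketch, not a drop-in replacement for anything in the original procedure):
create or replace function next_ten_days
    return some_coll pipelined
as
begin
    for i in 1 .. 10 loop
        pipe row( some_object( sysdate + i ) );
    end loop;
    return;
end next_ten_days;
/
Callers stay in SQL and no intermediate PL/SQL collection has to be materialised:
select * from table( next_ten_days );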
If you merely want to iterate over the elements in the collection
SQL> ed
Wrote file afiedt.buf
1 declare
2 l_local_collection some_coll;
3 begin
4 select some_object( sysdate + numtodsinterval( level, 'day' ) )
5 bulk collect into l_local_collection
6 from dual
7 connect by level <= 10;
8 for i in 1 .. l_local_collection.count
9 loop
10 dbms_output.put_line( l_local_collection(i).rec_value_date );
11 end loop;
12* end;
SQL> /
20-AUG-12
21-AUG-12
22-AUG-12
23-AUG-12
24-AUG-12
25-AUG-12
26-AUG-12
27-AUG-12
28-AUG-12
29-AUG-12
PL/SQL procedure successfully completed.
It would make much more sense to iterate over the elements in the collection, which keeps everything in PL/SQL, than to SELECT from the collection, which forces all the data back into the SQL VM.

How to fire two Oracle insert statements together? Oracle 11g

I am trying to fire two insert statements at a time. Actually I have tried with the query below but it is inserting into only one table.
EXECUTE IMMEDIATE 'select * from abc.test where test_NAME = ''aaa''' BULK COLLECT INTO T_SC;
IF T_SC.count = 0 THEN
Insert into abc.test (test_ID,test_NAME,status)
VALUES(1,'aaa','a') BULK COLLECT INTO insert_cnt;
IF insert_cnt.count = 1 THEN
INSERT INTO abc.test1(test1_id,test1_NAME,test1_ALIAS,test_ID)
VALUES(1,'bbb','b',1);
COMMIT;
END IF;
It is only inserting into the abc.test1 table. What am I missing? If anyone knows, please help me with this.
This whole code of yours doesn't seem right:
why the dynamic SQL?
there are lots of syntax errors
an unclosed "if" (as vj shah commented)
a missing RETURNING keyword
why do you need the BULK COLLECT if you're returning only one row from the insert?
what is the second "if" for?
and so on...
Anyway, this code works:
EXECUTE IMMEDIATE 'select * from abc.test where test_NAME = ''aaa''' BULK COLLECT INTO T_SC;
/* BTW, why not
   select * bulk collect into T_SC from abc.test where test_NAME = 'aaa';
*/
IF T_SC.count = 0 THEN
    Insert into abc.test (test_ID, test_NAME, status)
    VALUES (1, 'aaa', 'a') returning test_ID, test_NAME, status BULK COLLECT INTO insert_cnt;
    IF insert_cnt.count = 1 THEN
        INSERT INTO abc.test1 (test1_id, test1_NAME, test1_ALIAS, test_ID)
        VALUES (1, 'bbb', 'b', 1);
    END IF;
    COMMIT;
END IF;
Can you explain your problem a little better? Neither your logic nor the data that you show gives any idea of what you are trying to accomplish (the logic behind the if).
This is also not functional code (too many syntax errors); can you update with the real code you are firing? Maybe just change the table names?
If you want to make sure both the statements either complete successfully or both of them are rolled back, your approach of including them in a block is correct.
SQL> create table test_rc_2(
2 id number
3 );
Table created.
--Sample 1 : submitting the inserts separately (only the failing statement is rolled back)
SQL> insert into test_rc_2 values (100);
1 row created.
SQL> insert into test_rc_2 values ('hello');
insert into test_rc_2 values ('hello')
*
ERROR at line 1:
ORA-01722: invalid number
SQL> commit;
Commit complete.
SQL> select * from test_rc_2;
ID
----------
100
--Case 2 : submitting them in a block.
SQL> truncate table test_rc_2
2 ;
Table truncated.
SQL> begin
2 insert into test_rc_2 values(100);
3 insert into test_rc_2 values('hello..');
4 end;
5 /
begin
*
ERROR at line 1:
ORA-01722: invalid number
ORA-06512: at line 3
SQL> commit;
Commit complete.
SQL> select * from test_rc_2;
no rows selected

Oracle PL/SQL: Forwarding whole row to procedure from a trigger

I have an Oracle 10g PL/SQL row-level trigger which is responsible for three independent tasks. As the trigger is relatively cluttered that way, I want to extract these three tasks into three stored procedures.
I was thinking of using a my_table%ROWTYPE parameter or maybe a collection type for the procedures, but my main concern is how to fill these parameters.
Is there a way to put the whole :NEW row of a trigger into a single variable easily?
So far the only way I could find was assigning each field separately to the variable, which is not quite satisfying from a code-maintenance point of view.
Something like
SELECT :NEW.* INTO <variable> FROM dual;
would be preferred. (I haven't tried that actually but I suppose it wouldn't work)
In the vast majority of cases, the only way to assign the new values in the row to a %ROWTYPE variable would be to explicitly assign each column. Something like
CREATE OR REPLACE TRIGGER some_trigger_name
BEFORE INSERT OR UPDATE ON some_table
FOR EACH ROW
DECLARE
l_row some_table%rowtype;
BEGIN
l_row.column1 := :NEW.column1;
l_row.column2 := :NEW.column2;
...
l_row.columnN := :NEW.columnN;
procedure1( l_row );
procedure2( l_row );
procedure3( l_row );
END;
If your table happens to be declared based on an object, :NEW will be an object of that type. So if you have a table like
CREATE OR REPLACE TYPE obj_foo
AS OBJECT (
column1 NUMBER,
column2 NUMBER,
...
columnN NUMBER );
CREATE TABLE foo OF obj_foo;
then you could declare procedures that accept input parameters of type OBJ_FOO and call those directly from your trigger.
The suggestion in the other thread about selecting the row from the table in an AFTER INSERT/UPDATE trigger, unfortunately, does not generally work. That will generally lead to a mutating table exception.
1 create table foo (
2 col1 number,
3 col2 number
4* )
SQL> /
Table created.
SQL> create procedure foo_proc( p_foo in foo%rowtype )
2 as
3 begin
4 dbms_output.put_line( 'In foo_proc' );
5 end;
6 /
Procedure created.
SQL> create or replace trigger trg_foo
2 after insert or update on foo
3 for each row
4 declare
5 l_row foo%rowtype;
6 begin
7 select *
8 into l_row
9 from foo
10 where col1 = :new.col1;
11 foo_proc( l_row );
12 end;
13 /
Trigger created.
SQL> insert into foo values( 1, 2 );
insert into foo values( 1, 2 )
*
ERROR at line 1:
ORA-04091: table SCOTT.FOO is mutating, trigger/function may not see it
ORA-06512: at "SCOTT.TRG_FOO", line 4
ORA-04088: error during execution of trigger 'SCOTT.TRG_FOO'
It's not possible that way.
Maybe my answer to another question can help.
Use SQL to generate the SQL:
select ' row_field.'||COLUMN_NAME||' := :new.'||COLUMN_NAME||';' from
ALL_TAB_COLUMNS cols
where
cols.TABLE_NAME = 'yourTableName'
order by cols.column_name
Then copy and paste the output into your trigger.
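For a hypothetical table with columns COL1 and COL2, the generated lines would look something like:
row_field.COL1 := :new.COL1;
row_field.COL2 := :new.COL2;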
This is similar to Justin's solution but a little bit shorter (no typing of the left-hand side of each assignment):
-- use instead of the assignments in Justin's example:
select :new.column1,
       :new.column2,
       ...
       :new.columnN
  into l_row
  from dual;
