PL/SQL Dynamic SQL USING clause - oracle

I am working with an Oracle 11g database, release 11.2.0.3.0 - 64 bit production
I have several defined packages, procedures, functions and data types. After numerous intermediate calculations, largely done using collections, arrays and other data structures, I ultimately need to create a database table dynamically to output my final results. For the purpose of this question, I have the following:
TYPE ids_t IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
benefit_ids ids_t;
--Lots of other code which successfully populates benefit_ids.
--benefit_ids has several million rows, and is used successfully as
--the input to the following function:
FUNCTION find_max_ids(in_ids in ids_t)
RETURN ids_t
IS
str_sql varchar2(200);
return_ids ids_t;
BEGIN
str_sql := 'SELECT max(b.benefit_id)
FROM TABLE(:1) a
JOIN benefits b ON b.benefit_id = a.column_value
GROUP BY b.benefit_id';
EXECUTE IMMEDIATE str_sql BULK COLLECT INTO return_ids USING in_ids;
RETURN return_ids;
END;
The above works fine and clearly demonstrates that it is possible to pass an array as a parameter to a dynamic sql function or procedure.
However, when I try using EXECUTE IMMEDIATE and USING to create a database table as my final output I run into problems:
PROCEDURE create_output_table(in_ids in ids_t, in_tbl_nme in varchar2)
AUTHID CURRENT_USER
IS
str_sql varchar2(1000);
BEGIN
str_sql := 'CREATE TABLE Final_Results AS (
SELECT a.client_id, a.benefit_id
FROM ' || in_tbl_nme || ' a
LEFT JOIN TABLE(:1) b on b.column_value = a.benefit_id
WHERE b.column_value is NOT NULL)';
EXECUTE IMMEDIATE str_sql USING IN in_ids;
END;
Rather unhelpfully, the only error message I receive back is ORA-00933: SQL command not properly ended. However, I can't see anything wrong with the syntax per se, though I suspect the problem is with how I am applying EXECUTE IMMEDIATE in this instance.
Any advice would be gratefully received.

The code you've shown doesn't get ORA-00933, but it still isn't valid:
create type ids_t is table of number
/
create table test_table (client_id number, benefit_id number)
/
insert into test_table values (1, 1)
/
declare
str_sql varchar2(4000);
in_tbl_nme varchar2(30) := 'TEST_TABLE';
in_ids ids_t := ids_t(1, 2, 3);
begin
str_sql := 'CREATE TABLE Final_Results AS (
SELECT a.client_id, a.benefit_id
FROM ' || in_tbl_nme || ' a
LEFT JOIN TABLE(:1) b on b.column_value = a.benefit_id
WHERE b.column_value is NOT NULL)';
EXECUTE IMMEDIATE str_sql USING IN in_ids;
end;
/
Error report -
ORA-22905: cannot access rows from a non-nested table item
That error doesn't look right; let's cast it to see if it's happier, even though it shouldn't be necessary:
declare
str_sql varchar2(4000);
in_tbl_nme varchar2(30) := 'TEST_TABLE';
in_ids ids_t := ids_t(1, 2, 3);
begin
str_sql := 'CREATE TABLE Final_Results AS (
SELECT a.client_id, a.benefit_id
FROM ' || in_tbl_nme || ' a
LEFT JOIN TABLE(CAST(:1 AS ids_t)) b on b.column_value = a.benefit_id
WHERE b.column_value is NOT NULL)';
EXECUTE IMMEDIATE str_sql USING IN in_ids;
end;
/
Error report -
ORA-01027: bind variables not allowed for data definition operations
That error is described in this article.
So you need to create and populate the table in two steps:
declare
str_sql varchar2(4000);
in_tbl_nme varchar2(30) := 'TEST_TABLE';
in_ids ids_t := ids_t(1, 2, 3);
begin
str_sql := 'CREATE TABLE Final_Results AS
SELECT a.client_id, a.benefit_id
FROM ' || in_tbl_nme || ' a
WHERE 1=0'; -- or anything that always evaluates to false
EXECUTE IMMEDIATE str_sql;
str_sql := 'INSERT INTO Final_Results (client_id, benefit_id)
SELECT a.client_id, a.benefit_id
FROM ' || in_tbl_nme || ' a
LEFT JOIN TABLE(CAST(:1 AS ids_t)) b on b.column_value = a.benefit_id
WHERE b.column_value is NOT NULL';
EXECUTE IMMEDIATE str_sql USING IN in_ids;
end;
/
PL/SQL procedure successfully completed.
select * from final_results;
CLIENT_ID BENEFIT_ID
---------- ----------
1 1
Creating a table on the fly isn't generally a good idea; aside from schema management and maintainability considerations, you have to be sure that only one session is calling the procedure and that the table doesn't already exist. If you have a process that does this work, uses the results and then drops the table then you still have to be sure it cannot be run simultaneously, and can be restarted if it fails part way through.
If all the work is done in the same session then you could create a (permanent) global temporary table instead, as a one-off schema set-up task. The insert to populate it would still have to be dynamic as in_tbl_nme isn't known, but it would be a bit of an improvement. (I'm not sure why your query in find_max_ids is dynamic though, unless you're also creating benefits dynamically.) Or, depending on the amount of data involved, you could use another collection type instead of a table.
The data in a GTT is only visible to that session, and is destroyed when it ends. If that isn't appropriate then a normal table could be created once, which would be better than creating/dropping it dynamically. You still need to prevent multiple sessions running the process simultaneously in that case though, as they might not see the data they expect.
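For illustration, here is a minimal sketch of the GTT variant, reusing the names from the example above (the column list and the ON COMMIT clause are assumptions - adjust them to your real output):
create global temporary table final_results (
client_id number,
benefit_id number
) on commit preserve rows
/
declare
str_sql varchar2(4000);
in_tbl_nme varchar2(30) := 'TEST_TABLE';
in_ids ids_t := ids_t(1, 2, 3);
begin
-- only the INSERT has to be dynamic now; the table itself already exists
str_sql := 'INSERT INTO final_results (client_id, benefit_id)
SELECT a.client_id, a.benefit_id
FROM ' || in_tbl_nme || ' a
LEFT JOIN TABLE(CAST(:1 AS ids_t)) b on b.column_value = a.benefit_id
WHERE b.column_value is NOT NULL';
EXECUTE IMMEDIATE str_sql USING IN in_ids;
end;
/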

Related

Referencing Schema Name as Variable in Oracle Procedure

Is there a way to set the schema name as a variable in an Oracle procedure?
create or replace procedure test is
v_schema varchar2(30);
begin
insert into v_schema.tab_a ( a, b)
select a, b from xyz;
end;
/
Thanks
You'd need to resort to dynamic SQL:
create or replace procedure test
is
v_schema varchar2(30);
v_sql varchar2(1000);
begin
v_sql := 'insert into ' || v_schema || '.tab_a( a, b ) ' ||
'select a, b from xyz';
dbms_output.put_line( 'About to execute the statement ' || v_sql );
execute immediate v_sql;
end;
A couple of points
You almost certainly want to build the SQL statement in a local variable that you can print out and/or log before executing it. Otherwise, when there are syntax errors, you're going to have a much harder time debugging.
You almost never want to resort to dynamic SQL in the first place. The fact that you have a procedure where you know you want to insert all the rows from xyz into a table named tab_a but you don't know which schema that table is in is a red flag. That's unusual and often indicates a problem with your design. Very, very occasionally dynamic SQL is a wonderful tool when you need extra flexibility. But more often than not when you're thinking about a problem and dynamic SQL is the answer you want to reconsider the problem.

Trying to use a FORALL to insert data dynamically to a table specified to the procedure

I need to dynamically know the name of the table, which has the same data structure as many others, and I can pass in a generic associative array of that same structure. Here is the proc:
PROCEDURE INSRT_INTER_TBL(P_TABLE_NAME IN VARCHAR2, P_DATA IN tt_type)
IS
BEGIN
FORALL i IN P_DATA.FIRST .. P_DATA.LAST
EXECUTE IMMEDIATE
'INSERT INTO ' || P_TABLE_NAME ||
' VALUES :1'
USING P_DATA(i);
END INSRT_INTER_TBL;
I am getting the following error
ORA-01006: bind variable does not exist
What am I missing here?
So I had to spell out all of the columns needed for the insert in the statement itself, like:
PROCEDURE INSRT_INTER_TBL(P_TABLE_NAME IN VARCHAR2, P_DATA IN inter_invc_ln_item_type)
IS
BEGIN
FORALL i IN P_DATA.FIRST .. P_DATA.LAST
EXECUTE IMMEDIATE
'INSERT INTO ' || P_TABLE_NAME || ' (ITEM_PK, pk, units, amt) ' ||
' VALUES (:P_INVC_LN_ITEM_PK, :PK, :UNITS, :AMT)'
USING IN P_DATA(i).item_pk, P_DATA(i).pk, P_DATA(i).units, P_DATA(i).amt;
END INSRT_INTER_TBL;
The TABLE operator works better than a FORALL here. It uses less code and probably skips some SQL-to-PL/SQL context switches.
--Simple record and table type.
create or replace type tt_rec is object
(
a number,
b number
);
create or replace type tt_type is table of tt_rec;
--Sample schema that will hold results.
create table test1(a number, b number);
--PL/SQL block that inserts TT_TYPE into a table.
declare
p_table_name varchar2(100) := 'test1';
p_data tt_type := tt_type(tt_rec(1,1), tt_rec(2,2));
begin
execute immediate
'
insert into '||p_table_name||'
select * from table(:p_data)
'
using p_data;
commit;
end;
/
You can run the above code in this SQL Fiddle.
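A quick sanity check of the result (the two rows follow directly from the sample data used above):
select * from test1;
A B
---------- ----------
1 1
2 2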
Try VALUES (:1), i.e. put parentheses around :1.

Fetching the record one by one dynamically Oracle

I am creating a dynamic procedure which could accept 2 table names. It should fetch the records from one table and, after a certain number of records (let's say 100), issue a commit.
Both tabName and temp_tabName will always be identical. Since I have billions of records in the first table, I am committing after every 10000 records in order to avoid undo tablespace problems.
What I have done till now is:
CREATE OR REPLACE PROCEDURE MyProdecure (
tabName IN USER_TABLES.table_name%TYPE,
temp_tabName IN USER_TABLES.table_name%TYPE
)
IS
v_sql VARCHAR2 (100) := 'select * from ' || tabName;
TEMP_CURSOR SYS_REFCURSOR;
COUNT NUMBER (6) := 0;
BEGIN
OPEN TEMP_CURSOR FOR v_sql;
LOOP
FETCH TEMP_CURSOR INTO V_ROW;
--=================================================================================
/*
* I need the code here to fetch the 100 record from TEMP_CURSOR into a Variable
* and insert into the second table. or one record increment the count and if
* count>= 100 commit
*What would be the data type of V_ROW. How to fetch the data from V_ROW and complete the insert into command.
*/
--================================================================================
EXIT WHEN TEMP_CURSOR%NOTFOUND;
END LOOP;
CLOSE TEMP_CURSOR;
END MyProdecure;
There is no way to define V_ROW in such a way as to make your PL/SQL block work correctly for an input table whose name and structure are not known until runtime.
To make your approach work, you would need to use DBMS_SQL.
Have you considered a variation of the following, to bypass the vast majority of the UNDO generation?
CREATE OR REPLACE PROCEDURE MyProcedure (
tabName IN USER_TABLES.table_name%TYPE,
temp_tabName IN USER_TABLES.table_name%TYPE
)
IS
l_log_io NUMBER;
C_BLOCK_SIZE NUMBER := 8192; -- assuming 8192 byte block size
l_undo_bytes NUMBER;
BEGIN
EXECUTE IMMEDIATE 'INSERT /*+ APPEND */ INTO ' || temp_tabName ||
' SELECT * FROM ' || tabName;
select t.log_io, t.used_ublk*C_BLOCK_SIZE undo_bytes
into l_log_io, l_undo_bytes
from v$transaction t
where t.addr = ( SELECT s.taddr FROM v$session s WHERE s.sid = USERENV('SID'));
dbms_output.put_line('Undo bytes used: ' || l_undo_bytes);
END;
INSERT /*+ APPEND */ comes with a number of caveats that you should look into before using it, but it could be a much simpler way of accomplishing your goal.
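As one concrete example of those caveats (a sketch with placeholder table names): after a direct-path insert the same session cannot query the table again until it commits, so the commit has to come before any follow-up work:
begin
execute immediate 'insert /*+ append */ into temp_tab select * from src_tab';
-- a SELECT against temp_tab here would raise ORA-12838
-- (cannot read/modify an object after modifying it in parallel)
commit; -- once committed, the table is readable again in this session
end;
/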

Oracle cursor with variable columns/tables/criteria

I need to open a cursor while the table name, columns and where clause are varying. The table name etc. will be passed as parameters. For example:
CURSOR batch_cur
IS
SELECT a.col_1, b.col_1
FROM table_1 a inner join table_2 b
ON a.col_2 = b.col_2
WHERE a.col_3 = 123
Here, the projected columns, table names, join criteria and where clause will be passed as parameters. Once opened, I need to loop through and process each fetched record.
You need to use dynamic SQL, something like this:
procedure dynamic_proc
( p_table_1 varchar2
, p_table_2 varchar2
, p_value number
)
is
batch_cur sys_refcursor;
begin
open batch_cur for
'select a.col_1, b.col_1
from ' || p_table_1 || ' a inner join ' || p_table_2 || ' b
on a.col_2 = b.col_2
where a.col_3 = :bind_value1'
using p_value;
-- Now fetch data from batch_cur...
end;
Note the use of a bind variable for the data value - very important if you will re-use this many times with different values.
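To finish the example, a minimal sketch of consuming the cursor (the local variable names, the sample bind value and the assumption that the selected columns are numbers are all illustrative):
declare
batch_cur sys_refcursor;
p_table_1 varchar2(30) := 'TABLE_1';
p_table_2 varchar2(30) := 'TABLE_2';
l_col_a number;
l_col_b number;
begin
open batch_cur for
'select a.col_1, b.col_1
from ' || p_table_1 || ' a inner join ' || p_table_2 || ' b
on a.col_2 = b.col_2
where a.col_3 = :bind_value1'
using 123;
loop
fetch batch_cur into l_col_a, l_col_b;
exit when batch_cur%notfound;
-- process l_col_a and l_col_b here
end loop;
close batch_cur;
end;
/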
From your question I guess you need a dynamic cursor. Oracle provides REF CURSOR for dynamic SQL statements. Since your query will be built dynamically, you need a ref cursor to do that.
create procedure SP_REF_CHECK(v_col1 number,v_col2 date,v_tab1 number,v_var1 char,v_var2 varchar2)
is
TYPE Ref_cur IS REF CURSOR;
My_cur Ref_cur;
My_type Table_name%rowtype;
stmt varchar2(500);
begin
stmt:='select :1,:2 from :3 where :4=:5';
open My_cur for stmt using v_col1,v_col2,v_tab1,v_var1,v_var2;
loop
fetch My_cur into My_type;
-- do some processing
exit when My_cur%notfound;
end loop;
close My_cur;
end;
Check this link for more http://docs.oracle.com/cd/B10500_01/appdev.920/a96624/11_dynam.htm

Can I see the DML inside an Oracle trigger?

Is it possible to see the DML (SQL Statement) that is being run that caused a trigger to be executed?
For example, inside an INSERT trigger I would like to get this:
"insert into myTable (name) values ('Fred')"
I read about ora_sql_txt(sql_text) in articles such as this but couldn't get it working - not sure if that is even leading me down the right path?
We are using Oracle 10.
Thank you in advance.
=========================
[EDITED] MORE DETAIL: We have the need to replicate an existing database (DB1) into a classified database (DB2) that is not accessible via the network. I need to keep these databases in sync. This is a one-way sync from (DB1) to (DB2), since (DB2) will contain additional tables and data that is not contained in the (DB1) system.
I have to determine a way to sync these databases without bringing them down (say, for a backup and restore) because it needs to stay live. So I thought that if I can store the actual DML being run (when data changes), I could "play-back" the DML on the new database to update it, just like someone was hand-entering it back in.
I can't bring over all the data because of the sheer size of it, and I can't just copy over the changed records because of FK constraints and the order in which I insert/update records. I figured that if I could "play-back" a log of what happened, using the exact SQL that changed the master, I could keep the databases in sync.
My current plan of attack was to keep a log of all records that were changed, inserted, and deleted and, when I want to sync, have the system generate DML to insert/update/delete those records. Then I just take the .SQL file to the classified system and run the script. The problem I'm running into is FKs. (Because when I generate the DML I only know what the current state of the data is, not its path to get there - so ordering of statements is an issue). I guess I could disable all FK's, do the merge, then re-enable all FK's...
So - does my approach of storing the actual DML as-it-happens suck pondwater, or is there a better solution???
"does my approach of storing the actual DML as-it-happens suck pondwater?" Yes..
Strict ordering of the DML on your DB1 does not really exist. Multiple processes, muiltiple cores, things essentially happening at the essentially the same time.
And the DML, even when it happens sequentially doesn't act like it. Say the following two update statements run in seperate processes with seperate transactions, where the update in transaction 2 starts before transaction 1 commits:
update table_a set col_a = 10 where col_b = 'A' -- transaction 1
update table_a set col_c = 'Error' where col_a = 10 -- transaction 2
Since the changes made in the first transaction are not visible to the second transaction, the rows changed by the second transaction will not include those of the first. But if you manage to capture the DML and replay it sequentially, transaction 1's changes will be visible, so transaction 2's changes will be different. (See pages 40 and 41 of Tom Kyte's Expert Oracle Database Architecture Second Edition.)
Hopefully you are using bind variables, so the DML by itself wouldn't be meaningful: update table_a set col_a = :col_a where id = :id. Now what? OK, so you want the DML with its variable bindings.
Do you use sequences? If so, the next_val will not stay in sync between DB1 and DB2. (For example, instance failures can cause lost values - are both systems going to fail at the same time?) And if you are dealing with RAC, where the next_val varies depending on node, forget it.
I would start by investigating Oracle's replication.
I had a situation where I needed to move metadata/configuration changes (stored in a handful of tables) from a development environment to a production environment once tested. Something like GoldenGate is the product to use for this, but it can be costly and complicated to set up and administer.
The following procedure generates a trigger and attaches it to a table that needs the DML saved. The trigger re-creates the DML and in the following case saves it to an audit table - it's up to you what you do with it. You can use the statements saved to the audit table to replay changes from a given point in time (cut and paste or develop a procedure to apply them to the target).
Hope you find this useful.
procedure gen_trigger( p_tname in varchar2 )
is
l_theCursor integer default dbms_sql.open_cursor;
l_query varchar2(1000) default 'select * from ' || p_tname;
l_colCnt number := 0;
l_descTbl dbms_sql.desc_tab;
trg varchar(32767) := null;
expr varchar(32767) := null;
cmd varchar(32767) := null;
begin
dbms_sql.parse( l_theCursor, l_query, dbms_sql.native );
dbms_sql.describe_columns( l_theCursor, l_colCnt, l_descTbl );
trg := q'#
create or replace trigger <%TABLE_NAME%>_audit
after insert or update or delete on <%TABLE_NAME%> for each row
declare
qs varchar2(20) := q'[q'^]';
qe varchar2(20) := q'[^']';
command clob;
nlsd varchar2(100);
begin
select value into nlsd from nls_session_parameters where parameter = 'NLS_DATE_FORMAT';
execute immediate 'alter session set nls_date_format = ''YYYY/MM/DD hh24:mi:ss'' ';
if inserting then
command := <%INSERT_COMMAND%>;
end if;
if updating then
command := <%UPDATE_COMMAND%>;
end if;
if deleting then
command := <%DELETE_COMMAND%>;
end if;
insert into x_audit values (systimestamp, command);
execute immediate q'+alter session set nls_date_format = '+'|| nlsd || q'+'+';
end;
#';
-- Create the insert command
cmd := q'#'insert into <%TABLE_NAME%> (<%INSERT_COLS%>) values ('||<%INSERT_VAL%>||')'#';
-- columns clause
for i in 1 .. l_colCnt loop
if expr is not null then
expr := expr || ',';
end if;
expr := expr || l_descTbl(i).col_name;
end loop;
cmd := replace(cmd,'<%INSERT_COLS%>',expr);
-- values clause
expr := null;
for i in 1 .. l_colCnt loop
if expr is not null then
expr := expr || q'#||','||#';
end if;
expr := expr || 'qs||:new.' || l_descTbl(i).col_name || '||qe';
end loop;
cmd := replace(cmd,'<%INSERT_VAL%>',expr);
trg := replace(trg,'<%INSERT_COMMAND%>',cmd);
-- create the update command
-- set clause
expr := null;
cmd := q'#'update <%TABLE_NAME%> set '||<%UPDATE_COLS%>||' where '||<%WHERE_CLAUSE%>#';
for i in 1 .. l_colCnt loop
if expr is not null then
expr := expr || q'#||','||#';
end if;
expr := expr || q'#'#' || l_descTbl(i).col_name || q'# = '||#'|| 'qs||:new.'||l_descTbl(i).col_name || '||qe';
end loop;
null;
cmd := replace(cmd,'<%UPDATE_COLS%>',expr);
trg := replace(trg,'<%UPDATE_COMMAND%>',cmd);
-- create the delete command
expr := null;
cmd := q'#'delete <%TABLE_NAME%> where '||<%WHERE_CLAUSE%>#';
trg := replace(trg,'<%DELETE_COMMAND%>',cmd);
-- where clause using primary key columns (used by update and delete)
expr := null;
for pk in (SELECT column_name FROM all_cons_columns WHERE constraint_name = (
SELECT constraint_name FROM user_constraints
WHERE UPPER(table_name) = UPPER(p_tname) AND CONSTRAINT_TYPE = 'P'
)) loop
if expr is not null then
expr := expr || q'#|| ' and '||#';
end if;
expr := expr || q'#'#' || pk.column_name || q'# = '||#'|| 'qs||:old.'|| pk.column_name || '||qe';
end loop;
if expr is null then -- must have a primary key
raise_application_error(-20000,'The table must have a primary key defined');
end if;
trg := replace(trg,'<%WHERE_CLAUSE%>',expr);
trg := replace(trg,'<%TABLE_NAME%>',p_tname);
execute immediate trg;
null;
exception
when others then
execute immediate 'alter session set nls_date_format=''YYYY/MM/DD'' ';
raise;
end;
/* Example
create table t1 (
col1 varchar2(100),
col2 number,
col3 date,
constraint pk_t1 primary key (col1)
)
/
BEGIN
GEN_TRIGGER('T1');
END;
/
-- Trigger generated ....
create or replace trigger t1_audit after
insert or
update or
delete on t1 for each row
declare
qs varchar2(20) := q'[q'^]';
qe varchar2(20) := q'[^']';
command clob;
nlsd varchar2(100);
begin
select value into nlsd from nls_session_parameters where parameter = 'NLS_DATE_FORMAT';
execute immediate 'alter session set nls_date_format = ''YYYY/MM/DD hh24:mi:ss'' ';
if inserting then
command := 'insert into T1 (COL1,COL2,COL3) values ('||qs||:new.col1||qe||','||qs||:new.col2||qe||','||qs||:new.col3||qe||')';
end if;
if updating then
command := 'update T1 set '||'COL1 = '||qs||:new.col1||qe||','||'COL2 = '||qs||:new.col2||qe||','||'COL3 = '||qs||:new.col3||qe||' where '||'COL1 = '||qs||:old.col1||qe;
end if;
if deleting then
command := 'delete T1 where '||'COL1 = '||qs||:old.col1||qe;
end if;
insert into x_audit values
(systimestamp, command
);
execute immediate q'+alter session set nls_date_format = '+'|| nlsd || q'+'+';
end;
*/
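Note that the generated trigger inserts into an audit table that must already exist; a minimal definition matching the two values being inserted (the column names here are placeholders) could be:
create table x_audit (
audited_at timestamp,
dml_text clob
)
/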
That function only works for 'event' triggers as discussed here.
You should look into Fine-Grained Auditing as a mechanism for this. Details here
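As a rough sketch of what setting that up might look like (the object and policy names are made up; DB_EXTENDED records the SQL text and bind values in DBA_FGA_AUDIT_TRAIL, and FGA requires Enterprise Edition):
begin
dbms_fga.add_policy(
object_schema => user,
object_name => 'MYTABLE',
policy_name => 'MYTABLE_DML_FGA',
statement_types => 'INSERT, UPDATE, DELETE',
audit_trail => dbms_fga.db_extended);
end;
/
-- later, to see the captured statements:
select sql_text, sql_bind from dba_fga_audit_trail where object_name = 'MYTABLE';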
When the trigger code runs, don't you already know the DML that caused it to run?
CREATE OR REPLACE TRIGGER Print_salary_changes
BEFORE INSERT OR UPDATE ON Emp_tab
FOR EACH ROW
...
In this case it must have been an insert or an update statement on the emp_tab table.
To find out if it was an update or an insert:
if inserting then
...
elsif updating then
...
end if;
The exact column values are available in the :old and :new pseudo-columns.
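For instance, a sketch filling in the trigger body (assuming Emp_tab has a Sal column, as in the Oracle documentation example this trigger is based on):
CREATE OR REPLACE TRIGGER Print_salary_changes
BEFORE INSERT OR UPDATE ON Emp_tab
FOR EACH ROW
BEGIN
if inserting then
dbms_output.put_line('Inserted salary: ' || :new.Sal);
elsif updating then
dbms_output.put_line('Salary changed from ' || :old.Sal || ' to ' || :new.Sal);
end if;
END;
/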
