Trying to use FORALL to insert data dynamically into a table passed to the procedure - Oracle

I need to dynamically specify the name of a table that has the same data structure as many others, so that I can pass in a generic associative array of that same structure. Here is the procedure:
PROCEDURE INSRT_INTER_TBL(P_TABLE_NAME IN VARCHAR2, P_DATA IN tt_type)
IS
BEGIN
FORALL i IN P_DATA.FIRST .. P_DATA.LAST
EXECUTE IMMEDIATE
'INSERT INTO ' || P_TABLE_NAME ||
' VALUES :1'
USING P_DATA(i);
END INSRT_INTER_TBL;
I am getting the following error
ORA-01006: bind variable does not exist
What am I missing here?
So I had to spell out all the columns needed for the insert in the INSERT statement, like:
PROCEDURE INSRT_INTER_TBL(P_TABLE_NAME IN VARCHAR2, P_DATA IN inter_invc_ln_item_type)
IS
BEGIN
FORALL i IN P_DATA.FIRST .. P_DATA.LAST
EXECUTE IMMEDIATE
'INSERT INTO ' || P_TABLE_NAME || ' (ITEM_PK, pk, units, amt) ' ||
' VALUES (:P_INVC_LN_ITEM_PK, :PK, :UNITS, :AMT)'
USING IN P_DATA(i).item_pk, P_DATA(i).pk, P_DATA(i).units, P_DATA(i).amt;
END INSRT_INTER_TBL;

The TABLE operator works better than a FORALL here. It uses less code and probably skips some SQL-to-PL/SQL context switches.
--Simple record and table type.
create or replace type tt_rec is object
(
a number,
b number
);
create or replace type tt_type is table of tt_rec;
--Sample schema that will hold results.
create table test1(a number, b number);
--PL/SQL block that inserts TT_TYPE into a table.
declare
p_table_name varchar2(100) := 'test1';
p_data tt_type := tt_type(tt_rec(1,1), tt_rec(2,2));
begin
execute immediate
'
insert into '||p_table_name||'
select * from table(:p_data)
'
using p_data;
commit;
end;
/
You can run the above code in this SQL Fiddle.
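To sanity-check the dynamic insert, you can query the target table afterwards; assuming the block above ran cleanly, the two TT_REC instances should be there:
select * from test1;
--expected rows: (1,1) and (2,2)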

Try VALUES (:1), i.e. put parentheses around :1.
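For what it's worth, here is a minimal, runnable sketch of that parenthesized bind with FORALL and EXECUTE IMMEDIATE, using a hypothetical one-column table and a collection of scalars (binding a whole record per row is a separate restriction; with a multi-column table you still end up binding each field, as in the workaround above):
--assumes: create table test_one_col(a number);
declare
  type t_num_tab is table of number index by pls_integer;
  l_data t_num_tab;
  l_table_name varchar2(30) := 'TEST_ONE_COL';
begin
  l_data(1) := 10;
  l_data(2) := 20;
  forall i in l_data.first .. l_data.last
    execute immediate
      'insert into ' || l_table_name || ' values (:1)'
      using l_data(i);
end;
/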

Related

Procedure Results with Variable Column Alias - PLSQL/Oracle

I've got this procedure working in TOAD/PLSQL, but I would like the alias for the first column to be set to the field_name argument passed to the procedure. I think I need to build the query as a string, like
query := 'Select 1 as ' || field_name || ' From Dual';
but I am not getting it right. Is what I have in mind possible?
Thanks, and the working code I'm trying to modify is below.
Create or Replace Procedure Delete_Me(field_name NVarChar2)
as
result_set sys_refcursor;
BEGIN
open result_set for
Select
Elapsed_Time((Select Start_Time From Temp_Time1)) as field_name
,To_Char(SysDate, 'HH12:MI:SS AM') as Time_Completed
,Elapsed_Time((Select Start_Time From Temp_Time0)) as Process_So_Far
From
Dual;
DBMS_SQL.RETURN_RESULT(result_set);
End;
After comment:
I pass the procedure a string and its value is placed in field_name. I would like the alias of the first column to adopt the value of field_name. So if I call the procedure like this:
BEGIN
DeleteMe('Random_Column_Name');
END;
The first column would be called "Random_Column_Name". If I called the procedure this way:
BEGIN
DeleteMe('Different_Column_Name');
END;
The first column would be named "Different_Column_Name".
After Dmitry's second comment:
It doesn't mean anything. It's an example of what I've tried and failed to get to work.
I understand that you need to build a dynamic query; the way to do that is something like this:
DECLARE
TYPE ty_refcur IS REF CURSOR;
c_mycur ty_refcur;
whatever varchar2(200) := 'whatever';
my_query varchar2(500);
BEGIN
my_query := 'Select ''hello'' as '||whatever||' from dual';
OPEN c_mycur FOR my_query;
--whatever you want to do
CLOSE c_mycur;
END;
Here's what I finally came up with. Stanimir helped me understand how variables are used. Thanks for that!
Create or Replace Procedure Report_Elapsed_Time(field_name0 NVarChar2,
field_name1 NVarChar2)
as
result_set sys_refcursor;
query VarChar2(30000);
BEGIN
query := 'Select
Elapsed_Time((Select Start_Time From Temp_Time1)) as ' || Replace(field_name0, ' ', '_') || '
,To_Char(SysDate, ''HH12:MI:SS AM'') as Time_Completed
,Elapsed_Time((Select Start_Time From Temp_Time0)) as ' || Replace(field_name1, ' ', '_') || '
From
Dual';
open result_set for query;
DBMS_SQL.RETURN_RESULT(result_set);
End;
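A call would then look something like this (the column labels are illustrative; the Elapsed_Time function and the Temp_Time tables are assumed to exist, as in the original code, and DBMS_SQL.RETURN_RESULT requires Oracle 12c or later):
BEGIN
  Report_Elapsed_Time('Load Phase Time', 'Total Run Time');
END;
/
--the implicit result set comes back with the first and third columns
--aliased LOAD_PHASE_TIME and TOTAL_RUN_TIME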

How to access and query objects passed as a parameter to a procedure while converting from Oracle to PostgreSQL

I have a procedure in Oracle that I need to convert to PostgreSQL and need help with it. It passes a collection of objects to a procedure. The procedure then checks whether each object is present in a database table; if it is, it prints a message that that specific element is found/present. If some element passed to the procedure is not present in the table, the procedure simply does nothing. I have to write the equivalent of that in PostgreSQL. I think the heart of the issue is this statement:
SELECT COUNT (*)
INTO v_cnt
FROM TABLE (p_cust_tab_type_i) pt
WHERE pt.ssn = cc.ssn;
In Oracle a collection can be treated as a table and queried, but I don't know how to do that in PostgreSQL. The code to create the table, add data, create the procedure, and call the procedure by passing the collection (3 objects), along with its output, is posted below. Can someone suggest how this can be done in PostgreSQL?
The Oracle-related code and details follow:
--create table
create table temp_n_tab1
(ssn number,
fname varchar2(20),
lname varchar2(20),
items varchar2(100));
/
--add data
insert into temp_n_tab1 values (1,'f1','l1','i1');
--skip ssn no. 2 intentionally
insert into temp_n_tab1 values (3,'f3','l3','i3');
insert into temp_n_tab1 values (4,'f4','l4','i4');
insert into temp_n_tab1 values (5,'f5','l5','i5');
insert into temp_n_tab1 values (6,'f6','l6','i6');
commit;
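The question does not include the DDL for the collection type; judging by how the caller constructs it below, it is presumably something along these lines (my assumption, not from the original post):
--assumed object and nested table types, inferred from the caller
create or replace type temp_n_cust_header_type as object
(ssn number,
fname varchar2(20),
lname varchar2(20),
items varchar2(100));
/
create or replace type temp_n_customer_tab_type as table of temp_n_cust_header_type;
/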
--create procedure
SET SERVEROUTPUT ON
CREATE OR REPLACE PROCEDURE temp_n_proc (
p_cust_tab_type_i IN temp_n_customer_tab_type)
IS
t_cust_tab_type_i temp_n_customer_tab_type;
v_cnt NUMBER;
v_ssn temp_n_tab1.ssn%TYPE;
CURSOR c
IS
SELECT ssn
FROM temp_n_tab1
ORDER BY 1;
BEGIN
--t_cust_tab_type_i := p_cust_tab_type_i();
FOR cc IN c
LOOP
SELECT COUNT (*)
INTO v_cnt
FROM TABLE (p_cust_tab_type_i) pt
WHERE pt.ssn = cc.ssn;
IF (v_cnt > 0)
THEN
DBMS_OUTPUT.put_line (
'The array element '
|| TO_CHAR (cc.ssn)
|| ' exists in the table.');
END IF;
END LOOP;
EXCEPTION
WHEN OTHERS
THEN
DBMS_OUTPUT.PUT_LINE (SQLERRM);
END;
/
--caller proc
SET SERVEROUTPUT ON
declare
array temp_n_customer_tab_type := temp_n_customer_tab_type();
begin
for i in 1 .. 3
loop
array.extend;
array(i) := temp_n_cust_header_type( i, 'name ' || i, 'lname ' || i,i*i*i*i );
end loop;
temp_n_proc( array );
end;
/
caller proc output:
The array element 1 exists in the table.
The array element 3 exists in the table.
When you create a table in Postgres, a type with the same name is also created. So you can simply pass an array of the table's type as a parameter to the function.
Inside the function you can then use unnest() to treat the array like a table.
The following is the closest match to your original Oracle code:
create function temp_n_proc(p_cust_tab_type_i temp_n_tab1[])
returns void
as
$$
declare
l_rec record;
l_msg text;
l_count integer;
BEGIN
for l_rec in select t1.ssn
from temp_n_tab1 t1
loop
select count(*)
into l_count
from unnest(p_cust_tab_type_i) as t
where t.ssn = l_rec.ssn;
if l_count > 0 then
raise notice 'The array element % exists in the table', l_rec.ssn;
end if;
end loop;
END;
$$
language plpgsql;
The row-by-row processing is not a good idea to begin with (neither in Postgres, nor in Oracle). It would be a lot more efficient to get the existing elements in a single query:
create function temp_n_proc(p_cust_tab_type_i temp_n_tab1[])
returns void
as
$$
declare
l_rec record;
l_msg text;
BEGIN
for l_rec in select t1.ssn
from temp_n_tab1 t1
where t1.ssn in (select t.ssn
from unnest(p_cust_tab_type_i) as t)
loop
raise notice 'The array element % exists in the table', l_rec.ssn;
end loop;
return;
END;
$$
language plpgsql;
You can call the function like this:
select temp_n_proc(array[row(1,'f1','l1','i1'),
row(2,'f2','l2','i2'),
row(3,'f3','l3','i3')
]::temp_n_tab1[]);
However, a more Postgres-like and much more efficient way would be to not use PL/pgSQL for this at all, but to create a simple SQL function that returns the messages as a result:
create or replace function temp_n_proc(p_cust_tab_type_i temp_n_tab1[])
returns table(message text)
as
$$
select format('The array element %s exists in the table', t1.ssn)
from temp_n_tab1 t1
where t1.ssn in (select t.ssn
from unnest(p_cust_tab_type_i) as t)
$$
language sql;
This returns the output of the function as a result rather than using the clumsy raise notice.
You can use it like this:
select *
from temp_n_proc(array[row(1,'f1','l1','i1'),
row(2,'f2','l2','i2'),
row(3,'f3','l3','i3')
]::temp_n_tab1[]);

Oracle cursor with variable columns/tables/criteria

I need to open a cursor whose table name, columns and WHERE clause vary. The table name etc. will be passed as parameters. For example:
CURSOR batch_cur
IS
SELECT a.col_1, b.col_1
FROM table_1 a inner join table_2 b
ON a.col_2 = b.col_2
WHERE a.col_3 = 123
Here, the projected columns, table names, join criteria and WHERE clause will be passed as parameters. Once the cursor is opened, I need to loop through and process each fetched record.
You need to use dynamic SQL something like this:
procedure dynamic_proc
( p_table_1 varchar2
, p_table_2 varchar2
, p_value number
)
is
batch_cur sys_refcursor;
begin
open batch_cur for
'select a.col_1, b.col_1
from ' || p_table_1 || ' a inner join ' || p_table_2 || ' b
on a.col_2 = b.col_2
where a.col_3 = :bind_value1'
using p_value;
-- Now fetch data from batch_cur...
end;
Note the use of a bind variable for the data value - very important if you will re-use this many times with different values.
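Filling in the fetch part, a complete sketch could look like the following; I've assumed NUMBER for both projected columns, so adjust the local variables to the real datatypes:
create or replace procedure dynamic_proc
( p_table_1 varchar2
, p_table_2 varchar2
, p_value number
)
is
  batch_cur sys_refcursor;
  l_a_col1 number; -- assumed datatype
  l_b_col1 number; -- assumed datatype
begin
  open batch_cur for
    'select a.col_1, b.col_1
     from ' || p_table_1 || ' a inner join ' || p_table_2 || ' b
     on a.col_2 = b.col_2
     where a.col_3 = :bind_value1'
    using p_value;
  loop
    fetch batch_cur into l_a_col1, l_b_col1;
    exit when batch_cur%notfound;
    -- process each fetched record here
    dbms_output.put_line(l_a_col1 || ' / ' || l_b_col1);
  end loop;
  close batch_cur;
end;
/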
From your question I guess you need a dynamic cursor. Oracle provides REF CURSOR for dynamic SQL statements. Since your query will be built dynamically, you need a ref cursor to do that.
create or replace procedure SP_REF_CHECK(v_col1 varchar2, v_col2 varchar2, v_tab1 varchar2, v_var1 varchar2, v_var2 varchar2)
is
type Ref_cur is REF CURSOR;
My_cur Ref_cur;
My_val1 varchar2(4000);
My_val2 varchar2(4000);
stmt varchar2(500);
begin
--identifiers (column and table names) cannot be bind variables,
--so they are concatenated; only the comparison value is bound
stmt := 'select '||v_col1||','||v_col2||' from '||v_tab1||' where '||v_var1||' = :1';
open My_cur for stmt using v_var2;
loop
fetch My_cur into My_val1, My_val2;
exit when My_cur%notfound;
--do some processing
end loop;
close My_cur;
end;
Check this link for more http://docs.oracle.com/cd/B10500_01/appdev.920/a96624/11_dynam.htm

Using OLD and NEW object for dynamic operations inside trigger

I want to know whether I can use the OLD and NEW objects for dynamic operations inside trigger.
What I am looking for is something like this :-
ABC is the table for which I need to write the trigger.
Track_Table maintains the list of columns, per table, that need to be tracked (logged).
f_log is a function that inserts data changes into a tracking (log) table.
CREATE OR REPLACE TRIGGER trg_TRACK
AFTER INSERT OR UPDATE OR DELETE ON ABC
FOR EACH ROW
declare
v_old_val varchar2(1000);
v_new_val varchar2(1000);
n_ret int;
n_id varchar(50);
cursor cur_col is
SELECT COLUMN_NAME,
TABLE_name
FROM track_TABLE
WHERE upper(TABLE_NAME) = upper('ABC')
AND exists (select cname
from col
where UPPER(tname) =upper('ABC')
and upper(cname)=upper(COLUMN_NAME))
AND upper(allow) = 'Y';
begin
n_id:= :old.id;
for i_get_col in cur_col
loop
execute immediate
'begin
:v_old_val:= select '||i_get_col.column_name ||'
from '||:old ||'
where id = '||n_id ||';
end;' using out v_old_val;
execute immediate
'begin
:v_new_val:= select '||i_get_col.column_name ||'
from '||:new ||'
where id = '||n_id ||';
end;' using out v_new_val;
n_ret := f_log(n_id,i_get_col.column_name,v_old_val,v_new_val);
end loop;
end;
/
One option: push the logic that checks whether a column is being tracked into the f_log procedure, and then pass across all of the columns.
For example, if your track_table holds (table_name, column_name, allow) values for each column that you want to track, then something like this:
CREATE OR REPLACE PROCEDURE f_log( p_id varchar2
,p_table_name varchar2
,p_column_name varchar2
,p_old_val varchar2
,p_new_val varchar2)
as
l_exists number;
cursor chk_column_track IS
SELECT 1
FROM track_TABLE
WHERE upper(TABLE_NAME) = upper(p_table_name)
AND UPPER(column_name) = upper(p_column_name)
AND upper(allow) = 'Y';
begin
open chk_column_track;
fetch chk_column_track into l_exists;
if chk_column_track%found then
--do the insert here
end if;
close chk_column_track;
end;
/
CREATE OR REPLACE TRIGGER trg_TRACK
AFTER INSERT OR UPDATE OR DELETE ON ABC
FOR EACH ROW
DECLARE
n_id varchar(50);
BEGIN
n_id := NVL(:old.id, :new.id);
-- send all of the values to f_log and have it decide whether to save them
f_log(n_id,'ABC','COL1',:old.col1,:new.col1);
f_log(n_id,'ABC','COL2',:old.col2,:new.col2);
f_log(n_id,'ABC','COL3',:old.col3,:new.col3);
...
END;
And for goodness' sake, upper-case the values in your track_table on insert so that you don't have to UPPER() the stored values, which would make any index on those values useless!
Now, this will chew up some resources checking each column name on each operation, but if you are not running high-volumes then it might be manageable.
Otherwise you will need a more elegant solution. Like leveraging the power of collections and the TABLE() clause to do the track_table lookup in a bulk operation. Bear in mind that I am away from my database at the moment, so I have not test-compiled this code.
CREATE OR REPLACE TYPE t_audit_row AS OBJECT (
p_table_name varchar2(30)
,p_column_name varchar2(30)
,p_id varchar2(50)
,p_old_val varchar2(2000)
,p_new_val varchar2(2000)
);
CREATE OR REPLACE TYPE t_audit_row_table AS TABLE OF t_audit_row;
CREATE OR REPLACE PROCEDURE f_log (p_audit_row_table t_audit_Row_table)
AS
begin
-- see how we can match the contents of the collection to the values
-- in the table all in one query. the insert is just my way of showing
-- how this can be done in one bulk operation. Alternately you could make
-- the select a cursor and loop through the rows to process them individually.
insert into my_audit_log (table_name, column_name, id, old_val, new_val)
select p_table_name
,p_column_name
,p_id
,p_old_val
,p_new_val
FROM track_TABLE TT
,table(p_audit_row_table) art
WHERE tt.TABLE_NAME = art.p_table_name
AND tt.column_name = art.p_column_name
AND tt.allow = 'Y';
end;
/
CREATE OR REPLACE TRIGGER trg_TRACK
AFTER INSERT OR UPDATE OR DELETE ON ABC
FOR EACH ROW
DECLARE
l_id varchar(50);
l_audit_table t_audit_row_table;
BEGIN
l_id := NVL(:old.id, :new.id);
-- send all of the values to f_log and have it decide whether to save them
l_audit_table := t_audit_row_table (
t_audit_row ('ABC','COL1',l_id, :old.col1, :new.col1)
,t_audit_row ('ABC','COL2',l_id, :old.col2, :new.col2)
,t_audit_row ('ABC','COL3',l_id, :old.col3, :new.col3)
,...
,t_audit_row ('ABC','COLn',l_id, :old.coln, :new.coln)
);
f_log(l_audit_table);
end;
/
No, you cannot access the OLD and NEW pseudo-variables dynamically. What you can do is use your track_table data in a script or procedure to generate static triggers that look like:
CREATE OR REPLACE TRIGGER trg_TRACK
AFTER INSERT OR UPDATE OR DELETE ON ABC
FOR EACH ROW
DECLARE
n_id varchar(50);
BEGIN
n_id := NVL(:old.id, :new.id);
f_log(n_id,'COL1',:old.col1,:new.col1);
f_log(n_id,'COL3',:old.col3,:new.col3);
...
END;
So if the data in the track_table changes, you just have to re-generate the triggers.
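A rough sketch of such a generator is below. It assumes track_table has table_name, column_name and allow columns as in the question, that every tracked table has an ID column, and that f_log keeps the (id, column name, old value, new value) signature used above; it only prints the trigger DDL so you can review it before running it.
declare
  l_sql varchar2(32767);
begin
  for t in (select distinct upper(table_name) as table_name
            from track_table
            where upper(allow) = 'Y')
  loop
    l_sql := 'create or replace trigger trg_track_' || t.table_name || chr(10)
          || 'after insert or update or delete on ' || t.table_name || chr(10)
          || 'for each row' || chr(10)
          || 'declare' || chr(10)
          || '  n_id varchar2(50);' || chr(10)
          || 'begin' || chr(10)
          || '  n_id := nvl(:old.id, :new.id);' || chr(10);
    for c in (select column_name
              from track_table
              where upper(table_name) = t.table_name
              and upper(allow) = 'Y')
    loop
      l_sql := l_sql || '  f_log(n_id, ''' || upper(c.column_name) || ''', :old.'
            || c.column_name || ', :new.' || c.column_name || ');' || chr(10);
    end loop;
    l_sql := l_sql || 'end;';
    dbms_output.put_line(l_sql);
    --or: execute immediate l_sql;
  end loop;
end;
/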

Searching data from table by passing table name as a parameter in PL/SQL

Is this stored procedure in Oracle correct for searching data in a table by passing the table name as a parameter?
CREATE OR REPLACE PROCEDURE bank_search_sp
(
p_tablename IN VARCHAR2,
p_searchname IN VARCHAR2,
p_bankcode OUT VARCHAR2,
p_bankname OUT VARCHAR2,
p_dist_code OUT NUMBER
)
AS
v_tem VARCHAR2(5000);
BEGIN
v_tem := 'SELECT bankcode,bankname,dist_code FROM ' || UPPER (p_tablename) || '
WHERE bankname LIKE '''|| p_searchname||'''';
EXECUTE IMMEDIATE v_tem
INTO p_bankcode,p_bankname,p_dist_code
USING p_searchname ;
END bank_search_sp;
If you need this procedure, then I guess that you have several tables with the columns bankcode, bankname and dist_code. If this is true, then try to normalize your model if possible.
The USING clause is the correct approach, but you have to actually reference the bind variable in your query.
To avoid SQL injection, you could use dbms_assert.sql_object_name.
This should work for you:
v_tem := 'SELECT bankcode, bankname, dist_code FROM '
|| dbms_assert.sql_object_name(p_tablename)
|| ' WHERE bankname LIKE :1';
Your EXECUTE IMMEDIATE will throw an exception when finding no row or more than one row, so using LIKE might not be a good idea.
Questions that you should ask yourself:
Is the model properly normalized?
Do you really need to use LIKE, or is = what you want?
If you want to use LIKE, how should the program deal with NO_DATA_FOUND / TOO_MANY_ROWS exceptions?
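Putting those suggestions together, a sketch might look like this; whether you keep LIKE or switch to =, and how you handle the two exceptions, depends on your requirements (the handling below is just one possible choice):
CREATE OR REPLACE PROCEDURE bank_search_sp
(
  p_tablename IN VARCHAR2,
  p_searchname IN VARCHAR2,
  p_bankcode OUT VARCHAR2,
  p_bankname OUT VARCHAR2,
  p_dist_code OUT NUMBER
)
AS
  v_tem VARCHAR2(5000);
BEGIN
  v_tem := 'SELECT bankcode, bankname, dist_code FROM '
        || dbms_assert.sql_object_name(p_tablename)
        || ' WHERE bankname LIKE :1';
  EXECUTE IMMEDIATE v_tem
    INTO p_bankcode, p_bankname, p_dist_code
    USING p_searchname;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    --no matching bank: return NULLs (one possible choice)
    p_bankcode := NULL;
    p_bankname := NULL;
    p_dist_code := NULL;
  WHEN TOO_MANY_ROWS THEN
    RAISE_APPLICATION_ERROR(-20001, 'More than one bank matches ' || p_searchname);
END bank_search_sp;
/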
