Execute Immediate Loop PL/SQL - Oracle

Can I use "execute immediate" with for cycle?
I need of generate a combinations of values in pl/sql.
About this I'm thinking of call a series of nested dynamics for cycle.
Example:
column 1 have this values: A,B
column 2 have this values: C,D,E
I would like to generate this combination:
AC / AD/ AE/ BC / BD /BE
I'm thinking to obtain this like:
for i in 1..count.column1
for j in 1..count.column2
dbms_output.put_line(column1.value(i)||'-'||column1.value(j));
end loop;
end loop;
Due to I don't know the number of column (variable),
Can I use a execute immediate?
declare
sql_stmt varchar2(200);
begin
sql_stmt := 'for i in 1..count.column1 for j in 1..count.column2 dbms_output.put_line(column1.value(i)||'-'||column1.value(j)); end loop; end loop';
execute immediate sql_stmt;
end;
But I get error ORA-06512.
How can I do this? :)
Thank you in advance for your suggestions!

If you really want to do this slowly in PL/SQL using nested loops, you don't need anything to be dynamic. You'd just want
for i in (select distinct column1 from your_table)
loop
  for j in (select distinct column2 from your_table)
  loop
    dbms_output.put_line( i.column1 || j.column2 );
  end loop;
end loop;
In the vast majority of cases, though, you'd be better off doing this in SQL
with col1 as (
select distinct col1 val
from your_table
),
col2 as (
select distinct col2 val
from your_table
)
select col1.val || col2.val
from col1
cross join col2

I think you can easily get the combinations using this select statement.
SELECT DISTINCT t1.col1, t2.col2
FROM table_name t1, table_name t2;
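If you want the single concatenated strings shown in the question (AC, AD, AE, ...), a small variation of the same idea is to concatenate in the select list (col1, col2 and table_name are the same placeholders used above):
SELECT DISTINCT t1.col1 || t2.col2 AS combo
FROM table_name t1, table_name t2;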

Related

Oracle Loop - declaration

I want to use a LOOP to go through all partitions in a table and change some data per partition.
I am starting with something like:
BEGIN
FOR n in (here is the select statement which chooses the partition names)
LOOP
UPDATE table_name
PARTITION (n)
SET
here are columns to change with new values;
COMMIT;
END LOOP;
END;
I get errors ORA-02149 and ORA-06512 saying that the partition does not exist.
Is it related to some declaration? How should I solve it?
Why bother with a partition name? What benefit do you expect? Simply
update table_name set
col1 = ...,
col2 = ...
where condition_goes_here --> this condition will "determine" the partition
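For example (a hypothetical SALES table range-partitioned by SALE_DATE), a predicate on the partition key is enough for Oracle to prune to the matching partition without ever naming it:
-- hypothetical table and columns; the partition-key predicate does the pruning
update sales
set status = 'ARCHIVED'
where sale_date >= date '2023-01-01'
and sale_date < date '2023-02-01';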
You can use execute immediate together with the user_tab_partitions data dictionary view, like this:
Begin
for c in ( select *
from user_tab_partitions p
where p.table_name = 'TABLE_NAME'
order by p.partition_position )
loop
execute immediate 'update '||c.table_name||' partition('||c.partition_name||')
set col1 = ''xYz'' ';
commit;
end loop;
End;
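If the new value is not a fixed literal, a hedged variant is to bind it with a placeholder instead of concatenating it into the statement (only the identifiers still need to be glued in):
execute immediate 'update '||c.table_name||' partition('||c.partition_name||')
set col1 = :new_val' using 'xYz';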

Identify and retrieve values of all varchar columns in a Oracle database

I have the following query that gives me a result set of all tables and columns in my Oracle database of VARCHAR columns:
SELECT ATC.OWNER, ATC.TABLE_NAME, ATC.COLUMN_NAME
FROM all_tab_columns ATC
WHERE DATA_TYPE LIKE '%VARCHAR%'
To this I want to add a 4th column that displays the value of ATC.COLUMN_NAME. Is there an easy way of doing this?
I thought of joining to a SQL statement that loops through ATC.COLUMN_NAME and outputs the value. The join would be done on the table name.
I don't know if I'm overcomplicating it, and I can't think of the SQL. I've tried declaring the above statement in a variable and then using a CTE to interrogate it, but I would still need to loop through the table_name and column_name values.
Is there a simpler way?
Edit: Sample data
You need to use dynamic SQL. This is a proof of concept; it will not scale well when run against a large database.
declare
stmt varchar2(32767);
val varchar2(4000);
rc sys_refcursor;
begin
for r in ( SELECT ATC.OWNER, ATC.TABLE_NAME, ATC.COLUMN_NAME
FROM all_tab_columns ATC
WHERE DATA_TYPE LIKE '%VARCHAR%' )
loop
stmt := ' select distinct '|| r.column_name ||
' from '|| r.owner||'.'||r.table_name;
open rc for stmt;
loop
fetch rc into val;
exit when rc%notfound;
dbms_output.put_line ( r.owner||'.'||r.table_name ||'.'|| r.column_name
||': '|| val );
end loop;
end loop;
end;
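Since the statement is built by concatenation, one optional hardening step (a sketch, assuming the standard DBMS_ASSERT package) is to quote the identifiers before gluing them in; this also copes with mixed-case or otherwise unusual names:
stmt := 'select distinct ' || dbms_assert.enquote_name(r.column_name, false) ||
' from ' || dbms_assert.enquote_name(r.owner, false) ||
'.' || dbms_assert.enquote_name(r.table_name, false);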

How to pass cursor values into variable?

I am trying to read values from two columns, column1 and column2, in table1 using a cursor. Then I want to pass these values to another cursor or a select into statement,
so my PL/SQL script can use the values of these two columns to get data from another table called table2.
Is this possible? And what's the best and fastest way to do something like that?
Thanks :)
Yes, it's possible to pass cursor values into variables. Just use fetch <cursor_name> into <variable_list> to fetch the next row from a cursor. After that you can use the variables in the where clause of a select into statement. E.g.,
declare
cursor c1 is select col1, col2 from table1;
l_col1 table1.col1%type;
l_col2 table1.col2%type;
l_col3 table2.col3%type;
begin
open c1;
loop
fetch c1 into l_col1, l_col2;
exit when c1%notfound;
select col3
into l_col3
from table2 t
where t.col1 = l_col1 --Assuming there is exactly one row in table2
and t.col2 = l_col2; --satisfying these conditions
end loop;
close c1;
end;
If you use an implicit cursor, then it's even simpler:
declare
l_col3 table2.col3%type;
begin
for i in (select col1, col2 from table1)
loop
select col3
into l_col3
from table2 t
where t.col1 = i.col1 --Assuming there is exactly one row in table2
and t.col2 = i.col2; --satisfying these conditions
end loop;
end;
Instead of issuing a separate select into for every fetched row, it's usually more efficient to fetch col3 with a scalar subquery in the same cursor:
begin
for i in (select t1.col1
, t1.col2
, (select t2.col3
from table2 t2
where t2.col1 = t1.col1 --Assuming there is atmost one such
and t2.col2 = t1.col2 --row in table2
) col3
from table1 t1)
loop
...
end loop;
end;
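If the "exactly one row" assumption in the select into versions may not hold, a hedged variation wraps the lookup in its own block and handles the two predefined exceptions:
begin
select col3
into l_col3
from table2 t
where t.col1 = l_col1
and t.col2 = l_col2;
exception
when no_data_found then
l_col3 := null; -- no matching row in table2
when too_many_rows then
raise; -- more than one match: decide what that should mean for your report
end;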

refactoring large cursor queries by splitting into multiple cursors

Another PL/SQL refactoring question!
I have several cursors that are of the general simplified form:
cursor_1 is
with X as (select col1, col2 from TAB where col1 = '1'),
Y as (select col1, col2 from TAB where col2 = '3'),
/*main select*/
select count(X.col1), ...
from X inner join Y on...
group by rollup (X.col1, ...
cursor_2 is
with X as (select col1, col2 from TAB where col1 = '7' and col2 = '9' and col3 = 'TEST'),
Y as (select col1, col2 from TAB where col3 = '6'),
/*main select*/
select count(X.col1), ...
from X inner join Y on...
group by rollup (X.col1, ...
cursor_3 is
with X as (select col1, col2 from TAB where col1 IS NULL ),
Y as (select col1, col2 from TAB where col2 IS NOT NULL ),
/*main select*/
select count(X.col1), ...
from X inner join Y on...
group by rollup (X.col1, ...
...
begin
for r in cursor_1 loop
print_report_results(r);
end loop;
for r in cursor_2 loop
print_report_results(r);
end loop;
...
end;
Basically, all of these cursors (there's more than 3) are the same summary/reporting queries. The difference is in the factored subqueries. There are always 2 factored subqueries, "X" and "Y", and they always select the same columns to feed into the main reporting query.
The problem is that the main reporting query is VERY large, about 70 lines. This itself isn't so bad, but it was copy-pasted for ALL of the reporting queries (I think there's over a dozen).
Since the only difference is in the factored subqueries (and they all return the same columns, it's really just a difference in the tables they select from and their conditions) I was hoping to find a way to refactor all this so that there is ONE query for the giant report and smaller ones for the various factored subqueries so that when changes are made to the way the report is done, I only have to do it in one place, not a dozen. Not to mention a much easier-to-navigate (and read) file!
I just don't know how to properly refactor something like this. I was thinking pipelined functions? I'm not sure they're appropriate for this though, or if there's a simpler way...
On the other hand, I also wonder if performance would be significantly worse by splitting out the reporting query. Performance (speed) is an issue for this system. I'd rather not introduce changes for developer convenience if it adds significant execution time.
I guess what I'd ultimately like is something that looks sort of like this (I'm just not sure how to do this so that it will actually compile):
cursor main_report_cursor (in_X, in_Y) is
with X as (select * from in_X),
Y as (select * from in_Y)
/*main select*/
select count(X.col1), ...
from X inner join Y on...
group by rollup (X.col1, ...
cursor x_1 is
select col1, col2 from TAB where col1 = '1';
cursor y_1 is
select col1, col2 from TAB where col2 = '3'
...
begin
for r in main_report_cursor(x_1,y_1) loop
print_report_results(r);
end loop;
for r in main_report_cursor(x_2,y_2) loop
print_report_results(r);
end loop;
...
(Using Oracle 10g)
Use a pipelined function. For example:
drop table my_tab;
create table my_tab
(
col1 number,
col2 varchar2(10),
col3 char(1)
);
insert into my_tab values (1, 'One', 'X');
insert into my_tab values (1, 'One', 'Y');
insert into my_tab values (2, 'Two', 'X');
insert into my_tab values (2, 'Two', 'Y');
insert into my_tab values (3, 'Three', 'X');
insert into my_tab values (4, 'Four', 'Y');
commit;
-- define types
create or replace package refcur_pkg is
--type people_tab is table of people%rowtype;
type my_subquery_tab is table of my_tab%rowtype;
end refcur_pkg;
Create the pipelined function
-- create pipelined function
create or replace function get_tab_data(p_cur_num in number, p_cur_type in char)
return REFCUR_PKG.my_subquery_tab pipelined
IS
v_ret REFCUR_PKG.my_subquery_tab;
begin
if (p_cur_num = 1) then
if (upper(p_cur_type) = 'X') then
for rec in (select * from my_tab where col1=1 and col3='X')
loop
pipe row(rec);
end loop;
elsif (upper(p_cur_type) = 'Y') then
for rec in (select * from my_tab where col1=1 and col3='Y')
loop
pipe row(rec);
end loop;
else
return;
end if;
elsif (p_cur_num = 2) then
if (upper(p_cur_type) = 'X') then
for rec in (select * from my_tab where col1=2 and col3='X')
loop
pipe row(rec);
end loop;
elsif (upper(p_cur_type) = 'Y') then
for rec in (select * from my_tab where col1=2 and col3='Y')
loop
pipe row(rec);
end loop;
else
return;
end if;
end if;
return;
end;
MAIN procedure example
-- main procedure/usage
declare
cursor sel_cur1 is
with X as (select * from table(get_tab_data(1, 'x'))),
Y as (select * from table(get_tab_data(1, 'y')))
select X.col1, Y.col2 from X,Y where X.col1 = Y.col1;
begin
for rec in sel_cur1
loop
dbms_output.put_line(rec.col1 || ',' || rec.col2);
end loop;
end;
All of your various subqueries are reduced to a call to a single pipelined function, which determines the rows to return.
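For instance, the second report would reuse exactly the same main query; only the arguments passed to the pipelined function change (same shape as sel_cur1 above):
declare
cursor sel_cur2 is
with X as (select * from table(get_tab_data(2, 'x'))),
Y as (select * from table(get_tab_data(2, 'y')))
select X.col1, Y.col2 from X,Y where X.col1 = Y.col1;
begin
for rec in sel_cur2
loop
dbms_output.put_line(rec.col1 || ',' || rec.col2);
end loop;
end;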
EDIT:
To combine all needed types and functions into 1 procedure, and also to use variables for subquery function parameters, I'm adding the following example:
create or replace procedure my_pipe
IS
-- define types
type my_subquery_tab is table of my_tab%rowtype;
type ref_cur_t is ref cursor;
v_ref_cur ref_cur_t;
-- define vars
v_with_sql varchar2(4000);
v_main_sql varchar2(32767);
v_x1 number;
v_x2 char;
v_y1 number;
v_y2 char;
v_col1 my_tab.col1%type;
v_col2 my_tab.col2%type;
-- define local functions/procs
function get_tab_data(p_cur_num in number, p_cur_type in char)
return my_subquery_tab pipelined
IS
v_ret my_subquery_tab;
begin
if (p_cur_num = 1) then
if (upper(p_cur_type) = 'X') then
for rec in (select * from my_tab where col1=1 and col3='X')
loop
pipe row(rec);
end loop;
elsif (upper(p_cur_type) = 'Y') then
for rec in (select * from my_tab where col1=1 and col3='Y')
loop
pipe row(rec);
end loop;
else
return;
end if;
elsif (p_cur_num = 2) then
if (upper(p_cur_type) = 'X') then
for rec in (select * from my_tab where col1=2 and col3='X')
loop
pipe row(rec);
end loop;
elsif (upper(p_cur_type) = 'Y') then
for rec in (select * from my_tab where col1=2 and col3='Y')
loop
pipe row(rec);
end loop;
else
return;
end if;
end if;
return;
end;
BEGIN
---------------------------------
-- Setup SQL for cursors
---------------------------------
-- this will have different parameter values for subqueries
v_with_sql := q'{
with X as (select * from table(get_tab_data(:x1, :x2))),
Y as (select * from table(get_tab_data(:y1, :y2)))
}';
-- this will stay the same for all cursors
v_main_sql := q'{
select X.col1, Y.col2 from X,Y where X.col1 = Y.col1
}';
---------------------------------
-- set initial subquery parameters
---------------------------------
v_x1 := 1;
v_x2 := 'x';
v_y1 := 1;
v_y2 := 'y';
open v_ref_cur for v_with_sql || v_main_sql using v_x1, v_x2, v_y1, v_y2;
loop
fetch v_ref_cur into v_col1, v_col2;
exit when v_ref_cur%notfound;
dbms_output.put_line(v_col1 || ',' || v_col2);
end loop;
close v_ref_cur;
---------------------------------
-- change subquery parameters
---------------------------------
v_x1 := 2;
v_x2 := 'x';
v_y1 := 2;
v_y2 := 'y';
open v_ref_cur for v_with_sql || v_main_sql using v_x1, v_x2, v_y1, v_y2;
loop
fetch v_ref_cur into v_col1, v_col2;
exit when v_ref_cur%notfound;
dbms_output.put_line(v_col1 || ',' || v_col2);
end loop;
close v_ref_cur;
end;
Note the benefit now is that even if you have many different cursors, you only need to define the main query and subquery SQL once. After that, you're just changing variables.
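A further optional refinement (purely a sketch of this pattern): wrap the repeated open/fetch/close block in a nested procedure, declared after get_tab_data, so each report run becomes one call once v_with_sql and v_main_sql have been assigned:
procedure run_report (p_x1 number, p_x2 char, p_y1 number, p_y2 char) is
begin
open v_ref_cur for v_with_sql || v_main_sql using p_x1, p_x2, p_y1, p_y2;
loop
fetch v_ref_cur into v_col1, v_col2;
exit when v_ref_cur%notfound;
dbms_output.put_line(v_col1 || ',' || v_col2);
end loop;
close v_ref_cur;
end run_report;
-- ...and in the executable section, after the SQL strings are set up:
-- run_report(1, 'x', 1, 'y');
-- run_report(2, 'x', 2, 'y');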
Cheers
--Create views that will be replaced by common table expressions later.
--The column names have to be the same, the actual content doesn't matter.
create or replace view x as select 'wrong' col1, 'wrong' col2 from dual;
create or replace view y as select 'wrong' col1, 'wrong' col2 from dual;
--Put the repetitive logic in one view
create or replace view main_select as
select count(x.col1) total, x.col2
from X inner join Y on x.col1 = y.col1
group by rollup (x.col1, x.col2);
--Just querying the view produces the wrong results
select * from main_select;
--But when you add the common table expressions X and Y they override
--the dummy views and produce the real results.
declare
cursor cursor_1 is
with X as (select 'right' col1, 'right' col2 from dual),
Y as (select 'right' col1, 'right' col2 from dual)
select total, col2 from main_select;
--... repeat for each cursor, just replace X and Y as necessary
begin
for r in cursor_1 loop
dbms_output.put_line(r.col2);
end loop;
null;
end;
/
This solution is a little weirder than the pipelined approach, and requires 3 new objects for the views, but it will probably run faster
since there is less context switching between SQL and PL/SQL.
One possibility you could consider is using 2 Global Temporary Tables (GTTs) for X and Y. Then you just need one cursor, but you have to clear and re-populate the 2 GTTs several times - and if data volumes are large you may want to get optimiser stats on the GTTs each time too.
This is the sort of thing I mean:
cursor_gtt is
select count(X.col1), ...
from GTT_X inner join GTT_Y on...
group by rollup (X.col1, ...
begin
insert into gtt_x select col1, col2 from TAB where col1 = '1';
insert into gtt_y select col1, col2 from TAB where col2 = '3';
-- maybe get stats for gtt_x and gtt_y here
for r in cursor_gtt loop
print_report_results(r);
end loop;
delete gtt_x;
delete gtt_y;
insert into gtt_x select col1, col2 from TAB where col1 = '7' and col2 = '9' and col3 = 'TEST';
insert into gtt_y select col1, col2 from TAB where col3 = '6';
-- maybe get stats for gtt_x and gtt_y here
for r in cursor_gtt loop
print_report_results(r);
end loop;
...
end;
So the same 2 GTTs are re-populated and the same cursor is used each time.
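This assumes the two GTTs already exist; a sketch of the definitions (column types here are illustrative) could look like the following. ON COMMIT PRESERVE ROWS keeps the rows for the whole session, so intermediate commits elsewhere in the job won't empty them:
create global temporary table gtt_x (
col1 varchar2(10),
col2 varchar2(10)
) on commit preserve rows;
create global temporary table gtt_y (
col1 varchar2(10),
col2 varchar2(10)
) on commit preserve rows;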
What about creating a view for the main query? That pretties up your code and centralizes the main query to boot.
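One hedged reading of this suggestion, combined with the GTT idea above: store the 70-line report query once as a view over the two work tables, so every cursor shrinks to a one-line select (names below are illustrative):
create or replace view report_main_v as
select count(x.col1) total, x.col2
from gtt_x x
inner join gtt_y y on x.col1 = y.col1
group by rollup (x.col1, x.col2);
-- then, after populating gtt_x / gtt_y for a given report:
-- for r in (select * from report_main_v) loop print_report_results(r); end loop;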

SELECT DISTINCT CLOB_COLUMN FROM TABLE;

I would like to find the distinct values that the column CLOB_COLUMN (of type CLOB) in the table COPIA can take.
I have worked out a procedural way to solve this problem, but I would prefer to use a simple SELECT like SELECT DISTINCT CLOB_COLUMN FROM TABLE, avoiding the error "ORA-00932: inconsistent datatypes: expected - got CLOB".
How can I achieve this?
Thank you in advance for your kind cooperation. This is the procedural way I've come up with:
-- Find the distinct CLOB values that can assume the column called CLOB_COLUMN (of type CLOB)
-- contained in the table called COPIA
-- Before the execution of the following PL/SQL script, the CLOB values (including duplicates)
-- are contained in the source table, called S1
-- At the end of the execution of the PL/SQL script, the distinct values of the column called CLOB_COLUMN
-- can be found in the target table called S2
BEGIN
EXECUTE IMMEDIATE 'TRUNCATE TABLE S1 DROP STORAGE';
EXECUTE IMMEDIATE 'DROP TABLE S1 CASCADE CONSTRAINTS PURGE';
EXCEPTION
WHEN OTHERS
THEN
BEGIN
NULL;
END;
END;
BEGIN
EXECUTE IMMEDIATE 'TRUNCATE TABLE S2 DROP STORAGE';
EXECUTE IMMEDIATE 'DROP TABLE S2 CASCADE CONSTRAINTS PURGE';
EXCEPTION
WHEN OTHERS
THEN
BEGIN
NULL;
END;
END;
CREATE GLOBAL TEMPORARY TABLE S1
ON COMMIT PRESERVE ROWS
AS
SELECT CLOB_COLUMN FROM COPIA;
CREATE GLOBAL TEMPORARY TABLE S2
ON COMMIT PRESERVE ROWS
AS
SELECT *
FROM S1
WHERE 3 = 9;
BEGIN
DECLARE
CONTEGGIO NUMBER;
CURSOR C1
IS
SELECT CLOB_COLUMN FROM S1;
C1_REC C1%ROWTYPE;
BEGIN
FOR C1_REC IN C1
LOOP
-- How many records, in S2 table, are equal to c1_rec.clob_column?
SELECT COUNT (*)
INTO CONTEGGIO
FROM S2 BETA
WHERE DBMS_LOB.
COMPARE (BETA.CLOB_COLUMN,
C1_REC.CLOB_COLUMN) = 0;
-- If it does not exist, in S2, a record equal to c1_rec.clob_column,
-- insert c1_rec.clob_column in the table called S2
IF CONTEGGIO = 0
THEN
BEGIN
INSERT INTO S2
VALUES (C1_REC.CLOB_COLUMN);
COMMIT;
END;
END IF;
END LOOP;
END;
END;
If it is acceptable to truncate your field to 32767 characters this works:
select distinct dbms_lob.substr(FIELD_CLOB,32767) from Table1
You could compare the hashes of the CLOB to determine if they are different:
SELECT your_clob
FROM your_table
WHERE ROWID IN (SELECT MIN(ROWID)
FROM your_table
GROUP BY dbms_crypto.HASH(your_clob, dbms_crypto.HASH_SH1))
Edit:
The HASH function doesn't guarantee that there will be no collision. By design however, it is really unlikely that you will get any collision. Still, if the collision risk (<2^80?) is not acceptable, you could improve the query by comparing (with dbms_lob.compare) the subset of rows that have the same hashes.
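A sketch of that improvement (using the same your_table / your_clob names as above): use the hash only as a cheap pre-filter, then confirm real duplicates with dbms_lob.compare before discarding them:
WITH rws AS (
SELECT t.your_clob,
dbms_crypto.HASH(t.your_clob, dbms_crypto.HASH_SH1) h,
ROW_NUMBER() OVER (ORDER BY t.ROWID) rn
FROM your_table t
)
SELECT r1.your_clob
FROM rws r1
WHERE NOT EXISTS (
SELECT 1
FROM rws r2
WHERE r2.h = r1.h -- same hash: candidate duplicate
AND dbms_lob.compare(r1.your_clob, r2.your_clob) = 0 -- confirmed duplicate
AND r2.rn < r1.rn ); -- keep only the first occurrence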
Add TO_CHAR after the DISTINCT keyword to convert the CLOB to CHAR:
SELECT DISTINCT TO_CHAR(CLOB_FIELD) FROM table1; -- this will return the distinct values in CLOB_FIELD
Use this approach. In the table profile, the column content is an NCLOB. I added the where clause to reduce the time it takes to run, which is high:
with
r as (select rownum i, content from profile where package = 'intl'),
s as (select distinct (select min(i) from r where dbms_lob.compare(r.content, t.content) = 0) min_i from profile t where t.package = 'intl')
select (select content from r where r.i = s.min_i) content from s
;
It is not about to win any prizes for efficiency but should work.
select distinct DBMS_LOB.substr(column_name, 3000) from table_name;
If truncating the clob to the size of a varchar2 won't work, and you're worried about hash collisions, you can:
Add a row number to every row;
Use DBMS_lob.compare in a not exists subquery. Exclude duplicates (this means: compare = 0) with a higher rownum.
For example:
create table t (
c1 clob
);
insert into t values ( 'xxx' );
insert into t values ( 'xxx' );
insert into t values ( 'yyy' );
commit;
with rws as (
select row_number () over ( order by rowid ) rn,
t.*
from t
)
select c1 from rws r1
where not exists (
select * from rws r2
where dbms_lob.compare ( r1.c1, r2.c1 ) = 0
and r1.rn > r2.rn
);
C1
xxx
yyy
To bypass the Oracle error, you have to do something like this:
SELECT CLOB_COLUMN FROM COPIA C1
WHERE C1.ID IN (SELECT DISTINCT C2.ID FROM COPIA C2 WHERE ....)
I know this is an old question, but I believe I've figured out a better way to do what you are asking.
It is kind of like a cheat, really... The idea behind it is that you can't do a DISTINCT on a CLOB column, but you can do a DISTINCT on a LISTAGG of the CLOB column... you just need to play with the partition clause of the LISTAGG function to make sure it will only return one value.
With that in mind...here is my solution.
SELECT DISTINCT listagg(clob_column,'| ') within GROUP (ORDER BY unique_id) over (PARTITION BY unique_id) clob_column
FROM copia;
