PL/SQL issue concerning Frequent Itemset - Oracle

I'm trying to build a PL/SQL application to mine frequent item sets out of a set of given data and I've run into a bit of a snag. My PL/SQL skills aren't as good as I'd like them to be, so perhaps one of you can help me understand this a bit better.
So to begin, I'm using the Oracle data mining table function: *DBMS_FREQUENT_ITEMSET.FI_TRANSACTIONAL*
While reading the documentation, I came across the following example which I have manipulated to query over my data set:
CREATE OR REPLACE TYPE FI_VARCHAR_NT AS TABLE OF NUMBER;
/
CREATE TYPE fi_res AS OBJECT (
  itemset     FI_VARCHAR_NT,
  support     NUMBER,
  length      NUMBER,
  total_tranx NUMBER
);
/
CREATE TYPE fi_coll AS TABLE OF fi_res;
/
CREATE OR REPLACE PROCEDURE freq_itemset_test IS
  CURSOR freqC IS
    SELECT itemset
    FROM TABLE(
      CAST(DBMS_FREQUENT_ITEMSET.FI_TRANSACTIONAL(
             CURSOR(SELECT sale.customerid, sale.productid
                    FROM Sale
                    INNER JOIN Customer
                       ON customer.customerid = sale.customerid
                    WHERE customer.region = 'Canada'),
             0, 2, 2, NULL, NULL) AS fi_coll));
  coll_nt  FI_VARCHAR_NT;
  num_rows INT;
  num_itms INT;
BEGIN
  num_rows := 0;
  num_itms := 0;
  OPEN freqC;
  LOOP
    FETCH freqC INTO coll_nt;
    EXIT WHEN freqC%NOTFOUND;
    num_rows := num_rows + 1;
    num_itms := num_itms + coll_nt.count;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Rows: ' || num_rows || ' Columns: ' || num_itms);
  CLOSE freqC;
END;
/
My reasoning for using Oracle's FI_TRANSACTIONAL over straight SQL is that I will need to repeat this analysis for multiple dynamic values of K, so why reinvent the wheel? Ultimately, my goal is to reference each individual itemset returned by the function and return the set with the highest support based on some query logic. I will be incorporating this block of PL/SQL into another that basically changes the literal in the query from 'Canada' to multiple other regions based on the content of the data.
My question is: How can I actually get a programmatic reference to the data returned by the cursor (freqC)? Obviously I do not need to count the rows and columns; that was just part of the example. I'd like to print out the itemsets with DBMS_OUTPUT.PUT_LINE after I've found the most frequently occurring itemset. When I view this in a debugger, I see that each fetch of the cursor actually returns an itemset (in this case, k=2, so two items). But how do I actually touch them programmatically? I'd like to grab the sets themselves as well as fi_res.support.
As always, thanks to everyone for sharing their brilliance!

You are fetching your data into a nested table. So to see the data in there, you would need to loop over the nested table:
FOR i IN coll_nt.FIRST .. coll_nt.LAST
LOOP
  dbms_output.put_line(i || ': ' || coll_nt(i));
END LOOP;
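To also get at the support, select it alongside the itemset and fetch the two together. Here is a minimal sketch based on the types from the question (the Customer join is dropped for brevity, and freq_itemset_support is a made-up name):
CREATE OR REPLACE PROCEDURE freq_itemset_support IS
  CURSOR freqC IS
    SELECT t.itemset, t.support
    FROM TABLE(
      CAST(DBMS_FREQUENT_ITEMSET.FI_TRANSACTIONAL(
             CURSOR(SELECT sale.customerid, sale.productid FROM Sale),
             0, 2, 2, NULL, NULL) AS fi_coll)) t;
  coll_nt     FI_VARCHAR_NT;
  v_support   NUMBER;
  max_support NUMBER := -1;
  best_set    FI_VARCHAR_NT;
BEGIN
  OPEN freqC;
  LOOP
    -- each fetch delivers one itemset plus its support
    FETCH freqC INTO coll_nt, v_support;
    EXIT WHEN freqC%NOTFOUND;
    IF v_support > max_support THEN
      max_support := v_support;
      best_set    := coll_nt;
    END IF;
  END LOOP;
  CLOSE freqC;
  IF best_set IS NOT NULL THEN
    FOR i IN 1 .. best_set.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('item ' || i || ': ' || best_set(i) ||
                           ' (support ' || max_support || ')');
    END LOOP;
  END IF;
END;
/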
For much more information on nested tables and other types of collections, see the presentation at:
http://www.toadworld.com/platforms/oracle/w/wiki/8253.everything-you-need-to-know-about-collections-but-were-afraid-to-ask.aspx

Related

Is there a hint to generate execution plan ignoring the existing one from shared pool?

There is not a hint to create an execution plan that ignores plans in the shared pool. A more common way of phrasing this question is: how do I get Oracle to always perform a hard parse?
There are a few weird situations where this behavior is required. It would be helpful to fully explain your reason for needing this, as the solution varies depending on why you need it:
1. Strange performance problem. Oracle performs some dynamic re-optimization of SQL statements after the first run, such as adaptive cursor sharing and cardinality feedback. In the rare case when those features backfire, you might want to disable them.
2. Dynamic query. You have a dynamic query that uses Oracle data cartridge to fetch data in the parse step, but Oracle won't execute the parse step because the query looks static to Oracle.
3. Misunderstanding. Something has gone wrong and this is an XY problem.
Solutions
The simplest way to solve this problem is Thorsten Kettner's solution of changing the query text each time (see the second answer below).
If that's not an option, the second simplest solution is to flush the query from the shared pool, like this:
--This only works one node at a time.
begin
  for statements in
  (
    select distinct address, hash_value
    from gv$sql
    where sql_id = '33t9pk44udr4x'
    order by 1, 2
  ) loop
    sys.dbms_shared_pool.purge(statements.address || ',' || statements.hash_value, 'C');
  end loop;
end;
/
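If you do not already know the sql_id ('33t9pk44udr4x' above is just an example), one way to look it up is by a distinctive fragment of the statement text:
select sql_id, child_number, sql_text
from gv$sql
where sql_text like '%some distinctive fragment%';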
If you have no control over the SQL and need to fix the problem using a side-effect-style solution, Jonathan Lewis and Randolf Geist have a solution using Virtual Private Database that adds a unique predicate to each SQL statement on a specific table. You asked for something weird; here's a weird solution. Buckle up.
-- Create a random predicate for each query on a specific table.
create table hard_parse_test_rand as
select * from all_objects
where rownum <= 1000;

begin
  dbms_stats.gather_table_stats(null, 'hard_parse_test_rand');
end;
/

create or replace package pkg_rls_force_hard_parse_rand is
  function force_hard_parse (in_schema varchar2, in_object varchar2) return varchar2;
end pkg_rls_force_hard_parse_rand;
/

create or replace package body pkg_rls_force_hard_parse_rand is
  function force_hard_parse (in_schema varchar2, in_object varchar2) return varchar2
  is
    s_predicate varchar2(100);
    n_random    pls_integer;
  begin
    n_random := round(dbms_random.value(1, 1000000));
    -- s_predicate := '1 = 1';
    s_predicate := to_char(n_random, 'TM') || ' = ' || to_char(n_random, 'TM');
    -- s_predicate := 'object_type = ''TABLE''';
    return s_predicate;
  end force_hard_parse;
end pkg_rls_force_hard_parse_rand;
/

begin
  DBMS_RLS.ADD_POLICY (USER, 'hard_parse_test_rand', 'hard_parse_policy', USER,
                       'pkg_rls_force_hard_parse_rand.force_hard_parse', 'select');
end;
/

alter system flush shared_pool;
You can see the hard-parsing in action by running the same query multiple times:
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
Now there are three entries in GV$SQL for each execution. There's some odd behavior in Virtual Private Database that parses the query multiple times, even though the final text looks the same.
select *
from gv$sql
where sql_text like '%hard_parse_test_rand%'
and sql_text not like '%quine%'
order by 1;
I think there is no hint indicating that Oracle should find a new execution plan every time it runs the query.
This is something we'd want for select * from mytable where is_active = :active, with is_active being 1 for very few rows and 0 for maybe billions of other rows. We'd want an index access for :active = 1 and a full table scan for :active = 0. Two different plans.
As far as I know, Oracle uses bind variable peeking in later versions, so with a look at the statistics it really comes up with different execution plans for different bind variable content. But in older versions it did not, and thus we'd want some hint saying "make a new plan" there.
Oracle only re-uses an execution plan for exactly the same query text; it suffices to add a mere blank to get a new plan. Hence a solution might be to generate the query every time you want to run it, with a random number included in a comment:
select /* 1234567 */ * from mytable where is_active = :active;
Or just don't use bind variables, if this is the problem you want to address:
select * from mytable where is_active = 0;
select * from mytable where is_active = 1;

Parameter for IN query oracle [duplicate]

SELECT * FROM EMPLOYEE
WHERE EMP_NAME IN (:EMP_NAME);
This is my query, and I would like to send the EMP_NAME parameter as a list of strings.
When I run this query in SQL Developer, it asks me to supply EMP_NAME as a parameter. I want to pass 'Kiran','Joshi' (basically, I want to fetch the details of employees named either Kiran or Joshi). How should I pass the value during the execution of the query?
It works when I use the value Kiran alone, but when I concatenate it with any other string it won't work. Any pointers on this?
I tried the one below:
'Kiran','Joshi'
As understood, this doesn't work because it is treated as a single parameter, so the query looks for an employee literally named 'Kiran','Joshi', which won't match. Understandable, but how can I go ahead and achieve this?
Any help would be really appreciated.
Thanks to the people who helped me in solving this problem.
I could get the solution using the way proposed; below is the approach:
SELECT * FROM EMPLOYEE WHERE EMP_NAME IN (&EMP_NAME)
I have tried it this way, and the following are the scenarios which I have tested; they are working fine.
Scenario 1:
To fetch details of only "Kiran": the value of EMP_NAME given at the SQL Developer prompt is Kiran. It worked.
Scenario 2:
To fetch details of either "Kiran" or "Joshi": the value of EMP_NAME is sent as
Kiran','Joshi
It worked in this case also.
Thanks Kedarnath for helping me in achieving the solution :)
An IN clause is implicitly converted into multiple OR conditions, and the limit is 1,000 items. Also, a query with bind variables means the execution plan will be reused; supporting a single bind variable for a whole IN list would undermine that basic usage, so Oracle restricts it at the syntax level.
One way is name in (:1, :2), binding the values individually; for this, you can construct the IN-clause placeholders with dynamic SQL and bind them in a loop, as sketched below.
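A sketch of that loop-binding idea using DBMS_SQL (the employee table is from the question; sys.odcivarchar2list is a built-in VARCHAR2 collection type):
declare
  l_names sys.odcivarchar2list := sys.odcivarchar2list('Kiran', 'Joshi');
  l_sql   varchar2(4000) := 'select emp_name from employee where emp_name in (';
  l_cur   integer;
  l_rc    integer;
  l_name  varchar2(100);
begin
  -- build one placeholder per list element: :b1,:b2,...
  for i in 1 .. l_names.count loop
    l_sql := l_sql || case when i > 1 then ',' end || ':b' || i;
  end loop;
  l_sql := l_sql || ')';
  l_cur := dbms_sql.open_cursor;
  dbms_sql.parse(l_cur, l_sql, dbms_sql.native);
  -- bind every element to its own placeholder
  for i in 1 .. l_names.count loop
    dbms_sql.bind_variable(l_cur, ':b' || i, l_names(i));
  end loop;
  dbms_sql.define_column(l_cur, 1, l_name, 100);
  l_rc := dbms_sql.execute(l_cur);
  while dbms_sql.fetch_rows(l_cur) > 0 loop
    dbms_sql.column_value(l_cur, 1, l_name);
    dbms_output.put_line(l_name);
  end loop;
  dbms_sql.close_cursor(l_cur);
end;
/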
The other way is calling a procedure or function (PL/SQL):
DECLARE
  v_mystring      VARCHAR2(500);
  v_my_ref_cursor SYS_REFCURSOR;
  in_string       VARCHAR2(50) := '''Kiran'',''Joshi''';
  id2             VARCHAR2(10) := '123'; -- if you have some other value to compare
  myrecord        tablename%ROWTYPE;
BEGIN
  v_mystring := 'SELECT a.* from tablename a where name = :id2 and id in (' || in_string || ')';
  OPEN v_my_ref_cursor FOR v_mystring USING id2;
  LOOP
    FETCH v_my_ref_cursor INTO myrecord;
    EXIT WHEN v_my_ref_cursor%NOTFOUND;
    -- your processing
  END LOOP;
  CLOSE v_my_ref_cursor;
END;
An IN clause supports a maximum of 1,000 items. You can always join against a table instead. That table might be a global temporary table (GTT), whose data is visible only to that particular session.
You can also use a nested table for it (like a PL/SQL table): TABLE() presents a PL/SQL collection to the SQL engine as a table-like object.
A simple example of it is below.
CREATE TYPE pr AS OBJECT
(pr NUMBER);
/
CREATE TYPE prList AS TABLE OF pr;
/
declare
myPrList prList := prList ();
cursor lc is
  select *
  from (select a.*
        from yourtable a,
             TABLE(CAST(myPrList AS prList)) my_list
        where a.pr = my_list.pr
        order by a.pr desc);
rec lc%ROWTYPE;
BEGIN
/*Populate the Nested Table, with whatever collection you have */
myPrList := prList ( pr(91),
pr(80));
/*
Sample code: for populating from your TABLE OF NUMBER type
FOR I IN 1..your_input_array.COUNT
LOOP
myPrList.EXTEND;
myPrList(I) := pr(your_input_array(I));
END LOOP;
*/
open lc;
loop
FETCH lc into rec;
exit when lc%NOTFOUND; -- your EXIT WHEN condition should be checked right after the FETCH itself!
dbms_output.put_line(rec.pr);
end loop;
close lc;
END;
/
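For completeness, the global temporary table route mentioned at the start of this answer could look like this (a sketch; the GTT name is invented):
-- one-time DDL: rows are private to the session that inserts them
create global temporary table gtt_emp_names (
  emp_name varchar2(100)
) on commit preserve rows;

-- at runtime: load the list, then join instead of using an IN list
insert into gtt_emp_names values ('Kiran');
insert into gtt_emp_names values ('Joshi');

select e.*
from employee e
join gtt_emp_names g on g.emp_name = e.emp_name;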

ORACLE PL/SQL: Select all and process every record

I would like to have your advice on how to implement this in PL/SQL. Below is what I want to do:
1. select * from table A
2. loop over each record from step 1 and execute the stored procedure: processMe(a.field1, a.field2, a.field3 || 'test', a.field4)
I don't have any idea how to implement something like this. Below is a sample of processMe:
PROCEDURE processMe(
  number_name IN VARCHAR2,
  location    IN VARCHAR2,
  name_test   IN VARCHAR2,
  gender      IN VARCHAR2 )
IS
  obj_Id NUMBER;
  loc_Id NUMBER;
BEGIN
  select objId into obj_Id from tableUser where name = number_name;
  select locId into loc_Id from tableLoc where loc = location;
  insert into tableOther(obj_id, loc_id, name_test, gender)
  values (obj_Id, loc_Id, name_test, gender);
END;
FOR rec IN (SELECT *
FROM table a)
LOOP
processMe( rec.field1,
rec.field2,
rec.field3 || 'test',
rec.field4 );
END LOOP;
does what you ask. You probably want to explicitly list the columns you actually want in the SELECT list rather than doing a SELECT * (particularly if there is an index on the four columns you actually want that could be used rather than doing a table scan, or if there are columns you don't need that contain a large amount of data). Depending on the data volume, it would probably also be more efficient to define a version of processMe that accepts collections rather than processing data on a row-by-row basis.
I just added some processing, but this is just a sample. By the way, why do you say
that using a loop is not a good idea? I'm interested to know.
Performance-wise, if you can avoid looping through a result set and executing some other DML inside the loop, do it.
There is a PL/SQL engine and there is a SQL engine. Every time the PL/SQL engine stumbles upon a SQL statement, whether it's a SELECT, INSERT, or any other DML statement, it has to send it to the SQL engine for execution. This is called context switching. Placing a DML statement inside a loop causes the switch as many times as the loop body executes (once per DML statement, if there is more than one of them), which can be a cause of serious performance degradation. If you do have to loop, say, through a collection, use a FORALL statement; it minimizes context switching by executing DML statements in batches, as sketched below.
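A minimal sketch of the FORALL idea, reusing table and column names from the processMe example above (only obj_id is shown; real code would bulk collect all four columns):
declare
  type t_ids is table of number;
  l_ids t_ids;
begin
  -- one context switch: fetch all the ids at once
  select objId bulk collect into l_ids
  from tableUser;
  -- one more context switch: the whole batch of inserts
  -- goes to the SQL engine together
  forall i in 1 .. l_ids.count
    insert into tableOther (obj_id) values (l_ids(i));
end;
/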
Luckily, your code can be rewritten as a single SQL statement, avoiding the loop entirely:
insert into tableOther(obj_id, loc_id, name_test, gender)
select tu.objId
     , tl.locid
     , concat(a.field3, 'test')
     , a.field4
from table_a a
join tableUser tu
  on (a.field1 = tu.name)
join tableLoc tl
  on (a.field2 = tl.loc)
You can put that insert statement into a procedure if you want. PL/SQL will have to send this SQL statement to the SQL engine anyway, but it will be only one call.
You can use a cursor FOR loop; the loop variable reg is implicitly declared with the cursor's %rowtype. Something like this:
declare
  cursor my_cursor is
    select * from table_a;
begin
  for reg in my_cursor loop
    processMe(reg.field1, reg.field2, reg.field3 || 'test', reg.field4);
  end loop;
end;

Get count of ref cursor in Oracle

I have a procedure which returns a ref cursor as an output parameter. I need a way to get the count of the number of records in the cursor. Currently I fetch the count by repeating the same select query, which is hurting performance.
ex:
create or replace package temp as
  TYPE metacur IS REF CURSOR;
  PROCEDURE prcSumm (
    pStartDate IN DATE,
    pEndDate   IN DATE,
    pKey       IN NUMBER,
    pCursor    OUT metacur
  );
end temp;
/
create or replace package body temp is
  procedure prcSumm (
    pStartDate IN DATE,
    pEndDate   IN DATE,
    pKey       IN NUMBER,
    pCursor    OUT metacur
  )
  is
    vCount NUMBER;
  begin
    vCount := 0;
    select count(*) into vCount
    from customer c, program p, custprog cp
    where c.custno = cp.custno
      and cp.programid = p.programid
      and p.programid = pKey
      and c.lastupdate >= pStartDate
      and c.lastupdate < pEndDate;
    OPEN pCursor for
      SELECT c.custno, p.programid, c.fname, c.lname, c.address1, c.address2, cp.plan
      from customer c, program p, custprog cp
      where c.custno = cp.custno
        and cp.programid = p.programid
        and p.programid = pKey
        and c.lastupdate >= pStartDate
        and c.lastupdate < pEndDate;
  end prcSumm;
end temp;
/
Is there a way to get the number of rows in the out cursor into vCount?
Thanks!
Oracle does not, in general, know how many rows will be fetched from a cursor until the last fetch finds no more rows to return. Since Oracle doesn't know how many rows will be returned, you can't either without fetching all the rows (as you're doing here when you re-run the query).
Unless you are using a single-user system or you are using a non-default transaction isolation level (which would introduce additional complications), there is no guarantee that the number of rows that your cursor will return and the count(*) the second query returns would match. It is entirely possible that another session committed a change between the time that you opened the cursor and the time that you ran the count(*).
If you are really determined to produce an accurate count, you could add a cnt column defined as count(*) over () to the query you're using to open the cursor. Every row in the cursor would then have a column cnt which would tell you the total number of rows that will be returned. Oracle has to do more work to generate the cnt but it's less work than running the same query twice.
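Applied to the procedure above, only the cursor query changes; every row then carries the same total in cnt, and the separate count(*) query can be dropped:
OPEN pCursor for
  SELECT c.custno, p.programid, c.fname, c.lname,
         c.address1, c.address2, cp.plan,
         count(*) over () as cnt
  from customer c, program p, custprog cp
  where c.custno = cp.custno
    and cp.programid = p.programid
    and p.programid = pKey
    and c.lastupdate >= pStartDate
    and c.lastupdate < pEndDate;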
Architecturally, though, it doesn't make sense to return a result and a count from the same piece of code. Determining the count is something that the caller should be responsible for since the caller has to be able to iterate through the results. Every caller should be able to handle the obvious boundary cases (i.e. the query returns 0 rows) without needing a separate count. And every caller should be able to iterate through the results without needing to know how many results there will be. Every single time I've seen someone try to follow the pattern of returning a cursor and a count, the correct answer has been to redesign the procedure and fix whatever error on the caller prompted the design.

Oracle pipelined function cannot access remote table (ORA-12840) when used in a union

I have created a pipelined function which returns a table. I use this function like a dynamic view in another function, in a with clause, to mark certain records. I then use the results from this query in an aggregate query, based on various criteria. What I want to do is union all these aggregations together (as they all use the same source data, but show aggregations at different hierarchical levels).
When I produce the data for individual levels, it works fine. However, when I try to combine them, I get an ORA-12840 error: cannot access a remote table after parallel/insert direct load txn.
(I should note that my function and queries are looking at tables on a remote server, via a DB link).
Any ideas what's going on here?
Here's an idea of the code:
function getMatches(criteria in varchar2) return myTableType pipelined;
...where this function basically executes some dynamic SQL, which references remote tables, as a reference cursor and spits out the results.
Then the factored queries go something like:
with marked as (
select id from table(getMatches('OK'))
),
fullStats as (
select mainTable.id,
avg(nvl2(marked.id, 1, 0)) isMarked,
sum(mainTable.val) total
from mainTable
left join marked
on marked.id = mainTable.id
group by mainTable.id
)
The reason for the first factor is speed -- if I inline it, in the join, the query goes really slowly -- but either way, it doesn't alter the status of whatever's causing the exception.
Then, say for a complete overview, I would do:
select sum(total) grandTotal
from fullStats
...or for an overview by isMarked:
select sum(total) grandTotal
from fullStats
where isMarked = 1
These work fine individually (my pseudocode maybe wrong or overly simplistic, but you get the idea), but as soon as I union all them together, I get the ORA-12840 error :(
EDIT By request, here is an obfuscated version of my function:
function getMatches(
  search in varchar2)
  return idTable pipelined
as
  idRegex   varchar2(20) := '(05|10|20|32)\d{3}';
  searchSQL varchar2(32767);
  type rc is ref cursor;
  cCluster rc;
  rCluster idTrinity;
  BAD_CLUSTER exception;
begin
  if regexp_like(search, '^L\d{3}$') then
    searchSQL := 'select distinct null id1, id2_link id2, id3_link id3 from anotherSchema.linkTable@my.remote.link where id2 = ''' || search || '''';
  elsif regexp_like(search, '^' || idRegex || '(,' || idRegex || ')*$') then
    searchSQL := 'select distinct null id1, id2, id3 from anotherSchema.idTable@my.remote.link where id2 in (' || regexp_replace(search, '(\d{5})', '''\1''') || ')';
  else
    raise BAD_CLUSTER;
  end if;
  open cCluster for searchSQL;
  loop
    fetch cCluster into rCluster;
    exit when cCluster%NOTFOUND;
    pipe row(rCluster);
  end loop;
  close cCluster;
  return;
exception
  when BAD_CLUSTER then
    raise_application_error(-20000, 'Invalid Cluster Search');
    return;
  when others then
    raise_application_error(-20999, 'API' || sqlcode || chr(10) || sqlerrm);
    return;
end getMatches;
It's very simple, designed for an API with limited access to the database, in terms of sophistication (hence passing a comma delimited string as a possible valid argument): If you supply a grouping code, it returns linked IDs (it's a composite, 3-field key); however, if you supply a custom list of codes, it just returns those instead.
I'm on Oracle 10gR2; not sure which version exactly, but I can look it up when I'm back in the office :P
To be honest, I have no idea where the issue comes from, but the simplest way to solve it is to create a temporary table, populate it from your pipelined function, and use that table inside the WITH clause. The temp table does have to be created up front, but I'm pretty sure you also get a serious performance improvement, because dynamic sampling isn't applied to pipelined functions without tricks.
P.S. The issue could also be fixed by with marked as (select /*+ INLINE */ id from table(getMatches('OK'))), but surely that isn't the stuff you're looking for. It does confirm my suspicion, though: a materialized WITH clause does something like an insert /*+ APPEND */ (a direct-path load) internally, which is exactly what ORA-12840 complains about.
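Spelled out against the factored query from the question, the hinted version would be (INLINE is an undocumented hint that stops Oracle from materializing the subquery into a temporary table):
with marked as (
  select /*+ INLINE */ id
  from table(getMatches('OK'))
),
fullStats as (
  select mainTable.id,
         avg(nvl2(marked.id, 1, 0)) isMarked,
         sum(mainTable.val) total
  from mainTable
  left join marked
    on marked.id = mainTable.id
  group by mainTable.id
)
select sum(total) grandTotal
from fullStats;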
