First off, my background is in SQL Server. Using CTEs (Common Table Expressions) is a breeze, and converting a query to a stored procedure with variables doesn't require any changes to the structure of the SQL other than replacing literal values with variable names.
In Oracle PL/SQL, however, it is a completely different matter. My CTEs work fine as straight SQL, but once I try to wrap them in PL/SQL I run into a host of issues. From my understanding, a SELECT inside PL/SQL needs an INTO, which can only hold the result of a single row. However, I want the entire result set of multiple rows.
My apologies if I am missing the obvious here. I'm thinking that 99% of my problem is the paradigm shift I need to make.
Given the following example:
NOTE: I am greatly oversimplifying the SQL here. I do know the example below can be done in a single SQL statement; the actual SQL is much more complex. It's the fundamentals I am looking for.
WITH A AS (SELECT * FROM EMPLOYEES WHERE DEPARTMENT = 200),
     B AS (SELECT * FROM A WHERE EMPLOYEE_START_DATE > DATE '2014-02-01'),
     C AS (SELECT * FROM B WHERE EMPLOYEE_TYPE = 'SALARY')
SELECT 'COUNTS' AS TOTAL,
       (SELECT COUNT(*) FROM A) AS DEPT_TOTAL,
       (SELECT COUNT(*) FROM B) AS NEW_EMPLOYEES,
       (SELECT COUNT(*) FROM C) AS NEW_SALARIED
FROM A
WHERE ROWNUM = 1;
Now if I want to make this into PL/SQL with variables that are passed in or predefined at the top, it's not a simple matter of declaring the variables, popping values into them, and changing my hard-coded values into variables and running it. NOTE: I do know that I can simply change the hard-coded values to variables like :Department, :StartDate, and :Type, but again, I am oversimplifying the example.
There are three issues I am facing here that I am trying to wrap my head around:
1) What would be the best way to rewrite this using PL/SQL with declared variables? The CTEs now have to go INTO something, but then I am dealing with one row at a time as opposed to the entire table. So CTE A is processed a single row at a time, and CTE B will only see that single row rather than the full results of A, and so on. I do know that I will most likely have to use CURSORS to traverse the records, which somehow seems to overcomplicate this.
2) The output now has to use DBMS_OUTPUT. For multiple records, I will have to use a CURSOR with FETCH (or a FOR...LOOP). Yes?
3) Is there going to be a big performance issue with this vs. straight SQL in terms of speed and resources used?
Thanks in advance and again, my apologies if I am missing something really obvious here!
First, this has nothing to do with CTEs. The behavior would be the same with a simple select * from table query. The difference is that in T-SQL, the query goes into an implicit cursor which is returned to the caller. When executing the SP from Management Studio this is convenient: the result set appears in the data window as if we had executed the query directly. But this is actually non-standard behavior. Oracle has the more standard behavior, which might be stated as "the result set of any query that isn't directed into a cursor must be directed into variables." When directed into variables, the query must return only one row.
To duplicate the behavior of T-SQL, you just have to explicitly declare and return the cursor. The calling code then fetches the entire result set from the cursor, one row at a time. You don't get the convenience of SQL Developer or PL/SQL Developer diverting the result set to the data display window, but you can't have everything.
However, as we don't generally write SPs just to be called from the IDE, it is easier to work with Oracle's explicit cursors than SQL Server's implicit ones. Just google "oracle return ref cursor to caller" to get a whole lot of good material.
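For instance, a minimal sketch reusing the EMPLOYEES table from the question (the procedure name is made up):

create or replace procedure get_employees(
  p_department in employees.department%type,
  p_results    out sys_refcursor
) as
begin
  -- Hand the entire result set back to the caller as a ref cursor.
  open p_results for
    select * from employees where department = p_department;
end;
/

The caller then fetches from the cursor, e.g. in SQL*Plus:

variable rc refcursor
exec get_employees(200, :rc)
print rc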
The simplest way is to wrap it in an implicit FOR loop:
begin
  for i in (select object_id, object_name
              from user_objects
             where rownum = 1) loop
    -- Do something with the resultset
    dbms_output.put_line(i.object_id || ' ' || i.object_name);
  end loop;
end;
This is a single-row query, without the need to predefine any variables.
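Applied to the question's example, a sketch might look like this (variable names are mine; note that from dual replaces the FROM A WHERE ROWNUM = 1 trick, since the counts produce exactly one row anyway):

declare
  p_department employees.department%type := 200;
  p_start_date date := date '2014-02-01';
  p_type       employees.employee_type%type := 'SALARY';
begin
  for r in (
    with a as (select * from employees where department = p_department),
         b as (select * from a where employee_start_date > p_start_date),
         c as (select * from b where employee_type = p_type)
    select (select count(*) from a) dept_total,
           (select count(*) from b) new_employees,
           (select count(*) from c) new_salaried
      from dual
  ) loop
    -- r carries a whole row per iteration; no INTO or explicit cursor needed
    dbms_output.put_line(r.dept_total || ' ' || r.new_employees || ' ' || r.new_salaried);
  end loop;
end;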
Related
I have a performance problem.
First PL/SQL block (most of the time it never finishes, and the database's OS process stays above 90%):
DECLARE
  myId nvarchar2(10) := '0;WF21izb0';
BEGIN
  insert into MY_TABLE (select * from MY_VIEW where ID = myId);
END;
Second PL/SQL block (finishes successfully in 50 s):
BEGIN
  insert into MY_TABLE (select * from MY_VIEW where ID = '0;WF21izb0');
END;
select count(*) from MY_VIEW
is also a never-ending call; there are a lot of table joins behind this view.
select count(*) from MY_VIEW where ID = '0;WF21izb0'
ends in 50 s with count = 60000.
Can somebody explain why my first PL/SQL block does not finish in 50 s? What is the difference between using a static string and a declared variable?
It boils down to what the database engine knows about your data and your query when it prepares the execution plan.
When a literal is placed in your query, it is part of the query text, so it is known to the optimizer when the plan is prepared. The optimizer can take that literal value into account and choose a suitable plan, e.g. based on the data statistics (say, that this value is rare).
When you use a PL/SQL variable, the actual query for which the plan is determined is different. It is something like:
insert into MY_TABLE (select * from MY_VIEW where ID = :param)
As you can see, the optimizer now has no information about the value that will be used when the query is executed. The best plan for such a scenario is one that works reasonably well for the most probable values, i.e. the values that are prevalent in the data.
If your data is skewed and the value '0;WF21izb0' is rare (or even non-existent), a selective index can be used to narrow down what needs to be processed early in the critical parts of the plan. That plan will backfire, however, when you use a value that is all over the place: using the index becomes counter-productive, and a better plan may be a full table scan, possibly the same one used when executing select count(*) from MY_VIEW.
If you face a scenario where you do not know the filtering value up front, you will have to analyze the view code and try to adjust it so that it works effectively for less selective values too. You could try applying optimizer hints to the query. You could also give up the view and try your luck with a table function, where you can push the filtering predicates into the exact spots of the query where they are most effective.
Edit:
All in all, follow the advice from the question comments and examine your execution plans and execution profile data. You should be able to find the culprit. The solution may not be obvious from there, but you know your data and relations much better than we do.
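For example, one way to inspect the plan that was actually used (a sketch; the sql_id will differ in your database):

-- find the cursor for the slow insert in the shared pool
select sql_id, child_number, sql_text
  from v$sql
 where lower(sql_text) like '%insert into my_table%my_view%';

-- then display the plan Oracle actually executed
select * from table(dbms_xplan.display_cursor('&sql_id', 0));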
I was checking some traces, but after reading APC's comment and Hilarion's answer I ended up with this solution:
declare
  sql_stmt VARCHAR2(200);
  id       VARCHAR2(10) := '0;WF21izb0';
BEGIN
  sql_stmt := 'insert into MY_TABLE (select * from MY_VIEW where ID = :1)';
  EXECUTE IMMEDIATE sql_stmt USING id;
END;
This finishes in 50 s, and id can now be a function/procedure parameter.
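Wrapped as a procedure, that might look like the sketch below (the procedure name is made up). Note the parameter is declared varchar2, matching the column, whereas the original block bound an nvarchar2, which by itself can change the plan through implicit character-set conversion:

create or replace procedure copy_view_rows(p_id in varchar2) as
begin
  execute immediate
    'insert into MY_TABLE (select * from MY_VIEW where ID = :1)'
    using p_id;
end;
/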
Thanks for the comments.
First, I want to make it clear that the question is not about the materialized views feature.
Suppose I have a table function that returns a pre-defined set of columns.
When a function call is submitted as
SELECT col1, col2, col3
FROM TABLE(my_tfn(:p1))
WHERE col4 = 'X';
I can evaluate the parameter and choose what queries to execute.
I can either open one of the pre-defined cursors, or I can assemble my query dynamically.
What if instead of evaluating the parameter I want to evaluate the text of the requesting query?
For example, if my function returns 20 columns but the query is only requesting 4,
I can assign NULLs to the remaining 16 columns of the return type and execute fewer joins.
Or I can push the filter down to my dynamic query.
Is there a way to make this happen?
More generally, is there a way to look at the requesting query before executing the function?
There is no robust way to identify the SQL that called a PL/SQL object.
Below is a not-so-robust way to identify the calling SQL. I've used code like this before, but only in special circumstances where I knew that the PL/SQL would never run concurrently.
This seems like it should be so simple. The data dictionary tracks all sessions and running SQL. You can find the current session with sys_context('userenv', 'sid'), match that to GV$SESSION, and then get either SQL_ID or PREV_SQL_ID. But neither of those contains the calling SQL. There's even a CURRENT_SQL in SYS_CONTEXT, but it's only for fine-grained auditing.
Instead, the calling SQL must be found by a string search. Using a unique name for the PL/SQL object helps filter out unrelated statements. To prevent re-reading old statements, each SQL statement must be purged from the shared pool as soon as it is found. This can lead to race conditions, so this approach will only work if it's never called concurrently.
--Create simple test type for function.
create or replace type clob_table is table of clob;
--Table function that returns the SQL that called it.
--This requires elevated privileges to run.
--To simplify the code, run this as SYS:
-- "grant execute on sys.dbms_shared_pool to your_user;"
--(If you don't want to do that, convert this to invoker's rights and use dynamic SQL.)
create or replace function my_tfn return clob_table is
  v_my_type clob_table;
  type string_table is table of varchar2(4000);
  v_addresses   string_table;
  v_hash_values string_table;
begin
  --Get calling SQL based on the SQL text.
  select sql_fulltext, address, hash_value
  bulk collect into v_my_type, v_addresses, v_hash_values
  from gv$sql
  --Make sure there is something unique in the query.
  where sql_fulltext like '%my_tfn%'
  --But don't include this query!
  --(Normally creating a quine is a challenge, but in V$SQL it's more of
  -- a challenge to avoid quines.)
  and sql_fulltext not like '%quine%';

  --Flush the SQL statements immediately, so they won't show up in the next run.
  for i in 1 .. v_addresses.count loop
    sys.dbms_shared_pool.purge(v_addresses(i)||', '||v_hash_values(i), 'C');
  end loop;

  --Return the SQL statement(s).
  return v_my_type;
end;
/
Now queries like these will return themselves, demonstrating that the PL/SQL code was reading the SQL that called it:
SELECT * FROM TABLE(my_tfn) where 1=1;
SELECT * FROM TABLE(my_tfn) where 2=2;
But even if you go through all this trouble, what are you going to do with the results? Parsing SQL is insanely difficult unless you can ensure that everyone always follows strict syntax rules.
I am writing a function similar to Tom Kyte's print_table, but with more output options and better formatting, etc. It struck me that writing a function where you could enter arbitrary SQL with AUTHID CURRENT_USER, rather than passing a ref cursor, was pretty dangerous! However, I need to know the column attributes and, AFAIK, there's no other way of getting them.
Is there a way, therefore, of restricting the dbms_sql.execute function so it only operates on select queries? Or, put another way, is there a way to check the type of DML being parsed through the cursor and then, for example, raise an exception if it's anything other than a select?
Textual analysis of the query (e.g., allowing select or with, but nothing else) wouldn't work, because you could just do print_table('select * from dual; drop someTable'); etc...
(P.S., Oracle 10gR2, if it makes a difference.)
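One hedged sketch of a guard: instead of handing the string straight to dbms_sql.parse (which executes DDL at parse time, so checking after parsing is already too late), open it with OPEN ... FOR, which accepts only queries, and then convert the ref cursor to a DBMS_SQL cursor to read the column attributes. The catch: the conversion uses DBMS_SQL.TO_CURSOR_NUMBER, which only arrived in 11g, so it would not help on 10gR2:

declare
  l_rc   sys_refcursor;
  l_cur  integer;
  l_cnt  integer;
  l_desc dbms_sql.desc_tab;
begin
  -- OPEN ... FOR raises an error for anything other than a query,
  -- without executing the statement.
  open l_rc for 'select * from dual';
  -- Convert to a DBMS_SQL cursor to describe the columns (11g and later only).
  l_cur := dbms_sql.to_cursor_number(l_rc);
  dbms_sql.describe_columns(l_cur, l_cnt, l_desc);
  dbms_output.put_line('column count: ' || l_cnt);
  dbms_sql.close_cursor(l_cur);
end;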
Scenario: we have flashback set up on certain tables in an Oracle database. Every now and then, we want to see which fields changed from one row to the next. We can inspect them visually, of course, but that is error-prone.
So I had the "brilliant" idea to try to step through the rows, store the current record into one record variable, and the prior record into another one. Then, field-by-field, compare each field, and if different, print out the field name and the values. Something like this:
DECLARE
  CURSOR myflash IS SELECT * FROM myflashtable;
  OLDRECORD myflashtable%ROWTYPE;
  NEWRECORD myflashtable%ROWTYPE;
  dynamic_statement varchar2(4000);
  CURSOR colnames IS
    SELECT * FROM all_tab_columns WHERE table_name = 'MYFLASHTABLE';
BEGIN
  IF NOT myflash%ISOPEN THEN
    OPEN myflash;
  END IF;
  FETCH myflash INTO NEWRECORD;
  WHILE myflash%FOUND LOOP
    FOR columnnames IN colnames LOOP
      /* cobble together dynamic SQL along the lines of
         "if oldrecord.column_name != newrecord.column_name
          then print some information ... end if;" */
      EXECUTE IMMEDIATE dynamic_statement;
    END LOOP;
    OLDRECORD := NEWRECORD;
    FETCH myflash INTO NEWRECORD;
  END LOOP;
END;
Naturally this didn't work. Initially it gave me "invalid SQL statement", so I added begin/end to the dynamic SQL. When I ran that version, it gave me an error because the dynamic SQL doesn't know about the old/new record variables. When I run it without the execute, just dumping the generated SQL, it steps through all the columns of each record, so that part of the logic is working.
I'm quite sure there's a better way to do this, or perhaps to make it work. One thought was to do something like declaring old/new value variables, then using dynamic SQL to move the old/new record fields to each of those:
EXECUTE IMMEDIATE 'oldvalue := OLDRECORD.'||columnnames.column_name;
EXECUTE IMMEDIATE 'newvalue := NEWRECORD.'||columnnames.column_name;
IF oldvalue != newvalue then
/* print some stuff */
END IF;
but of course the trick is that the target variable would have to handle columns of a bunch of different types (char, date, etc.). So there would need to be variants of the old/new value variables, plus logic to handle that, and it was turning into not-so-much fun.
Any suggestions for a more elegant way to do this? I've checked around the site and haven't had much luck finding anything that quite matches what I'm trying to do.
You are on the right track, but it will take quite a bit more programming work. Read the old and new rows in a join, linked on the correct primary key, and loop through the result. You can use the DBMS_SQL package to build a dynamic cursor and loop through the tables, as in the sketch below.
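A sketch of that approach applied to the original single-table version (the table name and the order by are placeholders; LONG and LOB columns would need extra handling). Defining every column as varchar2 sidesteps the mixed-types problem, because DBMS_SQL converts the values on fetch:

declare
  c         integer := dbms_sql.open_cursor;
  n_cols    integer;
  col_desc  dbms_sql.desc_tab;
  buf       varchar2(4000);
  type row_t is table of varchar2(4000) index by pls_integer;
  prev_row  row_t;
  cur_row   row_t;
  first_row boolean := true;
  ignore    integer;
begin
  dbms_sql.parse(c, 'select * from myflashtable order by 1', dbms_sql.native);
  dbms_sql.describe_columns(c, n_cols, col_desc);
  for i in 1 .. n_cols loop
    dbms_sql.define_column(c, i, buf, 4000); -- fetch every column as text
  end loop;
  ignore := dbms_sql.execute(c);
  while dbms_sql.fetch_rows(c) > 0 loop
    for i in 1 .. n_cols loop
      dbms_sql.column_value(c, i, cur_row(i));
    end loop;
    if not first_row then
      for i in 1 .. n_cols loop
        -- report any column whose value changed, including to or from null
        if (cur_row(i) is null and prev_row(i) is not null)
           or (cur_row(i) is not null and prev_row(i) is null)
           or cur_row(i) != prev_row(i)
        then
          dbms_output.put_line(col_desc(i).col_name || ': ' ||
                               prev_row(i) || ' -> ' || cur_row(i));
        end if;
      end loop;
    end if;
    prev_row  := cur_row;
    first_row := false;
  end loop;
  dbms_sql.close_cursor(c);
end;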
I have many PL/SQL functions and procedures that execute dynamic SQL.
Is it possible to extract the parsed statements and print them with dbms_output as a debugging aid?
What I really want is to see the final SQL (the statement with its bind values substituted).
Example:
I have a dynamic SQL statement like this
SQ:='SELECT :pComno as COMNO,null t$CPLS,t$CUNO,t$cpgs,t$stdt,t$tdat,t$qanp,t$disc,:cS Source FROM BAAN.TTDSLS031'||PCOMNO --1
|| ' WHERE' ||' TRIM(T$CUNO)=trim(:CUNO)' --2
|| ' AND TRIM(T$CPGS)=trim(:CPGS)' --3
|| ' AND T$QANP = priceWorx.fnDefaultQanp ' --4
|| ' AND priceWorx.fdG2J(sysdate) between priceWorx.fdG2J(t$stdt) and priceWorx.fdG2J(t$tdat)' --5
|| ' AND rownum=1 order by t$stdt';--6
execute immediate SQ into R using
PCOMNO,'C' --1
,PCUNO-- 2
,PCPGS;-- 3
What statement will be sent to the server?
You can display the bind variables associated with a SQL statement like this:
select v$sql.sql_text
,v$sql_bind_capture.*
from v$sql_bind_capture
inner join v$sql on
v$sql_bind_capture.hash_value = v$sql.hash_value
and v$sql_bind_capture.child_address = v$sql.child_address
--Some unique string from your query
where lower(sql_text) like lower('%priceWorx.fdG2J(sysdate)%');
You probably would like to see the entire query, with all the bind variables replaced by their actual values. Unfortunately, there's no easy way to get exactly what you're looking for, because of the following issues.
V$SQL_BIND_CAPTURE doesn't store all of the bind variable information. The biggest limitation is that it only displays data "when the bind variable is used in the WHERE or HAVING clauses of the SQL statement."
Matching the bind variable names from the bind capture data to the query is incredibly difficult. It's easy to get it working 99% of the time, but that last 1% requires a SQL and PL/SQL parser, which is basically impossible.
SQL will age out of the pool. For example, if you gather stats on one of the relevant tables, it may invalidate all queries that use that table. You can't always trust V$SQL to have your query.
Which means you're probably stuck doing it the ugly way: you need to manually store the SQL and the bind variable data, similar to what user1138658 is doing. A sketch of such a logger follows.
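For example, a minimal logging sketch (the table and procedure names are made up; the autonomous transaction keeps the log rows even if the business transaction rolls back):

create table sql_debug_log (
  logged_at timestamp default systimestamp,
  sql_text  clob,
  bind_info varchar2(4000)
);

create or replace procedure log_sql(p_sql in clob, p_binds in varchar2) as
  pragma autonomous_transaction;
begin
  insert into sql_debug_log (sql_text, bind_info) values (p_sql, p_binds);
  commit;
end;
/

Then call log_sql(SQ, 'PCOMNO=' || PCOMNO || ' CUNO=' || PCUNO || ' CPGS=' || PCPGS); right before each execute immediate.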
You can do this with the dbms_output package. You can enable and disable the debug output, and read the buffered lines back with the get_line procedure.
I tested this with execute immediate inserting into a table, and it works.
I recently answered another question with an example of using this.
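A sketch of reading the buffer back with get_line (this assumes the traced code prints its statement with dbms_output.put_line first):

declare
  l_line   varchar2(32767);
  l_status integer;
begin
  dbms_output.enable(null);  -- null means an unlimited buffer
  -- ... run the procedure that prints its dynamic SQL here ...
  loop
    dbms_output.get_line(l_line, l_status);
    exit when l_status <> 0; -- status 1 means the buffer is empty
    null;                    -- process l_line, e.g. write it to a log table
  end loop;
end;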
One possible solution is to create a table temp(id varchar2(30), data clob) in your schema, along with a sequence seq, and then add an insert statement wherever you want to capture the parsed statement:
insert into temp values(seq.nextval,v_text);
For example
declare
  v_text varchar2(2000);
  c_cur  sys_refcursor;
begin
  v_text := 'select * from emp'; -- your dynamic statement
  insert into temp values(seq.nextval, v_text); -- insert this wherever you want to capture the actual query
  open c_cur for v_text;
  -----
end;
Now if you query the temp table, you'll see the text of that dynamic statement.