I have many PL/SQL functions and procedures that execute dynamic SQL.
Is it possible to extract the parsed statements and dbms_output as a debugging aid?
What I really want is to see the parsed sql (sql statement with substituted parameters).
Example:
I have a dynamic SQL statement like this:
SQ := 'SELECT :pComno as COMNO, null t$CPLS, t$CUNO, t$cpgs, t$stdt, t$tdat, t$qanp, t$disc, :cS Source FROM BAAN.TTDSLS031' || PCOMNO --1
   || ' WHERE TRIM(T$CUNO) = trim(:CUNO)' --2
   || ' AND TRIM(T$CPGS) = trim(:CPGS)' --3
   || ' AND T$QANP = priceWorx.fnDefaultQanp' --4
   || ' AND priceWorx.fdG2J(sysdate) between priceWorx.fdG2J(t$stdt) and priceWorx.fdG2J(t$tdat)' --5
   || ' AND rownum = 1 order by t$stdt'; --6

execute immediate SQ into R using
   PCOMNO, 'C' --1
   , PCUNO --2
   , PCPGS; --3
What will be the statement sent to the server ?
You can display the bind variables associated with a SQL statement like this:
select v$sql.sql_text
,v$sql_bind_capture.*
from v$sql_bind_capture
inner join v$sql on
v$sql_bind_capture.hash_value = v$sql.hash_value
and v$sql_bind_capture.child_address = v$sql.child_address
--Some unique string from your query
where lower(sql_text) like lower('%priceWorx.fdG2J(sysdate)%');
You probably would like to see the entire query, with all the bind variables replaced by their actual values. Unfortunately, there's no easy way to get exactly what you're looking for, because of the following issues.
V$SQL_BIND_CAPTURE doesn't store all of the bind variable information. The biggest limitation is that it only displays data "when the bind variable is used in the WHERE or HAVING clauses of the SQL statement."
Matching the bind variable names from the bind capture data to the query is incredibly difficult. It's easy to get it working 99% of the time, but that last 1% requires a SQL and PL/SQL parser, which is basically impossible.
SQL will age out of the pool. For example, if you gather stats on one of the relevant tables, it may invalidate all queries that use that table. You can't always trust V$SQL to have your query.
Which means you're probably stuck doing it the ugly way. You need to manually store the SQL and the bind variable data, similar to what user1138658 is doing.
You can do this with the dbms_output package: enable and disable the output, and retrieve the lines with the get_line procedure.
I tested this with execute immediate, inserting into a table, and it works.
I recently answered another question with an example of using this.
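For example, here is a minimal sketch (the query, bind name, and values below are made up) that prints the dynamic statement and its bind values before executing it; run it with set serveroutput on in SQL*Plus or SQL Developer so the lines are displayed:
declare
  sq     varchar2(4000);
  r      number;
  p_name varchar2(30) := 'DUAL';  -- stand-in bind value
begin
  sq := 'select count(*) from all_objects where object_name = :nm';
  -- log the statement and its bind values as a debugging aid
  dbms_output.put_line('SQL : ' || sq);
  dbms_output.put_line('Bind :nm = ' || p_name);
  execute immediate sq into r using p_name;
  dbms_output.put_line('Result = ' || r);
end;
/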
One possible solution for this is to create a table temp(id number, data clob) in your schema and then put an insert statement wherever you want to capture the parsed query:
insert into temp values(seq.nextval, v_text);
For example
declare
c_cur sys_refcursor;
v_text varchar2(2000);
begin
v_text := 'select * from emp'; -- your dynamic statement
insert into temp values(seq.nextval, v_text); -- add this insert wherever you want to capture the actual query
open c_cur for v_text;
-- ... rest of the processing ...
close c_cur;
end;
Now if you query the temp table, you'll see the text of that dynamic statement.
First, I want to make it clear that the question is not about the materialized views feature.
Suppose, I have a table function that returns a pre-defined set of columns.
When a function call is submitted as
SELECT col1, col2, col3
FROM TABLE(my_tfn(:p1))
WHERE col4 = 'X';
I can evaluate the parameter and choose what queries to execute.
I can either open one of the pre-defined cursors, or I can assemble my query dynamically.
What if instead of evaluating the parameter I want to evaluate the text of the requesting query?
For example, if my function returns 20 columns but the query is only requesting 4,
I can assign NULLs to the remaining 16 columns of the return type, and execute fewer joins.
Or I can push the filter down to my dynamic query.
Is there a way to make this happen?
More generally, is there a way to look at the requesting query before executing the function?
There is no robust way to identify the SQL that called a PL/SQL object.
Below is a not-so-robust way to identify the calling SQL. I've used code like this before, but only in special circumstances where I knew that the PL/SQL would never run concurrently.
This seems like it should be so simple. The data dictionary tracks all sessions and running SQL. You can find the current session with sys_context('userenv', 'sid'), match that to GV$SESSION, and then get either SQL_ID or PREV_SQL_ID. But neither of those contains the calling SQL. There's even a CURRENT_SQL in SYS_CONTEXT, but it's only for fine-grained auditing.
Instead, the calling SQL must be found by a string search. Using a unique name for the PL/SQL object will help filter out unrelated statements. To prevent re-running for old statements, the SQL must be individually purged from the shared pool as soon as it is found. This could lead to race conditions so this approach will only work if it's never called concurrently.
--Create simple test type for function.
create or replace type clob_table is table of clob;
--Table function that returns the SQL that called it.
--This requires elevated privileges to run.
--To simplify the code, run this as SYS:
-- "grant execute on sys.dbms_shared_pool to your_user;"
--(If you don't want to do that, convert this to invoker's rights and use dynamic SQL.)
create or replace function my_tfn return clob_table is
v_my_type clob_table;
type string_table is table of varchar2(4000);
v_addresses string_table;
v_hash_values string_table;
begin
--Get calling SQL based on the SQL text.
select sql_fulltext, address, hash_value
bulk collect into v_my_type, v_addresses, v_hash_values
from gv$sql
--Make sure there is something unique in the query.
where sql_fulltext like '%my_tfn%'
--But don't include this query!
--(Normally creating a quine is a challenge, but in V$SQL it's more of
-- a challenge to avoid quines.)
and sql_fulltext not like '%quine%';
--Flush the SQL statements immediately, so they won't show up in next run.
for i in 1 .. v_addresses.count loop
sys.dbms_shared_pool.purge(v_addresses(i)||', '||v_hash_values(i), 'C');
end loop;
--Return the SQL statement(s).
return v_my_type;
end;
/
Now queries like these will return themselves, demonstrating that the PL/SQL code was reading the SQL that called it:
SELECT * FROM TABLE(my_tfn) where 1=1;
SELECT * FROM TABLE(my_tfn) where 2=2;
But even if you go through all this trouble - what are you going to do with the results? Parsing SQL is insanely difficult unless you can ensure that everyone always follows strict syntax rules.
First off, my background is in SQL Server. Using CTEs (Common Table Expressions) there is a breeze, and converting one to a stored procedure with variables doesn't require any changes to the structure of the SQL other than replacing literal values with variable names.
In Oracle PL/SQL, however, it is a completely different matter. My CTEs work fine as straight SQL, but once I try to wrap them in PL/SQL I run into a host of issues. From my understanding, a SELECT now needs an INTO, which will only hold the results of a single record. However, I want the entire result set of multiple rows.
My apologies if I am missing the obvious here. I'm thinking that 99% of my problem is the paradigm shift I need to make.
Given the following example:
NOTE: I am greatly oversimplifying the SQL here. I do know the below example can be done in a single SQL statement. The actual SQL is much more complex; it's the fundamentals I am looking for here.
WITH A as (SELECT * FROM EMPLOYEES WHERE DEPARTMENT = 200),
B as (SELECT * FROM A WHERE EMPLOYEE_START_DATE > date '2014-02-01'),
C as (SELECT * FROM B WHERE EMPLOYEE_TYPE = 'SALARY')
SELECT 'COUNTS' as Total,
       (SELECT COUNT(*) FROM A) as DEPT_TOTAL,
       (SELECT COUNT(*) FROM B) as NEW_EMPLOYEES,
       (SELECT COUNT(*) FROM C) as NEW_SALARIED
FROM A
WHERE rownum = 1;
Now if I want to make this into PL/SQL with variables that are passed in or predefined at the top, it's not a simple matter of declaring the variables, popping values into them, and changing my hard-coded values into variables and running it. NOTE: I do know that I can simply change the hard-coded values to variables like :Department, :StartDate, and :Type, but again, I am oversimplifying the example.
There are three issues I am facing here that I am trying to wrap my head around:
1) What would be the best way to rewrite this using PL/SQL with declared variables? The CTEs now have to go INTO something. But then I am dealing with one row at a time as opposed to the entire table. So CTE 'A' is a single row at a time, and CTE B will only see the single row as opposed to all of the data results of A, etc. I do know that I will most likely have to use CURSORS to traverse the records, which somehow seems to over complicate this.
2) The output now has to use DBMS_OUTPUT. For multiple records, I will have to use a CURSOR with FETCH (or a FOR...LOOP). Yes?
3) Is there going to be a big performance issue with this vs. straight SQL in terms of speed and resources used?
Thanks in advance and again, my apologies if I am missing something really obvious here!
First, this has nothing to do with CTEs. This behavior would be the same with a simple select * from table query. The difference is that with T-SQL, the query goes into an implicit cursor which is returned to the caller. When executing the SP from Management Studio this is convenient: the result set appears in the data window as if we had executed the query directly. But this is actually non-standard behavior. Oracle has the more standard behavior, which might be stated as "the result set of any query that isn't directed into a cursor must be directed into variables" - and when directed into variables, the query must return only one row.
To duplicate the behavior of T-SQL, you just have to explicitly declare and return the cursor. The calling code then fetches the entire result set from the cursor, one row at a time. You don't get the convenience of SQL Developer or PL/SQL Developer diverting the result set to the data display window, but you can't have everything.
However, as we don't generally write SPs just to be called from the IDE, it is easier to work with Oracle's explicit cursors than SQL Server's implicit ones. Just google "oracle return ref cursor to caller" to get a whole lot of good material.
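For example, here is a minimal sketch of that ref-cursor pattern, reusing the EMPLOYEES table from the question (the procedure name is made up); the caller shown uses SQL*Plus / SQL Developer script syntax:
create or replace procedure get_new_salaried (
  p_department in number,
  p_start_date in date,
  p_result     out sys_refcursor
) as
begin
  -- open the cursor and hand it back; the caller does the fetching
  open p_result for
    select *
    from employees
    where department = p_department
    and employee_start_date > p_start_date
    and employee_type = 'SALARY';
end;
/

-- caller:
variable rc refcursor
exec get_new_salaried(200, date '2014-02-01', :rc)
print rc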
The simplest way is to wrap it in an implicit FOR loop:
begin
for i in (select object_id, object_name
from user_objects
where rownum = 1) loop
-- Do something with the resultset
dbms_output.put_line (i.object_id || ' ' || i.object_name);
end loop;
end;
That gives you the single-row query without the need to predefine the variables.
Here's a piece of Oracle code I'm trying to adapt. I've abbreviated all the details:
declare
begin
loop
--do stuff to populate a global temporary table. I'll call it 'TempTable'
end loop;
end;
/
Select * from TempTable
Right now, this query runs fine provided I run it in two steps. First I run the program at the top, then I run the select * to get the results.
Is it possible to combine the two pieces so that I can populate the global temp table and retrieve the results all in one step?
Thanks in advance!
Well, for me it depends on what you count as a step. You are running a PL/SQL block and a SQL statement. I would rather type both into a file and run that in one command (if that counts as a single step for you)...
Something like
file.sql
begin
loop
--do stuff to populate a global temporary table. I'll call it 'TempTable'
end loop;
end;
/
Select *
from TempTable
/
And run it as:
prompt> sqlplus /@db @file.sql
If you give us more details like how you populate the GTT, perhaps we might find a way to do it in a single step.
Yes, but it's not trivial.
create global temporary table my_gtt
( ... )
on commit preserve rows;
create or replace type my_gtt_rowtype as object
( [columns definition] )
/
create or replace type my_gtt_tabtype as table of my_gtt_rowtype
/
create or replace function pipe_rows_from_gtt
return my_gtt_tabtype
pipelined
is
pragma autonomous_transaction;
type rc_type is ref cursor;
my_rc rc_type;
my_output_rec my_gtt_rowtype := my_gtt_rowtype ([nulls for each attribute]);
begin
delete from my_gtt;
insert into my_gtt ...
commit;
open my_rc for select * from my_gtt;
loop
fetch my_rc into my_output_rec.attribute1, my_output_rec.attribute2, etc;
exit when my_rc%notfound;
pipe row (my_output_rec);
end loop;
close my_rc;
return;
end;
/
I don't know if the autonomous transaction pragma is required - but I suspect it is, otherwise it'll throw errors about functions performing DML.
We use code like this to have reporting engines which can't perform procedural logic build the global temporary tables they use (and reuse) in various subreports.
In Oracle, an extra table to store intermediate results is very seldom needed. It might make things easier to understand, but when you are able to write SQL to fill the intermediate table, you can certainly query the rows in a single step without wasting time filling a GTT. If you are using PL/SQL to populate the GTT, see if that can be rewritten as pure SQL; that will almost certainly give you a performance benefit.
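For instance, if the PL/SQL loop only filters or aggregates rows before inserting them into the GTT, one statement can usually replace both steps. A rough sketch (some_source and its columns are made up):
-- instead of: loop over some_source in PL/SQL, insert into TempTable, then SELECT * FROM TempTable
with temp_rows as (
  select s.id, s.amount
  from some_source s
  where s.status = 'OPEN'  -- whatever the loop was checking per row
)
select id, sum(amount) as total_amount
from temp_rows
group by id;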
I have a messy stored procedure which uses dynamic SQL.
I can debug it at runtime by adding print @sql, where @sql is the string containing the dynamic SQL, right before I call execute (@sql).
Now, the multi-page stored procedure also creates dynamic tables and uses them in a query. I want to print those tables to the console right before I do an execute, so that I know exactly what the query is trying to do.
However, SQL Server 2008 does not like that. When I try
print #temp_table; and compile the stored procedure, I get this error:
The name "#temp_table" is not permitted in this context. Valid expressions are constants, constant expressions, and (in some contexts) variables. Column names are not permitted.
Please help.
EDIT:
I am a noob when it comes to SQL. However, the following statement: select * from #tbl; does not print anything to the console when it is run non-interactively; the print statement works though.
The following statement has incorrect syntax: print select * from #tbl;. Is there a way for me to redirect the output of select to a file, if stdout is not an option?
Thanks.
When we use dynamic SQL, we start by adding a debug input parameter to the stored procedure. Make it the last parameter and give it a default value of 0 (meaning not in debug mode), so it won't break existing code calling the proc.
Now when you run it in debug mode, you either print instead of executing, or you print and execute but always roll back at the end. If you need to see the data at various stages, the second option is best. Then, before you roll back, put the data you want to see into a table variable (this is important: it can't be a temp table). After the rollback, select from the table variable (whose contents survive the rollback) and run your print statements to see the queries that were run.
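A rough T-SQL sketch of that pattern (the procedure, table, and column names are only illustrative); note that the table variable keeps its rows after the ROLLBACK, unlike a temp table:
CREATE PROCEDURE dbo.usp_update_something
    @SomeKey int,
    @Debug bit = 0   -- last parameter, default 0, so existing callers are unaffected
AS
BEGIN
    DECLARE @sql nvarchar(max);
    SET @sql = N'UPDATE dbo.SomeTable SET SomeCol = SomeCol + 1 WHERE SomeKey = '
               + CAST(@SomeKey AS nvarchar(20)) + N';';

    DECLARE @snapshot TABLE (SomeKey int, SomeCol int);

    IF @Debug = 1
    BEGIN
        BEGIN TRANSACTION;
        EXEC (@sql);

        -- capture the intermediate state into a table variable before rolling back
        INSERT INTO @snapshot (SomeKey, SomeCol)
        SELECT SomeKey, SomeCol FROM dbo.SomeTable WHERE SomeKey = @SomeKey;

        ROLLBACK TRANSACTION;

        SELECT * FROM @snapshot;   -- table variables keep their rows after ROLLBACK
        PRINT @sql;                -- show exactly what was executed
    END
    ELSE
        EXEC (@sql);
END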
When debugging this way, the only way you'll get output is:
select * from #temp_table;
Alternatively, look into the debugging features built into SQL Server Management Studio. For example, this web page may help you
SQL Server Performance.com
You can print Variables, but not tables. You can, however, SELECT from the #table.
Now, if the table is created, filled up and modified in a single statement that is executed, then you can view the state of the table as it was before being modified, but the data will have changed since.
Of course, as soon as the dynamic SQL finishes, the #table is no longer available, so you're stuck!
To counter that, you can insert into a ##table (note the double hash marks) in your dynamic SQL along with the #table, and then query that ##table at the end of execution of the dynamic SQL.
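A quick sketch of that idea (the table names are made up; sys.objects is just a convenient row source):
DECLARE @sql nvarchar(max) = N'
    CREATE TABLE #work (id int, name sysname);
    INSERT INTO #work (id, name)
    SELECT object_id, name FROM sys.objects;

    -- copy the local temp table into a global temp table that outlives this EXEC
    SELECT * INTO ##work_snapshot FROM #work;';

EXEC (@sql);

-- back in the calling scope: #work is gone, but ##work_snapshot is still visible
SELECT * FROM ##work_snapshot;
DROP TABLE ##work_snapshot;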
As much as I hate cursors, give this a try:
SET NOCOUNT ON
CREATE TABLE #TempTable1
(ColumnInt int
,ColumnVarchar varchar(50)
,ColumnDatetime datetime
)
INSERT INTO #TempTable1 VALUES (1,'A',GETDATE())
INSERT INTO #TempTable1 VALUES (12345,'abcdefghijklmnop','1/1/2010')
INSERT INTO #TempTable1 VALUES (null,null,null)
INSERT INTO #TempTable1 VALUES (445454,null,getdate())
SET NOCOUNT OFF
DECLARE @F_ColumnInt int
,@F_ColumnVarchar varchar(50)
,@F_ColumnDatetime datetime
DECLARE CursorTempTable1 CURSOR FOR
SELECT
ColumnInt, ColumnVarchar, ColumnDatetime
FROM #TempTable1
ORDER BY ColumnInt
FOR READ ONLY
--populate and allocate resources to the cursor
OPEN CursorTempTable1
PRINT '#TempTable1 contents:'
PRINT ' '+REPLICATE('-',20)
+' '+REPLICATE('-',50)
+' '+REPLICATE('-',23)
--process each row
WHILE 1=1
BEGIN
FETCH NEXT FROM CursorTempTable1
INTO @F_ColumnInt, @F_ColumnVarchar, @F_ColumnDatetime
--finished fetching all rows?
IF @@FETCH_STATUS <> 0
BEGIN --YES, all done fetching
--exit the loop
BREAK
END --IF finished fetching
PRINT ' '+RIGHT( REPLICATE(' ',20) + COALESCE(CONVERT(varchar(20),@F_ColumnInt),'null') ,20)
+' '+LEFT( COALESCE(@F_ColumnVarchar,'null') + REPLICATE(' ',50) ,50)
+' '+LEFT( COALESCE(CONVERT(char(23),@F_ColumnDatetime,121),'null') + REPLICATE(' ',23) ,23)
END --WHILE
--close and free the cursor's resources
CLOSE CursorTempTable1
DEALLOCATE CursorTempTable1
OUTPUT:
#TempTable1 contents:
-------------------- -------------------------------------------------- -----------------------
null null null
1 A 2010-03-18 13:28:24.260
12345 abcdefghijklmnop 2010-01-01 00:00:00.000
445454 null 2010-03-18 13:28:24.260
If I knew that your temp table had a PK, I'd give a cursor free loop example.
This question is similar to a couple of others I have found on StackOverflow, but the differences are significant enough to me to warrant a new question, so here it is:
I want to obtain a result set from dynamic SQL in Oracle and then display it as a result set in a SqlDeveloper-like tool, just as if I had executed the dynamic SQL statement directly. This is straightforward in SQL Server, so to be concrete, here is an example from SQL Server that returns a result set in SQL Server Management Studio or Query Explorer:
EXEC sp_executesql N'select * from countries'
Or more properly:
DECLARE @stmt nvarchar(100)
SET @stmt = N'select * from countries'
EXEC sp_executesql @stmt
The question "How to return a resultset / cursor from a Oracle PL/SQL anonymous block that executes Dynamic SQL?" addresses the first half of the problem--executing dynamic SQL into a cursor. The question "How to make Oracle procedure return result sets" provides a similar answer. Web search has revealed many variations of the same theme, all addressing just the first half of my question. I found this post explaining how to do it in SqlDeveloper, but that uses a bit of functionality of SqlDeveloper. I am actually using a custom query tool so I need the solution to be self-contained in the SQL code. This custom query tool similarly does not have the capability to show output of print (dbms_output.put_line) statements; it only displays result sets. Here is yet one more possible avenue using 'execute immediate...bulk collect', but this example again renders the results with a loop of dbms_output.put_line statements. This link attempts to address the topic but the question never quite got answered there either.
Assuming this is possible, I will add one more condition: I would like to do this without having to define a function or procedure (due to limited DB permissions). That is, I would like to execute a self-contained PL/SQL block containing dynamic SQL and return a result set in SqlDeveloper or a similar tool.
So to summarize:
I want to execute an arbitrary SQL statement (hence dynamic SQL).
The platform is Oracle.
The solution must be a PL/SQL block with no procedures or functions.
The output must be generated as a canonical result set; no print statements.
The output must render as a result set in SqlDeveloper without using any SqlDeveloper special functionality.
Any suggestions?
The closest thing I could think of is to create a view dynamically (which does require the CREATE VIEW privilege). This involves only a PL/SQL block and a SQL query, with no procedure or function. Any dynamic query can then be viewed in the result grid, since it ultimately runs as a plain select.
DEFINE view_name = 'my_results_view';
SET FEEDBACK OFF
SET ECHO OFF
DECLARE
l_view_name VARCHAR2(40) := '&view_name';
l_query VARCHAR2(4000) := 'SELECT 1+level as id,
''TEXT''||level as text FROM DUAL ';
l_where_clause VARCHAR2(4000):=
' WHERE TRUNC(1.0) = 1 CONNECT BY LEVEL < 10';
BEGIN
EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW '
|| l_view_name
|| ' AS '
|| l_query
|| l_where_clause;
END;
/
select * from &view_name;
You seem to be asking for a chunk of PL/SQL code that will take an arbitrary query returning a result set of undetermined structure and 'forward/restructure' that result set in some way such that it can easily be rendered by some "custom GUI tool".
If so, look into the DBMS_SQL package for dynamic SQL. It has a DESCRIBE_COLUMNS procedure which returns the columns of a dynamic SELECT statement. The steps you would need (sketched in code after this list) are:
Parse the statement
Describe the result set (column names and data types)
Fetch each row, and for each column, call the datatype dependent function to return that value into a local variable
Place those local variables into a defined structure to return to the calling environment (eg consistent column names [such as col_1, col_2] probably all of VARCHAR2)
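A rough sketch of those steps, using an arbitrary query against user_objects; dbms_output stands in for step 4, where you would instead load the values into your fixed col_1/col_2-style structure:
declare
  c        integer := dbms_sql.open_cursor;
  l_stmt   varchar2(4000) := 'select object_name, object_type, created from user_objects where rownum <= 3';
  l_cols   dbms_sql.desc_tab;
  l_ncols  integer;
  l_value  varchar2(4000);
  l_status integer;
begin
  -- 1) parse the statement
  dbms_sql.parse(c, l_stmt, dbms_sql.native);
  -- 2) describe the result set (column names and data types)
  dbms_sql.describe_columns(c, l_ncols, l_cols);
  -- define every column as VARCHAR2 so one fetch routine handles all types
  for i in 1 .. l_ncols loop
    dbms_sql.define_column(c, i, l_value, 4000);
  end loop;
  l_status := dbms_sql.execute(c);
  -- 3) fetch each row and pull each column value into a local variable
  while dbms_sql.fetch_rows(c) > 0 loop
    for i in 1 .. l_ncols loop
      dbms_sql.column_value(c, i, l_value);
      dbms_output.put_line(l_cols(i).col_name || ' = ' || l_value);
    end loop;
    dbms_output.put_line('---');
  end loop;
  dbms_sql.close_cursor(c);
exception
  when others then
    if dbms_sql.is_open(c) then
      dbms_sql.close_cursor(c);
    end if;
    raise;
end;
/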
As an alternative, you could try building the query into an XMLFOREST statement and parsing the results out of the XML.
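For the XML route, here is a quick sketch using the built-in DBMS_XMLGEN package (a ready-made alternative to hand-writing XMLFOREST calls) to turn an arbitrary query into a single XML document that the caller can then pick apart:
declare
  l_stmt varchar2(4000) := 'select object_name, object_type from user_objects where rownum <= 3';
  l_xml  clob;
begin
  -- serialises any SELECT into <ROWSET><ROW>...</ROW></ROWSET>
  l_xml := dbms_xmlgen.getxml(l_stmt);
  dbms_output.put_line(substr(l_xml, 1, 4000));
end;
/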
Added:
Unlike SQL Server, an Oracle PL/SQL call will not 'naturally' return a single result set. It can open up one or more ref cursors and pass them back to the client. It then becomes the client's responsibility to fetch records and columns from those ref cursors. If your client doesn't/can't deal with that, then you cannot use a PL/SQL call.
A stored function can return a pre-defined collection type, which can allow you to do something like "select * from table(func_name('select * from countries'))". However the function cannot do DML (update/delete/insert/merge) because it blows away any concept of consistency for that query. Plus the structure being returned is fixed so that
select * from table(func_name('select * from countries'))
must return the same set of columns (column names and data types) as
select * from table(func_name('select * from persons'))
It is possible, using DBMS_SQL or XMLFOREST, for such a function to take a dynamic query and restructure it into a pre-defined set of columns (col_1, col_2, etc) so that it can be returned in a consistent manner. But I can't see what the point of it would be.
Try these.
DECLARE
TYPE EmpCurTyp IS REF CURSOR;
v_emp_cursor EmpCurTyp;
emp_record employees%ROWTYPE;
v_stmt_str VARCHAR2(200);
v_e_job employees.job_id%TYPE;
BEGIN
-- Dynamic SQL statement with placeholder:
v_stmt_str := 'SELECT * FROM employees WHERE job_id = :j';
-- Open cursor & specify bind argument in USING clause:
OPEN v_emp_cursor FOR v_stmt_str USING 'MANAGER';
-- Fetch rows from result set one at a time:
LOOP
FETCH v_emp_cursor INTO emp_record;
EXIT WHEN v_emp_cursor%NOTFOUND;
-- process emp_record here
END LOOP;
-- Close cursor:
CLOSE v_emp_cursor;
END;
declare
v_rc sys_refcursor;
begin
v_rc := get_dept_emps(10); -- This returns an open cursor
dbms_output.put_line('Rows: '||v_rc%ROWCOUNT);
close v_rc;
end;
Find more examples here: http://forums.oracle.com/forums/thread.jspa?threadID=886365&tstart=0
In TOAD, when executing the script below, you will be prompted for the type of v_result. From the pick list of types, select cursor; the results are then displayed in TOAD's data grid (the Excel-spreadsheet-like result view). That said, when working with cursors as results you should always write two programs (the client and the server). In this case TOAD is the client.
DECLARE
v_result sys_refcursor;
v_dynamic_sql VARCHAR2 (4000);
BEGIN
v_dynamic_sql := 'SELECT * FROM user_objects where ' || ' 1 = 1';
OPEN :v_result FOR (v_dynamic_sql);
END;
There may be a similar mechanism in Oracle's SQL Developer to prompt for the binding as well.