Debugging dynamic SQL + dynamic tables in MS SQL Server 2008

I have a messy stored procedure which uses dynamic sql.
I can debug it at runtime by adding print @sql, where @sql is the string containing the dynamic SQL, right before I call execute(@sql).
Now, the multi-page stored procedure also creates dynamic tables and uses them in a query. I want to print those tables to the console right before I do an execute, so that I know exactly what the query is trying to do.
However, SQL Server 2008 does not like that. When I try
print #temp_table; and try to compile the stored procedure, I get this error:
The name "#temp_table" is not permitted in this context. Valid expressions are constants, constant expressions, and (in some contexts) variables. Column names are not permitted.
Please help.
EDIT:
I am a noob when it comes to SQL. However, the following statement: select * from #tbl; does not print anything to the console when it is run non-interactively; the print statement works though.
The following statement has incorrect syntax: print select * from #tbl;. Is there a way for me to redirect the output of select to a file, if stdout is not an option?
Thanks.

When we use dynamic SQL, we start by having a debug input parameter in the stored procedure. Make it the last parameter and give it a default value of 0, meaning "not in debug mode", so that it won't break existing code calling the proc.
Now when you run it in debug mode, you either print instead of executing, or you print and execute but always roll back at the end. If you need to see the data at various stages, the second option is best. Then, before you roll back, you put the data you want to see into a table variable (this is important: it can't be a temp table). After the rollback, select from the table variable (which did not go out of scope with the rollback) and run your print statements to see the queries which were run.
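A minimal sketch of that pattern, with made-up procedure and table names, just to illustrate the @Debug parameter, the rollback, and the table variable that survives it:

CREATE PROCEDURE dbo.DoWork              -- hypothetical proc name
    @SomeParam int,
    @Debug bit = 0                       -- last parameter, defaults to "not in debug mode"
AS
BEGIN
    DECLARE @sql nvarchar(max);
    DECLARE @snapshot TABLE (id int, val varchar(50));   -- table variable: survives a rollback

    SET @sql = N'UPDATE dbo.SomeTable SET val = ''x'' WHERE id = '
             + CAST(@SomeParam AS nvarchar(10));

    IF @Debug = 1
    BEGIN
        BEGIN TRANSACTION;
        PRINT @sql;                              -- see exactly what would run
        EXEC (@sql);                             -- run it anyway...
        INSERT INTO @snapshot (id, val)
        SELECT id, val FROM dbo.SomeTable WHERE id = @SomeParam;   -- capture intermediate data
        ROLLBACK TRANSACTION;                    -- ...then undo everything
        SELECT * FROM @snapshot;                 -- still populated after the rollback
    END
    ELSE
        EXEC (@sql);
END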

Debugging this way, the only way you'll get output is:
select * from #temp_table;
Alternatively, look into the debugging features built into SQL Server Management Studio. For example, this web page may help you:
SQL Server Performance.com

You can print variables, but not tables. You can, however, SELECT from the #table.
Now, if the table is created, filled, and modified in a single dynamic statement that is executed, then you can view the state of the table as it was before being modified, but the data will have changed since then.
Of course, as soon as the dynamic SQL finishes, the #table is no longer available, so you're stuck!
To counter that, you can insert into a ##Table (note the double hash marks) in your dynamic SQL along with the #table and then query that ##table at the end of execution of the dynamic sql.
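A quick sketch of that trick, with made-up table names: the dynamic SQL copies the local #temp table into a ##global temp table, which is still visible after the EXEC finishes:

DECLARE @sql nvarchar(max);

SET @sql = N'
    CREATE TABLE #work (id int, val varchar(50));
    INSERT INTO #work VALUES (1, ''a''), (2, ''b'');

    -- copy the local temp table into a global one before the dynamic batch ends
    IF OBJECT_ID(''tempdb..##work_debug'') IS NOT NULL DROP TABLE ##work_debug;
    SELECT * INTO ##work_debug FROM #work;
';

EXEC (@sql);

-- #work is gone here, but the global temp table created in the same session is still visible
SELECT * FROM ##work_debug;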

For as much as I hate cursors, give this a try:
SET NOCOUNT ON
CREATE TABLE #TempTable1
(ColumnInt int
,ColumnVarchar varchar(50)
,ColumnDatetime datetime
)
INSERT INTO #TempTable1 VALUES (1,'A',GETDATE())
INSERT INTO #TempTable1 VALUES (12345,'abcdefghijklmnop','1/1/2010')
INSERT INTO #TempTable1 VALUES (null,null,null)
INSERT INTO #TempTable1 VALUES (445454,null,getdate())
SET NOCOUNT OFF
DECLARE @F_ColumnInt int
,@F_ColumnVarchar varchar(50)
,@F_ColumnDatetime datetime
DECLARE CursorTempTable1 CURSOR FOR
SELECT
ColumnInt, ColumnVarchar, ColumnDatetime
FROM #TempTable1
ORDER BY ColumnInt
FOR READ ONLY
--populate and allocate resources to the cursor
OPEN CursorTempTable1
PRINT '#TempTable1 contents:'
PRINT ' '+REPLICATE('-',20)
+' '+REPLICATE('-',50)
+' '+REPLICATE('-',23)
--process each row
WHILE 1=1
BEGIN
FETCH NEXT FROM CursorTempTable1
INTO @F_ColumnInt, @F_ColumnVarchar, @F_ColumnDatetime
--finished fetching all rows?
IF @@FETCH_STATUS <> 0
BEGIN --YES, all done fetching
--exit the loop
BREAK
END --IF finished fetching
PRINT ' '+RIGHT( REPLICATE(' ',20) + COALESCE(CONVERT(varchar(20),@F_ColumnInt),'null') ,20)
+' '+LEFT( COALESCE(@F_ColumnVarchar,'null') + REPLICATE(' ',50) ,50)
+' '+LEFT( COALESCE(CONVERT(char(23),@F_ColumnDatetime,121),'null') + REPLICATE(' ',23) ,23)
END --WHILE
--close and free the cursor's resources
CLOSE CursorTempTable1
DEALLOCATE CursorTempTable1
OUTPUT:
#TempTable1 contents:
-------------------- -------------------------------------------------- -----------------------
null null null
1 A 2010-03-18 13:28:24.260
12345 abcdefghijklmnop 2010-01-01 00:00:00.000
445454 null 2010-03-18 13:28:24.260
If I knew that your temp table had a PK, I'd give a cursor free loop example.
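For what it's worth, here is a sketch of the cursor-free variant, under the assumption that ColumnInt is a non-null primary key (the NULL row in the sample data above would be skipped):

DECLARE @id int, @v varchar(50), @d datetime;

SELECT @id = MIN(ColumnInt) FROM #TempTable1;    -- assumes ColumnInt is a non-null PK
WHILE @id IS NOT NULL
BEGIN
    SELECT @v = ColumnVarchar, @d = ColumnDatetime
    FROM #TempTable1
    WHERE ColumnInt = @id;

    PRINT ' '+RIGHT( REPLICATE(' ',20) + CONVERT(varchar(20),@id) ,20)
        +' '+LEFT( COALESCE(@v,'null') + REPLICATE(' ',50) ,50)
        +' '+LEFT( COALESCE(CONVERT(char(23),@d,121),'null') + REPLICATE(' ',23) ,23);

    SELECT @id = MIN(ColumnInt) FROM #TempTable1 WHERE ColumnInt > @id;   -- next key
END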

Related

Trying to use EXECUTE IMMEDIATE, cannot compile procedure

I'm trying to write a procedure which takes no values in, adds a sale price column to my existing Product table, then loops through to calculate a sale price and insert it into the new column.
I haven't been able to get anything to work. I think it's something to do with Oracle not liking ALTER TABLE being run from inside a procedure, but I don't know, and I don't know enough to direct my attempts anywhere else.
This is my attempt
CREATE or REPLACE PROCEDURE ProductLineSale as
BEGIN
DECLARE
NewSalePrice NUMBER(6,2):=0;
EXECUTE IMMEDIATE 'alter table ' || Product || 'add or replace column' || 'SalePrice NUMBER(6,2);'
FOR p in (SELECT ProductStandardPrice FROM Product
group by ProductStandardPrice)
LOOP
CASE WHEN p.ProductStandardPrice>=400 THEN NewSalePrice:=.9*price
WHEN p.ProductStandardPrice<400 THEN NewSalePrice:=.85*price
INSERT INTO Product(SalePrice)
VALUES(NewSalePrice)
END LOOP;
END ProductLineSale
Product is the literal name of the Product table in my database. SalePrice is what I would like the new column to be named.
SQLDeveloper won't compile the procedure. The error I get is fairly cryptic as well:
Error(2,10): PLS-00103: Encountered the symbol "=" when expecting one of the following: constant exception table long double ref char time timestamp interval date binary national character nchar.
There are a host of errors... These are the ones that jump out at me on a first pass.
The requirement doesn't make sense. Adding a column in a procedure doesn't make sense. You create a procedure because you want code to be reusable. Adding a column can only be done once, hence it is by definition not reusable.
A procedure has to be compiled before it can be executed. If there is a reference to a column that doesn't exist, the procedure will fail to compile. Thus, if you want to add a column to the table using dynamic SQL, all subsequent references to the column (i.e. your insert statement) would need to use dynamic SQL as well.
Your DDL statement is incorrect. There is no add or replace clause; it's alter table product add SalePrice NUMBER(6,2). Note that when you're building your string, you also have to ensure that there is a space between the add clause and the column name SalePrice: one of the two strings you're concatenating together would need that.
It doesn't make sense to have a declare where you do. You can declare variables between the as and the begin one line above. You are allowed to create a nested PL/SQL block there with the declare but then you'd need a matching begin and end that you don't have.
If you're going to use a case statement in PL/SQL, you'd need an end case. You would also need to have a semicolon ; after each expression.
Your insert statement is also missing a semicolon.
Logically, I am hard pressed to imagine that you really want to have an insert here. It doesn't make logical sense to create a bunch of new rows in the table when you add a new column. I would assume that you want to update the value of the new column in existing rows. Which, presumably, requires that your cursor selects the primary key column(s) and potentially changes whether and what you're grouping by.
Product and price are being used as local variables in the execute immediate statement and in the case statement but aren't defined. I'm guessing that you just want to hard code the name of the table you're altering and that price is supposed to reference the name of a column in the table that you need to select in your cursor but I'm not sure.
This case statement is syntactically valid (or would be if price resolves to something valid). Many of the other corrections are less obvious because of the reasons I detailed above.
case when p.ProductStandardPrice>=400
then NewSalePrice:=.9*price;
when p.ProductStandardPrice<400
THEN NewSalePrice:=.85*price;
end case;
If I was to speculate at what you actually want (given that this is a homework assignment with requirements that don't actually make sense), I'd guess something like
CREATE or REPLACE PROCEDURE ProductLineSale
as
begin
execute immediate 'alter table Product add SalePrice NUMBER(6,2)';
execute immediate 'update product ' ||
' set SalePrice = (case when ProductStandardPrice >= 400 ' ||
' then 0.9 * Price ' ||
' else 0.85 * Price ' ||
' end) ';
end ProductLineSale;
If you're going to use dynamic SQL, it almost always makes sense to declare a local variable, build the SQL statement in that variable, and then execute it, so that you can debug things by printing out the statement you've built.
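A minimal sketch of that habit in PL/SQL; the statement itself is just a placeholder built from the column names in the question:

declare
    v_sql varchar2(4000);
begin
    -- build the statement in a local variable first
    v_sql := 'update Product set SalePrice = 0.9 * ProductStandardPrice'
          || ' where ProductStandardPrice >= 400';

    -- print it before running it, so you can see exactly what will be executed
    dbms_output.put_line(v_sql);

    execute immediate v_sql;
end;
/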

Dynamic Query Re-write or Evaluating the Query Before Executing Table Function

First, I want to make it clear that the question is not about the materialized views feature.
Suppose, I have a table function that returns a pre-defined set of columns.
When a function call is submitted as
SELECT col1, col2, col3
FROM TABLE(my_tfn(:p1))
WHERE col4 = 'X';
I can evaluate the parameter and choose what queries to execute.
I can either open one of the pre-defined cursors, or I can assemble my query dynamically.
What if instead of evaluating the parameter I want to evaluate the text of the requesting query?
For example, if my function returns 20 columns but the query is only requesting 4,
I can assign NULLs to the remaining 16 columns of the return type and execute fewer joins.
Or I can push the filter down to my dynamic query.
Is there a way to make this happen?
More generally, is there a way to look at the requesting query before executing the function?
There is no robust way to identify the SQL that called a PL/SQL object.
Below is a not-so-robust way to identify the calling SQL. I've used code like this before, but only in special circumstances where I knew that the PL/SQL would never run concurrently.
This seems like it should be so simple. The data dictionary tracks all sessions and running SQL. You can find the current session with sys_context('userenv', 'sid'), match that to GV$SESSION, and then get either SQL_ID or PREV_SQL_ID. But neither of those contains the calling SQL. There's even a CURRENT_SQL in SYS_CONTEXT, but it's only for fine-grained auditing.
Instead, the calling SQL must be found by a string search. Using a unique name for the PL/SQL object will help filter out unrelated statements. To prevent re-running for old statements, the SQL must be individually purged from the shared pool as soon as it is found. This could lead to race conditions so this approach will only work if it's never called concurrently.
--Create simple test type for function.
create or replace type clob_table is table of clob;
--Table function that returns the SQL that called it.
--This requires elevated privileges to run.
--To simplify the code, run this as SYS:
-- "grant execute on sys.dbms_shared_pool to your_user;"
--(If you don't want to do that, convert this to invoker's rights and use dynamic SQL.)
create or replace function my_tfn return clob_table is
v_my_type clob_table;
type string_table is table of varchar2(4000);
v_addresses string_table;
v_hash_values string_table;
begin
--Get calling SQL based on the SQL text.
select sql_fulltext, address, hash_value
bulk collect into v_my_type, v_addresses, v_hash_values
from gv$sql
--Make sure there is something unique in the query.
where sql_fulltext like '%my_tfn%'
--But don't include this query!
--(Normally creating a quine is a challenge, but in V$SQL it's more of
-- a challenge to avoid quines.)
and sql_fulltext not like '%quine%';
--Flush the SQL statements immediately, so they won't show up in next run.
for i in 1 .. v_addresses.count loop
sys.dbms_shared_pool.purge(v_addresses(i)||', '||v_hash_values(i), 'C');
end loop;
--Return the SQL statement(s).
return v_my_type;
end;
/
Now queries like these will return themselves, demonstrating that the PL/SQL code was reading the SQL that called it:
SELECT * FROM TABLE(my_tfn) where 1=1;
SELECT * FROM TABLE(my_tfn) where 2=2;
But even if you go through all this trouble - what are you going to do with the results? Parsing SQL is insanely difficult unless you can ensure that everyone always follows strict syntax rules.

ORACLE: Using CTEs (Common Table Expressions) with PL/SQL

First off, my background is in SQL Server. Using CTEs (Common Table Expressions) is a breeze and converting it to a stored procedure with variables doesn't require any changes to the structure of the SQL other than replacing entered values with variable names.
In Oracle PL/SQL however, it is a completely different matter. My CTEs work fine as straight SQL, but once I try to wrap them as PL/SQL I run into a host of issues. From my understanding, a SELECT now needs an INTO which will only hold the results of a single record. However, I am wanting the entire recordset of multiple values.
My apologies if I am missing the obvious here. I'm thinking that 99% of my problem is the paradigm shift I need to make.
Given the following example:
NOTE: I am greatly over simplifying the SQL here. I do know the below example can be done in a single SQL statement. The actual SQL is much more complex. It's the fundamentals I am looking for here.
WITH A as (SELECT * FROM EMPLOYEES WHERE DEPARTMENT = 200),
B as (SELECT * FROM A WHERE EMPLOYEE_START_DATE > date '2014-02-01'),
C as (SELECT * FROM B WHERE EMPLOYEE_TYPE = 'SALARY')
SELECT 'COUNTS' as Total,
(SELECT COUNT(*) FROM A) as 'DEPT_TOTAL',
(SELECT COUNT(*) FROM B) as 'NEW_EMPLOYEES',
(SELECT COUNT(*) FROM C) as 'NEW_SALARIED'
FROM A
WHERE rowcount = 1;
Now if I want to make this into PL/SQL with variables that are passed in or predefined at the top, it's not a simple matter of declaring the variables, popping values into them, and changing my hard-coded values into variables and running it. NOTE: I do know that I can simply change the hard-coded values to variables like :Department, :StartDate, and :Type, but again, I am oversimplifying the example.
There are three issues I am facing here that I am trying to wrap my head around:
1) What would be the best way to rewrite this using PL/SQL with declared variables? The CTEs now have to go INTO something. But then I am dealing with one row at a time as opposed to the entire table. So CTE 'A' is a single row at a time, and CTE B will only see the single row as opposed to all of the data results of A, etc. I do know that I will most likely have to use CURSORS to traverse the records, which somehow seems to over complicate this.
2) The output now has to use DBMS_OUTPUT. For multiple records, I will have to use a CURSOR with FETCH (or a FOR...LOOP). Yes?
3) Is there going to be a big performance issue with this vs. straight SQL in regard to speed and resources used?
Thanks in advance and again, my apologies if I am missing something really obvious here!
First, this has nothing to do with CTEs. This behavior would be the same with a simple select * from table query. The difference is that with T-SQL, the query goes into an implicit cursor which is returned to the caller. When executing the SP from Management Studio this is convenient. The result set appears in the data window as if we had executed the query directly. But this is actually non-standard behavior. Oracle has the more standard behavior which might be stated as "the result set of any query that isn't directed into a cursor must be directed to variables." When directed into variables, then the query must return only one row.
To duplicate the behavior of T-SQL, you just have to explicitly declare and return the cursor. Then the calling code fetches from the cursor the entire result set but one row at a time. You don't get the convenience of Sql Developer or PL/SQL Developer diverting the result set to the data display window, but you can't have everything.
However, as we don't generally write SPs just to be called from the IDE, it is easier to work with Oracle's explicit cursors than SQL Server's implicit ones. Just google "oracle return ref cursor to caller" to get a whole lot of good material.
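A rough sketch of that ref-cursor pattern, using the table and columns from the question (the procedure name is made up):

create or replace procedure get_new_salaried(
    p_department in number,
    p_result     out sys_refcursor
) as
begin
    open p_result for
        with a as (select * from employees where department = p_department),
             b as (select * from a where employee_start_date > date '2014-02-01')
        select count(*) as new_employees from b;
end;
/

-- calling it from SQL*Plus or SQL Developer:
-- variable rc refcursor
-- exec get_new_salaried(200, :rc)
-- print rc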
The simplest way is to wrap it in an implicit FOR loop:
begin
for i in (select object_id, object_name
from user_objects
where rownum = 1) loop
-- Do something with the resultset
dbms_output.put_line (i.object_id || ' ' || i.object_name);
end loop;
end;
A single-row query, without the need to predefine the variables.

How to retrieve parsed dynamic PL/SQL

I have many PL/SQL functions and procedures that execute dynamic sql.
Is it possible to extract the parsed statements and dbms_output them as a debugging aid?
What I really want is to see the parsed SQL (the SQL statement with the parameter values substituted in).
Example:
I have a dynamic SQL statement like this
SQ:='SELECT :pComno as COMNO,null t$CPLS,t$CUNO,t$cpgs,t$stdt,t$tdat,t$qanp,t$disc,:cS Source FROM BAAN.TTDSLS031'||PCOMNO --1
|| ' WHERE' ||' TRIM(T$CUNO)=trim(:CUNO)' --2
|| ' AND TRIM(T$CPGS)=trim(:CPGS)' --3
|| ' AND T$QANP = priceWorx.fnDefaultQanp ' --4
|| ' AND priceWorx.fdG2J(sysdate) between priceWorx.fdG2J(t$stdt) and priceWorx.fdG2J(t$tdat)' --5
|| ' AND rownum=1 order by t$stdt';--6
execute immediate SQ into R using
PCOMNO,'C' --1
,PCUNO-- 2
,PCPGS;-- 3
What will be the statement sent to the server?
You can display the bind variables associated with a SQL statement like this:
select v$sql.sql_text
,v$sql_bind_capture.*
from v$sql_bind_capture
inner join v$sql on
v$sql_bind_capture.hash_value = v$sql.hash_value
and v$sql_bind_capture.child_address = v$sql.child_address
--Some unique string from your query
where lower(sql_text) like lower('%priceWorx.fdG2J(sysdate)%');
You probably would like to see the entire query, with all the bind variables replaced by their actual values. Unfortunately, there's no easy way to get exactly what you're looking for, because of the following issues.
V$SQL_BIND_CAPTURE doesn't store all of the bind variable information. The biggest limitation is that it only displays data "when the bind variable is used in the WHERE or HAVING clauses of the SQL statement."
Matching the bind variable names from the bind capture data to the query is incredibly difficult. It's easy to get it working 99% of the time, but that last 1% requires a SQL and PL/SQL parser, which is basically impossible.
SQL will age out of the pool. For example, if you gather stats on one of the relevant tables, it may invalidate all queries that use that table. You can't always trust V$SQL to have your query.
Which means you're probably stuck doing it the ugly way. You need to manually store the SQL and the bind variable data, similar to what user1138658 is doing.
You can do this with the dbms_output package. You can enable and disable the debug output, and get the lines back with the get_line procedure.
I tested this with execute immediate, inserting into a table, and it works.
I recently answered another question with an example of using this.
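For instance, a minimal sketch along those lines (the statement text is just an example):

declare
    v_sql varchar2(200) := 'select count(*) from emp where deptno = :1';
    v_cnt number;
begin
    dbms_output.put_line('About to run: ' || v_sql);   -- log the statement text
    execute immediate v_sql into v_cnt using 10;
    dbms_output.put_line('Rows: ' || v_cnt);
end;
/
-- With serveroutput off, the buffered lines can still be read back
-- programmatically via dbms_output.get_line.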
One possible solution is to create a table temp(id varchar2, data clob); in your schema and then put an insert statement wherever you want to capture the parsed query:
insert into temp values(seq.nextval,v_text);
For example
declare
v_text varchar2(2000);
c_cur sys_refcursor;   -- cursor that will be opened for the dynamic statement
begin
v_text:='select * from emp'; -- your dynamic statement
insert into temp values(seq.nextval,v_text); --insert this script whenever you want to find the actual query
OPEN C_CUR FOR v_text;
-----
end;
Now if you see the table temp, you'll get the data for that dynamic statement.

Is it possible to inspect the contents of a Table Value Parameter via the debugger?

Does anyone know if it is possible to use the Visual Studio / SQL Server Management Studio debugger to inspect the contents of a Table Value Parameter passed to a stored procedure?
To give a trivial example:
CREATE TYPE [dbo].[ControllerId] AS TABLE(
[id] [nvarchar](max) NOT NULL
)
GO
CREATE PROCEDURE [dbo].[test]
@controllerData [dbo].[ControllerId] READONLY
AS
BEGIN
SELECT COUNT(*) FROM @controllerData;
END
DECLARE @SampleData as [dbo].[ControllerId];
INSERT INTO @SampleData ([id]) VALUES ('test'), ('test2');
exec [dbo].[test] @SampleData;
Using the above with a breakpoint on the exec statement, I am able to step into the stored procedure without any trouble. The debugger shows that the @controllerData local has a value of '(table)', but I have not found any tool that would allow me to actually view the rows that make up that table.
Since you get no joy from the debugger, here is my suggestion. You add an input variable to determine whether it is in test mode or not. Then, if it is in test mode, run a select at the top of the sp to see what the data is.
CREATE TYPE [dbo].[ControllerId] AS TABLE(
[id] [nvarchar](max) NOT NULL
)
GO
CREATE PROCEDURE [dbo].[jjtest]
(@controllerData [dbo].[ControllerId] READONLY
, @test bit = null)
AS
IF @test = 1
BEGIN
SELECT * FROM @controllerData
END
BEGIN
SELECT COUNT(*) FROM @controllerData;
END
GO
DECLARE @SampleData as [dbo].[ControllerId];
INSERT INTO @SampleData ([id]) VALUES ('test'), ('test2');
EXEC [dbo].[jjtest] @SampleData, 1;
I had no success trying to do the same thing you described, so I guess it isn't possible yet. Will wait for SSMS 2010.
It is not possible for table variables, but I built a procedure which will display the content of a temp table from another database connection (which is not possible with normal queries).
Note that it uses DBCC PAGE & the default trace to access the data so only use it for debugging purposes.
You can use it by putting a breakpoint in your code, opening a second connection and calling:
exec sp_select 'tempdb..#mytable'
There is a solution, I think: you can create another stored procedure that fills the table-valued parameter and then calls your main procedure, and then you start debugging from that test procedure.
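A quick sketch of that idea, reusing the type and procedure from the example above (only the wrapper name is made up):

CREATE PROCEDURE [dbo].[test_harness]
AS
BEGIN
    DECLARE @SampleData AS [dbo].[ControllerId];
    INSERT INTO @SampleData ([id]) VALUES ('test'), ('test2');

    -- put the breakpoint here and step into the call,
    -- so the debugging session starts from code you control
    EXEC [dbo].[test] @SampleData;
END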
