How to debug a T-SQL stored procedure? - visual-studio-2010

How do I debug a T-SQL stored procedure? I have tried the following link:
http://msdn.microsoft.com/en-us/library/ms241871(v=vs.80).aspx
But I am unable to hit the breakpoint. Is there a better way to debug? My environment is
SQL Express 2008, Visual Studio 2010

I have found the debugger in SQL Management Studio unreliable because it is so dependent on having the correct permissions on the DB server, which are not always available.
One alternative method I use is to convert the stored proc into a long query. I start by moving any parameters to variable declarations and setting their values. For example, the following
ALTER PROCEDURE [dbo].[USP_ConvertFinancials] (@EffectiveDate datetime, @UpdatedBy nvarchar(100))
AS
BEGIN
DECLARE @PreviousBusinessDay datetime
would become
DECLARE @Value int
, @EffectiveDate datetime = '01-Jan-2011'
, @UpdatedBy nvarchar(100) = 'System'
This allows me to run the queries within the stored procedure starting from the top. As I move down through the queries, I can check the values of variables by simply selecting them and rerunning the query from the top:
SELECT @Value
I can also comment out the INSERT portion of INSERT-SELECT statements to see what is being inserted into tables and table variables.
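For instance, a minimal sketch of that trick, assuming a hypothetical target table and an INSERT-SELECT lifted out of the procedure:
-- Hypothetical INSERT-SELECT from the proc; the INSERT line is commented out
-- so the SELECT alone shows the rows that would have been written:
--INSERT INTO dbo.Financials (EffectiveDate, UpdatedBy)
SELECT @EffectiveDate, @UpdatedBy
FROM dbo.SourceFinancials            -- assumed table name, for illustration only
WHERE EffectiveDate <= @EffectiveDate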
The bug in the stored proc usually becomes quite evident using this method. Once I get the query running correctly I can simply copy the code to my proc and recompile.
Good luck!

You can try out SQL Profiler. It does not allow classical debugging like "break at this point", but it gives you detailed information about what is going on at each step of a query/SP execution.
Unfortunately, Microsoft does not provide it with the Express Edition of SQL Server.
BUT :) there is a good (relatively speaking, since it does not offer as many filtering criteria as Microsoft's) and free alternative: SQL Server 2005/2008 Express Profiler.

Debug a stored procedure:
Check whether the logic makes sense or not.
Use breakpoints to help find issues.
For a complex process, aim for a modular design: divide the task into multiple simple ones.
Use a master stored procedure to take control at the top, and several child stored procedures to do the job step by step (see the sketch after this list).
As for optimization, use the execution plan, SQL Server Profiler, and the DTA tools.
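A minimal sketch of that master/child layout, with made-up procedure names purely for illustration:
CREATE PROCEDURE dbo.USP_Master_Process
AS
BEGIN
    -- each child procedure does one simple, separately testable step
    EXEC dbo.USP_Step1_LoadStaging;
    EXEC dbo.USP_Step2_Transform;
    EXEC dbo.USP_Step3_Publish;
END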

Related

Is static SQL to be preferred over dynamic SQL in PostgreSQL stored procedures?

I am not sure whether, in the case of stored procedures, PostgreSQL treats static SQL any differently from a query submitted as a quoted string.
When I create a stored procedure in PostgreSQL using static SQL, there seems to be no validation of the table names, table columns, or column types; I only get a listing of the problems, if any, when I run the procedure.
open ref_cursor_variable for
select usr_name from usres_master;
-- This is a typing mistake. The table name should be users_master. But the stored procedure is created and the error is thrown only when I run the procedure.
When I run the procedure I (naturally) get some error like :
table usres_master - invalid table name
The above is a trivial version. The real procedures we use at work combine several tables and run to at least a few hundred lines. In PostgreSQL stored procedures, is there no advantage to using static SQL over dynamic SQL, i.e. something like open ref_cursor_variable for EXECUTE select_query_string_variable?
Static SQL should be preferred almost all the time; dynamic SQL should be used only when it is necessary:
for performance reasons (dynamic SQL doesn't reuse execution plans, and a one-shot plan can sometimes be better, even necessary);
when it can reduce a lot of code.
In all other cases use static SQL every time (a sketch of both forms follows below). Benefits:
readability
reuse of execution plans
it is safe against SQL injection by default
static checking is available
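A minimal PL/pgSQL sketch contrasting the two forms, assuming the users_master table from the question and a function name invented for illustration:
CREATE OR REPLACE FUNCTION get_user_names(use_static boolean)
RETURNS refcursor
LANGUAGE plpgsql AS
$$
DECLARE
    ref_cursor_variable refcursor := 'user_names_cursor';
BEGIN
    IF use_static THEN
        -- static SQL: part of the function body, the plan can be cached and reused
        OPEN ref_cursor_variable FOR
            SELECT usr_name FROM users_master;
    ELSE
        -- dynamic SQL: just a string, parsed and planned on every execution
        OPEN ref_cursor_variable FOR EXECUTE
            'SELECT usr_name FROM users_master';
    END IF;
    RETURN ref_cursor_variable;
END;
$$;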
The source of a function is just a string to Postgres. The main reason for this is the fact that Postgres (unlike other DBMSs) supports many languages for functions and procedures, including installable ones. As the Postgres core can't possibly know the syntax of all languages, it cannot validate the "inner" part of a function. To my knowledge the "language API" does not contain any "validate" method (in theory this would probably be possible, though).
If you want to statically validate your PL/pgSQL functions (and procedures since Postgres 11) you could use e.g. https://github.com/okbob/plpgsql_check/
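For example, once the extension is installed, checking the hypothetical get_user_names function from the sketch above might look like this:
CREATE EXTENSION IF NOT EXISTS plpgsql_check;
-- reports mistakes such as a misspelled table name without executing the function
SELECT * FROM plpgsql_check_function('get_user_names(boolean)');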

Using `SELECT` to call a function

I occasionally encounter examples where SELECT...INTO...FROM DUAL is used to call a function - e.g.:
SELECT some_function INTO a_variable FROM DUAL;
is used, instead of
a_variable := some_function;
My take on this is that it's not good practice because A) it makes it unclear that a function is being invoked, and B) it's inefficient in that it forces a transition from the PL/SQL engine to the SQL engine (perhaps less of an issue today).
Can anyone explain why this might have been done, e.g. was this necessary in early PL/SQL coding in order to invoke a function? The code I'm looking at may date from as early as Oracle 8.
Any insights appreciated.
This practice dates from before PL/SQL and Oracle 7. As already mentioned, assignment was possible (and of course best practice) in Oracle 7.
Before Oracle 7 there were two widely used tools that required the use of Select ... into var from dual;
On the one hand there used to be an Oracle tool called RPT, some kind of report generator. RPT could be used to create batch processes. It had two kinds of macros that could be combined to achieve what we use PL/SQL for today. My first Oracle job involved debugging PL/SQL that was generated by a program that took RPT batches and converted them automatically to PL/SQL. I threw away my only RPT handbook sometime shortly after 2000.
On the other hand there was Oracle Forms 2.x and its menu component. Context switching in Oracle Menu was often done with a Select ... from dual; I still remember how proud I was when I discovered that an intractable bug was caused by a total of 6 records in the table Dual.
I am sorry to say that I cannot prove any of this, but it is the time of year to think back to the old times, and it was really fun to have the answer.

PLSQL "give" script but not allow users to read

I'm new to PL/SQL and my question is: is it possible to "compile" a script in SQL*Plus or SQL Developer and give the file to another person, so that they can execute the code but not read it?
It sounds like you are talking about the Oracle wrap utility (a separate command-line application that is part of your Oracle client install, not a part of SQL Developer) or the dbms_ddl.wrap function, which you could invoke from SQL Developer. These create obfuscated statements that will create a stored procedure (or package or function) that behaves normally but whose text in the data dictionary is not human readable. The wrap utility doesn't provide perfect security: there are unwrapping tools and presentations on the internet that would let an attacker unwrap the code you hand them. And you can often figure out what the wrapped code is really doing by looking at other data dictionary views (v$sql will show the unwrapped SQL statements that are executed, for example) or by tracing a session.
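A minimal sketch of both approaches, with a made-up procedure purely for illustration:
-- Command-line wrap utility: reads a .sql file, writes an obfuscated .plb file
--   wrap iname=secret_proc.sql oname=secret_proc.plb
-- DBMS_DDL alternative, runnable from SQL Developer:
BEGIN
  SYS.DBMS_DDL.CREATE_WRAPPED(
    'CREATE OR REPLACE PROCEDURE secret_proc AS
     BEGIN
       NULL;  -- the real logic would go here
     END secret_proc;');
END;
/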
It also depends on the definition of the word "give". You can store the PL/SQL code in the database, give users the right to execute it and to see the source of the package header, but not to see the source of the package body. But of course DBAs can read it; they can also trace it (even if it is wrapped).
Also note that PL/SQL packages are wrapped in a different way than PL/SQL procedures. As of 11g, packages are wrapped using a simple one-to-one byte substitution, while for PL/SQL procedures what is stored is obfuscated bytecode for the DIANA virtual machine. AFAIK there is no accessible unwrapper for PL/SQL procedures; they are much harder to reverse engineer.

Same stored procedure acts differently on two/(three) different IDEs

I just created a stored procedure in an MS SQL DB using TOAD.
What it does is accept an ID that some records are associated with, and then insert those records into a table.
The next part of the stored procedure uses the ID input to search the table where the items were inserted, and then returns them as a result set to the user, just to confirm that the information was inserted.
In TOAD, it does what is expected: it inserts the data and returns the information using just the stored procedure.
In Oracle SQL Developer, however, it does the insert and it ends at that. It seems not to execute the second part of the stored procedure, which is a SELECT statement.
I have a feeling that this is because of the JDBC adapter. The other reason I'm asking is that I'm using the reporting tool Pentaho Report Designer, and it would really make things easier if I could do the two things at the same time. Pentaho Report Designer also uses JDBC adapters; maybe that's not a coincidence?
But if there are other things that I can tweak, I'd really appreciate it.
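For reference, a rough sketch of the kind of procedure being described, written in T-SQL since the question mentions an MS SQL DB; all object names are invented:
CREATE PROCEDURE dbo.CopyAndConfirm
    @SourceId int
AS
BEGIN
    -- step 1: insert the records associated with the ID
    INSERT INTO dbo.TargetTable (SourceId, Payload)
    SELECT SourceId, Payload
    FROM dbo.SourceTable
    WHERE SourceId = @SourceId;

    -- step 2: select them back as a confirmation result set
    SELECT SourceId, Payload
    FROM dbo.TargetTable
    WHERE SourceId = @SourceId;
END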
This is a guess, but worth considering...
There are things called "batches", which are sets of SQL statements that are all sent to the server at once and executed by the server as one set of statements, within a single server-side session. Sending a set of SQL statements to the server as a batch will often produce different results than sending them one at a time, where each statement is executed in its own session.
I haven't used TOAD (or Oracle) in a while, but as I recall, it dealt with batches differently than the other IDE I used. If the second statement in your set relies on being in the same session as the first, and in one IDE it runs in a separate session, then this might explain what is happening.

How do I log/trace Oracle stored procedure calls with parameter values?

We're looking for a way to log any call to stored procedures in Oracle, and see what parameter values were used for the call.
We're using Oracle 10.2.0.1
We can log SQL statements and see the bound variables, but when we track stored procedures we see bind variables B1, B2, etc. but no values.
We'd like to see the same kind of information we've seen in MS SQL Server Profiler.
Thanks for any help
You could take a look at the DBMS_APPLICATION_INFO package. This allows you to "instrument" your PL/SQL code with whatever information you want - but it does entail adding calls to each procedure to be instrumented.
See also this AskTom thread on using DBMS_APPLICATION_INFO to monitor PL/SQL.
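A minimal sketch of that kind of instrumentation, using an invented procedure and parameter names:
CREATE OR REPLACE PROCEDURE convert_financials(p_effective_date DATE,
                                               p_updated_by     VARCHAR2)
AS
BEGIN
  -- record what we were called with (action_name is limited to 32 bytes, so keep it short);
  -- the values show up in v$session.module / v$session.action while the proc runs
  DBMS_APPLICATION_INFO.SET_MODULE(
      module_name => 'CONVERT_FINANCIALS',
      action_name => TO_CHAR(p_effective_date, 'YYYY-MM-DD') || ' ' || p_updated_by);
  -- ... the real work goes here ...
  DBMS_APPLICATION_INFO.SET_MODULE(NULL, NULL);  -- clear when done
END convert_financials;
/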
I think you are using the word "log" in a strange manner.
We can log SQL Statements...
Do you really mean to say you can TRACE SQL statements with bind variables? Tony's answer is directed at the ability to LOG what you are doing. This is always superior to tracing, because only you know what is important to you. Perhaps the execution of your process depends heavily on querying a value from a table. Since that value changes and it's not passed in as a parameter, you could lose that information.
But if you actually LOG what you are doing, you can include that value in your log table, and you'll know not only the variables you passed in but that key value as well.
alter system set events '10046 trace name context forever, level 12'; Is that what you were using?
Yes, I think I should have used the term 'trace'
I'll try to describe what we've done:
Using the enterprise manager (as dbo) we've gone to a session, and started a trace
start trace
Enable wait info, bind info
Run an operation on our application that hits the DB
Finish the trace, run this on the output:
tkprof .prc output2.txt sys=no record=record.txt explain=dbo#DBINST/PW
What we want to see is, "these procedures were called with these parameters". What we're getting is:
Begin dbo.UPKG_PACKAGENAME.PROC(:v0, :v1, :v2 ...); End;
/
Begin dbo.UPKG_PACKAGENAME.PROC2(:v0, :v1, :v2 ...); End;
/
...
So we can trace the procedures that were called, but we don't get the actual parameter values, just the :v0, etc.
My understanding is that what we've done is the same as the alter system statement, but please let us know if that's not the case.
Thanks
Are you using 10g?
Let's try with this:
exec dbms_monitor.session_trace_enable(session_id => xxx, serial_num => xx, waits => true, binds => true);
You can get session_id (SID) and serial_num (SERIAL#) from v$session.
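For example, a quick lookup (the username filter is just a placeholder):
SELECT sid, serial#, username, program
FROM   v$session
WHERE  username = 'APP_USER';  -- hypothetical application schema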
