This seems like an obvious requirement/use case to me, but I haven't found anything about it online.
I'm debugging a PL/SQL stored proc which stores data in pseudo-temporary tables along the way (they are just regular tables, whose content is wiped at the end of the transaction). I'd like to inspect these values as I go. However, there seems to be no way to run arbitrary SQL within the same session that is debugging the stored proc. If I try select * from temp_..., I get no rows back, and I can see that I have more than one connection open to the database.
Is there a way to do this?
I doubt there is a way to do exactly what you asked. How about either a) committing the rows you're working with, querying them from another session during debug, then truncating your table when finished, or b) adding a default-false debug parameter to your stored proc and outputting what you wish when it is set to true.
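A minimal sketch of option b), assuming a hypothetical temp_work_table that the proc fills along the way:

CREATE OR REPLACE PROCEDURE process_data (p_debug IN BOOLEAN DEFAULT FALSE) IS
  v_count NUMBER;
BEGIN
  -- ... main processing that fills the pseudo-temporary table ...
  IF p_debug THEN
    -- emit diagnostic output only when the caller asks for it
    SELECT COUNT(*) INTO v_count FROM temp_work_table;
    DBMS_OUTPUT.PUT_LINE('temp_work_table rows: ' || v_count);
  END IF;
END;
/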
There is a setting in PL/SQL Developer, called Session Mode.
Go to Tools->Preferences->Connection->Session Mode, then select 'Single session'.
When you switch to Single session mode, your PL/SQL procedure and the other connections on behalf of one user will reside within a single session. So you will be able to execute SQL during debugging and inspect all the data you want.
Related
Is there a way to retrieve output from PL/SQL continuously, rather than waiting until the SP completes its execution? By continuously I mean, for example, as each execute immediate statement runs.
Is there any other mechanism to retrieve PL/SQL output?
As per the Oracle docs:
Output that you create using PUT or PUT_LINE is buffered in the SGA. The output cannot be retrieved until the PL/SQL program unit from which it was buffered returns to its caller. So, for example, Enterprise Manager or SQL*Plus do not display DBMS_OUTPUT messages until the PL/SQL program completes.
As far as I know there is a way, but not with DBMS_OUTPUT.PUT_LINE. The technique I use is:
create a log table which will accept the values you'd normally display using DBMS_OUTPUT.PUT_LINE. The columns I use are:
ID (a sequence, to be able to sort data)
Date (to know what happened when; might not be enough for sorting purposes, because operations that take a very short time to finish might have the same timestamp)
Message (a VARCHAR2 column, large enough to accept the whole information)
create a logging procedure which will insert values into that table. It should be an autonomous transaction so that you can COMMIT within it (and thus see the data from other sessions) without affecting the main transaction; a minimal sketch follows.
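For example (the sequence, table and procedure names are just illustrations):

CREATE SEQUENCE log_seq;

CREATE TABLE log_table (
  id      NUMBER,
  datum   TIMESTAMP,
  message VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE p_log (p_message IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- commits here do not affect the caller
BEGIN
  INSERT INTO log_table (id, datum, message)
  VALUES (log_seq.NEXTVAL, SYSTIMESTAMP, p_message);
  COMMIT;  -- makes the row visible to other sessions immediately
END;
/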
Doing so, you'd
start your PL/SQL procedure
call the logging procedure whenever appropriate (basically, where you'd put the DBMS_OUTPUT.PUT_LINE call)
in another session, periodically query the log table as select * from log_table order by ID desc
Additionally, you could write a simple Apex application with one report page which selects from the logging table and refreshes periodically (for example, every 10 seconds or so) and view the main PL/SQL procedure's execution.
The approach that Littlefoot has provided is what I normally use as well.
However, there is another approach that you can try for a specific use case. Let's say you have a long-running batch job (like a payroll process for example). You do not wish to be tied down in front of the screen monitoring the progress. But you want to know as soon as the processing of any of the rows of data hits an error so that you can take action or inform a relevant team. In this case, you could add code to send out emails with all the information from the database as soon as the processing of a row hits an error (or meets any condition you specify).
You can do this using the functions and procedures provided in the UTL_MAIL package (see the UTL_MAIL documentation from Oracle).
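A rough sketch of such a call, typically made from an exception handler in the batch loop (UTL_MAIL must be installed and the smtp_out_server initialization parameter configured; the addresses and message are placeholders):

BEGIN
  UTL_MAIL.SEND(
    sender     => 'batch-job@example.com',
    recipients => 'oncall-team@example.com',
    subject    => 'Payroll run: row failed',
    message    => 'Processing failed for row 12345: ' || SQLERRM);
END;
/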
For monitoring progress without the overhead of logging to tables and autonomous transactions, I use:
DBMS_APPLICATION_INFO.SET_CLIENT_INFO( TO_CHAR(SYSDATE, 'HH24:MI:SS') || ' On step A' );
and then monitor v$session.client_info for your session. It's all in memory and won't persist, of course, but it is a quick, easy, near-zero-cost way of posting progress.
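For example, from another session:

SELECT sid, client_info
FROM v$session
WHERE client_info IS NOT NULL;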
Another option I like (on Linux/UNIX) for centralised logging that is persistent, avoids writing to the database, and is more generally viewable is interfacing to syslog and having Splunk or similar pick the messages up. If you have Splunk or similar, this makes the monitoring viewable without having to connect to and query the database directly. See this post for how to do it:
https://community.oracle.com/thread/2343125
I am using Oracle client 11.2.0
DLL version 4.112.3.0
We have a page in our application where people can enter a SQL statement and retrieve results; basically, we do an OracleCommand.ExecuteReader.
Recently one of my team members entered an update statement as a test, and it actually performed an update on a record!
Anyone who has encountered this?
Regards
Sid.
It is normal (albeit a bit unsettling) behavior. ExecuteReader is expected to execute the SQL command provided as CommandText and build a DbDataReader that you use to loop over the results.
Whether the command returns any rows to read is not something the reader should prevent in any case, and so it is not expected to check that your command is really a SELECT statement.
Think, for example, of passing a stored procedure name, or of having multiple SQL statements to execute as a batch (an INSERT followed by a SELECT).
I think the biggest problem here is that you allow an arbitrary SQL command typed by your users to reach the database engine. That is a very big security hole. You should, at least, run some analysis on the query text before submitting the code to the database engine.
I agree with Steve. Your reader will execute any command, and might get a bit confused if it's not a select and doesn't return a result set.
To prevent people from modifying anything, create a new user and grant select only (no update, no delete, no insert) on your tables to that user (grant select on tablename to seconduser). Then, log in as seconduser and create synonyms for your tables (create synonym tablename for realowner.tablename). Have your application connect to the DB as seconduser. This should prevent people from "hacking" your site. If you want to be on the safe side, grant no permissions but create session to the second user, to prevent them from creating tables, dropping your views and similar stuff (I'd guess your ExecuteReader won't allow DDL, but test it to make sure).
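A sketch of that setup, with placeholder names throughout:

-- as a privileged user
CREATE USER seconduser IDENTIFIED BY some_password;
GRANT CREATE SESSION TO seconduser;
GRANT CREATE SYNONYM TO seconduser;  -- can be revoked once the synonyms exist
GRANT SELECT ON realowner.tablename TO seconduser;

-- as seconduser
CREATE SYNONYM tablename FOR realowner.tablename;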
I just created a stored procedure in an MS SQL DB using TOAD.
What it does is accept an ID that some records are associated with, then insert those records into a table.
The next part of the stored procedure uses the ID input to search the table where the items got inserted, then returns them as the result set to the user, just to confirm that the information got inserted.
In TOAD, it does what is expected: it inserts data and returns the information using just the stored procedure.
In Oracle SQL Developer, however, it does the insert and ends at that. It seems not to execute the 2nd part of the stored procedure, which is a select statement.
I have a feeling that this is because of the JDBC adapter. The reason I'm asking is that I'm using the reporting tool Pentaho Report Designer, and it would really make things easier if I could do the two things at the same time. Pentaho Report Designer also uses JDBC adapters; not a coincidence, maybe?
But if there are other things that I can tweak I'd really appreciate it.
This is a guess, but worth considering...
There are things called "batches", which are sets of SQL statements that are all sent to the server at once and executed by the server as one set of statements, within a single server-side session. Sending a set of SQL statements to the server as a batch will often produce different results than sending them one at a time, where each statement is executed in its own session.
I haven't used Toad (or Oracle) in a while, but as I recall, it dealt with batches differently than the other IDE I used. If the second statement in your set relies on being in the same session as the first, and in one IDE it is in a separate session, then this might explain what is happening.
Is there any way to set serveroutput on/off using PL/SQL procedures/packages? I want to make some changes to how my data is displayed on the SQL*Plus screen, as in my previous post.
You cannot call SQL*Plus commands (which run only on the client) from PL/SQL (which runs only on the server).
In the particular case where you simply want to enable and disable message output, however, you can call the PL/SQL procedures dbms_output.disable and dbms_output.enable.
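For example:

BEGIN
  DBMS_OUTPUT.DISABLE;                       -- subsequent PUT/PUT_LINE calls are discarded
  DBMS_OUTPUT.PUT_LINE('this line is lost');
  DBMS_OUTPUT.ENABLE(buffer_size => NULL);   -- NULL means an unlimited buffer
  DBMS_OUTPUT.PUT_LINE('this line is buffered');
END;
/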
If you are depending on the data being written via dbms_output to be displayed to a human user, however, you are almost certainly doing something wrong. Production processes should be writing important data to some other location (i.e. a table somewhere), not writing to dbms_output and hoping that the client application happens to be configured to display the data.
We're looking for a way to log any call to stored procedures in Oracle, and see what parameter values were used for the call.
We're using Oracle 10.2.0.1
We can log SQL statements and see the bound variables, but when we track stored procedures we see bind variables B1, B2, etc. but no values.
We'd like to see the same kind of information we've seen in MS SQL Server Profiler.
Thanks for any help
You could take a look at the DBMS_APPLICATION_INFO package. This allows you to "instrument" your PL/SQL code with whatever information you want - but it does entail adding calls to each procedure to be instrumented.
See also this AskTom thread on using DBMS_APPLICATION_INFO to monitor PL/SQL.
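A sketch of the idea (the procedure and the module/action values are illustrative); whatever you set shows up in v$session.module and v$session.action while the code runs:

CREATE OR REPLACE PROCEDURE process_batch (p_batch_id IN NUMBER) IS
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'process_batch',
    action_name => 'starting batch ' || p_batch_id);
  -- ... the actual work; update the action at each significant step ...
  DBMS_APPLICATION_INFO.SET_ACTION('finished batch ' || p_batch_id);
END;
/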
I think you are using the word "log" in a strange manner.
We can log SQL Statements...
Do you really mean to say you can TRACE SQL statements with bind variables? Tony's answer is directed at the ability to LOG what you are doing. This is always superior to tracing, because only you know what is important to you. Perhaps the execution of your process depends heavily on querying a value from a table. Since that value changes and is not passed in as a parameter, you could lose that information.
But if you actually LOG what you are doing, you can include that value in your Log table and you'll know not only the variables you passed in but that key value as well.
alter system set events '10046 trace name context forever, level 12';
Is that what you were using?
Yes, I think I should have used the term 'trace'
I'll try to describe what we've done:
Using the Enterprise Manager (as dbo), we've gone to a session and started a trace:
start trace
Enable wait info, bind info
Run an operation on our application that hits the DB
Finish the trace, run this on the output:
tkprof .prc output2.txt sys=no record=record.txt explain=dbo#DBINST/PW
What we want to see is "these procedures were called with these parameters". What we're getting is:
Begin dbo.UPKG_PACKAGENAME.PROC(:v0, :v1, :v2 ...); End;
/
Begin dbo.UPKG_PACKAGENAME.PROC2(:v0, :v1, :v2 ...); End;
/
...
So we can trace the procedures that were called, but we don't get the actual parameter values, just :v0, etc.
My understanding is that what we've done is the same as the alter system statement, but please let us know if that's not the case.
Thanks
Are you using 10g?
If so, try with this:
exec dbms_monitor.session_trace_enable(session_id=>xxx, serial_num=>xx, waits=>true, binds=>true);
You can get session_id (SID) and serial_num (SERIAL#) from v$session.
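For example (the username filter is just an illustration):

SELECT sid, serial#, username, status
FROM v$session
WHERE username = 'APP_USER';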