Same stored procedure acts differently in two (or three) different IDEs - JDBC

I just created a stored procedure in an MS SQL database using TOAD.
It accepts an ID that some records are associated with, then inserts those records into a table.
The next part of the stored procedure uses the ID input to search the table where the items were inserted, and returns them as a result set to the user, just to confirm that the information was inserted.
In TOAD, it does what is expected: it inserts the data and returns the information using just the stored procedure.
In Oracle SQL Developer, however, it does the insert and ends at that. It seems not to execute the second part of the stored procedure, which is a SELECT statement.
I have a feeling this is because of the JDBC driver. The other reason I'm asking is that I'm using the reporting tool Pentaho Report Designer, and it would make things much easier if I could do both things at once. Pentaho Report Designer also uses JDBC drivers; not a coincidence, maybe?
But if there are other things I can tweak, I'd really appreciate it.

This is a guess, but worth considering...
There are things called "batches": sets of SQL statements that are all sent to the server at once and executed by the server as one set of statements, within a single server-side session. Sending a set of SQL statements to the server as a batch will often produce different results than sending them one at a time, where each statement is executed in its own session.
I haven't used TOAD (or Oracle) in a while, but as I recall, it dealt with batches differently than the other IDEs I used. If the second statement in your set relies on being in the same session as the first, and in one IDE it runs in a separate session, that might explain what is happening.
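On the SQL Server side, a related thing worth checking (an assumption on my part, since the procedure body isn't shown) is the "n rows affected" count the INSERT produces: within a batch, that count comes back as its own result ahead of the SELECT's result set, and some JDBC clients stop at the first result. SET NOCOUNT ON suppresses those counts. A minimal T-SQL sketch, with hypothetical object names:

CREATE PROCEDURE dbo.insert_and_confirm @id INT  -- hypothetical name
AS
BEGIN
    -- Without this, the INSERT's row count is sent back as its own
    -- result ahead of the SELECT; some JDBC clients never advance
    -- past that first result.
    SET NOCOUNT ON;

    INSERT INTO target_table (id, payload)
    SELECT id, payload
    FROM staging_table
    WHERE id = @id;

    -- Second part: return the inserted rows to the caller.
    SELECT id, payload
    FROM target_table
    WHERE id = @id;
END;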

Related

OBIEE: Procedure executed via Direct Query analysis is run twice. Why?

In OBIEE I have created a direct query that runs a stored procedure.
The procedure inserts into a table and, at the end, makes another insert into a log table.
As mentioned in the title, I can tell from the logs that the procedure is executed twice. Is this normal behavior for a direct query in OBIEE?
You have to trace what happens in the logs of both OBI and the database. It's definitely not "normal", as you put it, but since you don't specify which version you are using, it's impossible to say whether you're hitting a bug. By the way, an exact version is "12.2.1.4.200414", not "12c".
In general, Direct Database Requests are at best an exception case for OBIEE, as it's an analytical platform and not a SQL interpreter.

Oracle: Return Large Dataset with Cursor in Procedure

I've seen lots of posts regarding the use of cursors in PL/SQL to return data to a calling application, but none of them touch on the issue I believe I'm having with this technique. I am fairly new to Oracle, but have extensive experience with MSSQL Server. In SQL Server, when building queries to be called by an application for returning data, I usually put the SELECT statement inside a stored proc with/without parameters, and let the stored proc execute the statement(s) and return the data automatically. I've learned that with PL/SQL, you must store the resulting dataset in a cursor and then consume the cursor.
We have a query that doesn't return a huge number of rows (~5K-10K), but the dataset is very wide, as it's composed of 1400+ columns. Running the SQL query itself in SQL Developer returns results instantaneously. However, calling a procedure that opens a cursor for the same query takes 5+ minutes to finish.
CREATE OR REPLACE PROCEDURE PROCNAME (RESULTS OUT SYS_REFCURSOR)
AS
BEGIN
  OPEN RESULTS FOR
    <SELECT_query_with_1400+_columns>
  ...
END;
After doing some debugging to get to the root cause of the slowness, I'm leaning towards the cursor returning one row at a time, very slowly. I can actually see this in real time by converting the proc code into a PL/SQL block and using DBMS_SQL.return_result(RESULTS) after the SELECT query. When running this, I can see each row show up in the Script Output window in SQL Developer one at a time. If this is exactly how the cursor returns the data to the calling application, then I can definitely see how it is the bottleneck, as it could take 5-10 minutes to finish returning all 5K-10K rows. If I remove columns from the SELECT query, the cursor displays all the rows much faster, so the large number of columns does seem to be an issue when using a cursor.
Knowing that running the SQL query by itself returns instant results, how could I get this same performance out of a cursor? It doesn't seem like it's possible. Is the answer putting the embedded SQL in the application code and not using a procedure/cursor to return data in this scenario? We are using Oracle 12c in our environment.
Edit: Just want to address how I am testing performance using the regular SELECT query vs the PL/SQL block with cursor method:
SELECT (takes ~27 seconds to return ~6K rows):
SELECT <1400+_columns>
FROM <table_name>;
PL/SQL with cursor (takes ~5-10 minutes to return ~6K rows):
DECLARE
  RESULTS SYS_REFCURSOR;
BEGIN
  OPEN RESULTS FOR
    SELECT <1400+_columns>
    FROM <table_name>;
  DBMS_SQL.return_result(RESULTS);
END;
Some of the comments reference what happens in the console application once all the data is returned, but I am only speaking about the performance of the two methods described above within Oracle SQL Developer. Hope this helps clarify the point I'm trying to convey.
You can run a SQL Monitor report for the two executions of the SQL; that will show you exactly where the time is being spent. I would also consider running the two approaches in separate snapshot intervals and checking into the output from an AWR Differences report and ADDM Compare Report; you'd probably be surprised at the amazing detail these comparison reports provide.
Also, even though more than 255 columns in a table is a "no-no" according to Oracle (it fragments each record across more than one database block, increasing the I/O time needed to retrieve the results), I suspect the difference between the two approaches is not an I/O problem, since in straight SQL you report fetching all the results quickly. I suspect more of a memory problem. As you probably know, PL/SQL code uses the Program Global Area (PGA), so I would check the parameter pga_aggregate_target and bump it up to, say, 5 GB (just guessing). An ADDM report run for the interval when the code ran will tell you if the advisor recommends a change to that parameter.
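For reference, a text-mode SQL Monitor report for a given statement can be pulled with DBMS_SQLTUNE; the sql_id below is a placeholder for whatever V$SQL or V$SQL_MONITOR shows for your query:

-- Pull a SQL Monitor report for one statement.
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         sql_id       => 'your_sql_id',  -- placeholder: take it from V$SQL / V$SQL_MONITOR
         type         => 'TEXT',
         report_level => 'ALL')
FROM dual;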

PL/SQL - retrieve output

Is there a way to retrieve output from PL/SQL continuously, rather than waiting until the stored procedure completes its execution? By continuously, I mean as it executes each EXECUTE IMMEDIATE.
Is there any other mechanism to retrieve PL/SQL output?
As per the Oracle docs:
Output that you create using PUT or PUT_LINE is buffered in the SGA. The output cannot be retrieved until the PL/SQL program unit from which it was buffered returns to its caller. So, for example, Enterprise Manager or SQL*Plus do not display DBMS_OUTPUT messages until the PL/SQL program completes.
As far as I know there is a way, but not with DBMS_OUTPUT.PUT_LINE. The technique I use is:
create a log table which will accept the values you'd normally display using DBMS_OUTPUT.PUT_LINE. The columns I use are:
ID (a sequence, to be able to sort the data)
Date (to know what happened when; it might not be enough for sorting purposes, because operations that take a very short time to finish might have the same timestamp)
Message (a VARCHAR2 column, large enough to accept the whole information)
create a logging procedure which inserts values into that table. It should be an autonomous transaction, so that you can COMMIT within it (and make the data visible to other sessions) without affecting the main transaction.
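A minimal sketch of that pair (all object names are illustrative):

-- Log table and sequence.
CREATE SEQUENCE log_seq;

CREATE TABLE log_table (
  id      NUMBER         PRIMARY KEY,
  datum   TIMESTAMP      DEFAULT SYSTIMESTAMP,
  message VARCHAR2(4000)
);

-- Autonomous transaction: the COMMIT here is independent of the
-- caller's transaction, so other sessions see the rows immediately
-- while the main procedure is still running.
CREATE OR REPLACE PROCEDURE log_message (p_message IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO log_table (id, datum, message)
  VALUES (log_seq.NEXTVAL, SYSTIMESTAMP, p_message);
  COMMIT;
END log_message;
/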
Doing so, you'd
start your PL/SQL procedure
call the logging procedure whenever appropriate (basically, where you'd put the DBMS_OUTPUT.PUT_LINE call)
in another session, periodically query the log table as select * from log_table order by ID desc
Additionally, you could write a simple Apex application with one report page which selects from the logging table and refreshes periodically (for example, every 10 seconds or so), and use it to watch the main PL/SQL procedure's execution.
The approach that Littlefoot has provided is what I normally use as well.
However, there is another approach that you can try for a specific use case. Let's say you have a long-running batch job (like a payroll process for example). You do not wish to be tied down in front of the screen monitoring the progress. But you want to know as soon as the processing of any of the rows of data hits an error so that you can take action or inform a relevant team. In this case, you could add code to send out emails with all the information from the database as soon as the processing of a row hits an error (or meets any condition you specify).
You can do this using the functions and procedures provided in the UTL_MAIL package (see the UTL_MAIL documentation from Oracle).
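A sketch of such a call (UTL_MAIL must be installed and the SMTP_OUT_SERVER parameter set; the addresses are placeholders):

-- Inside the batch job's row-processing loop:
BEGIN
  -- ... process one row ...
  NULL;
EXCEPTION
  WHEN OTHERS THEN
    UTL_MAIL.SEND(
      sender     => 'batch@example.com',        -- placeholder
      recipients => 'oncall.team@example.com',  -- placeholder
      subject    => 'Batch row failed',
      message    => 'Row processing failed with: ' || SQLERRM);
END;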
For monitoring progress without the overhead of logging to tables and autonomous transactions, I use:
DBMS_APPLICATION_INFO.SET_CLIENT_INFO( TO_CHAR(SYSDATE, 'HH24:MI:SS') || ' On step A' );
and then monitor v$session.client_info for your session. It's all in memory and won't persist, of course, but it's a quick, easy, near-zero-cost way of posting progress.
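The monitoring side then looks like this (run from a second session):

-- Watch the progress label posted by the running procedure.
SELECT sid, client_info
FROM   v$session
WHERE  client_info IS NOT NULL;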
Another option (on Linux/UNIX) that I like for centralised logging that is persistent, and that again avoids logging in the database, is interfacing to syslog and having Splunk or similar pick the messages up. If you have Splunk, this makes the monitoring viewable without having to connect to and query the database directly. See this post for how to do it:
https://community.oracle.com/thread/2343125

Running SQL within same session as my SQLDeveloper debugger

This seems an obvious requirement/use-case to me, but I haven't found anything online.
I'm debugging a PL/SQL Stored Proc which stores data in pseudo-temporary tables along the way (they are just regular tables, whose content is wiped at the end of the transaction). I'd like to inspect these values as I go. However, there seems to be no way to run arbitrary SQL within the same session that is debugging the stored proc. If I try select * from temp_..., I get no rows back and I can see that I have more than one connection open to the database.
Is there a way to do this?
I doubt there is a way to do exactly what you asked. How about either a) committing the rows you're working with, querying them from another session during debugging, and truncating your table when finished, or b) adding a debug parameter (defaulting to false) to your stored proc and outputting what you wish when it is set to true?
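For option b), a minimal sketch, with a hypothetical procedure and table name:

CREATE OR REPLACE PROCEDURE process_data (
  p_debug IN BOOLEAN DEFAULT FALSE  -- defaults to false, so normal runs are unaffected
) AS
  l_cnt PLS_INTEGER;
BEGIN
  -- ... populate the pseudo-temporary table ...
  IF p_debug THEN
    SELECT COUNT(*) INTO l_cnt FROM temp_work;  -- hypothetical table
    DBMS_OUTPUT.PUT_LINE('temp_work rows: ' || l_cnt);
  END IF;
END process_data;
/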
There is a setting in PL/SQL Developer called Session Mode.
Go to Tools -> Preferences -> Connection -> Session Mode, then select 'Single session'.
When you switch to single-session mode, your PL/SQL procedure and the other connections on behalf of one user will reside within a single session, so you will be able to execute SQL during debugging and inspect all the data you want.

OracleDatareader seems to execute an update statement

I am using Oracle client 11.2.0, DLL version 4.112.3.0.
We have a page in our application where people can enter a SQL statement and retrieve the results; basically we do an OracleCommand.ExecuteReader.
Recently one of my team members entered an UPDATE statement as a test, and it actually performed an update on a record!
Has anyone else encountered this?
Regards
Sid.
It is a normal (albeit a bit unsettling) behavior. ExecuteReader is expected to execute the sql command provided as CommandText and build a DbDataReader that you use to loop over the results.
Whether the command returns any rows to read is not something the reader should prevent in any case, so it is not expected to check that your command is really a SELECT statement.
Think, for example, of passing a stored procedure name, or of having multiple SQL batches to execute (an INSERT followed by a SELECT).
I think the biggest problem here is that you allow an arbitrary SQL command typed by your users to reach the database engine. That is a very big security hole. You should, at least, run some analysis on the query text before submitting it to the database engine.
I agree with Steve. Your reader will execute any command, and might get a bit confused if it's not a select and doesn't return a result set.
To prevent people from modifying anything, create a new user and grant select only (no update, no delete, no insert) on your tables to that user (grant select on tablename to seconduser). Then log in as seconduser and create synonyms for your tables (create synonym tablename for realowner.tablename). Have your application connect to the DB as seconduser. This should prevent people from "hacking" your site. To be on the safe side, grant that second user no permissions beyond create session, to prevent him from creating tables, dropping your views, and similar stuff (I'd guess your ExecuteReader won't allow DDL, but test it to make sure).
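The setup sketched above looks roughly like this (user, password, and table names are illustrative):

-- As a DBA:
CREATE USER seconduser IDENTIFIED BY some_password;
-- CREATE SYNONYM is needed for the step below; grant nothing else.
GRANT CREATE SESSION, CREATE SYNONYM TO seconduser;
GRANT SELECT ON realowner.tablename TO seconduser;

-- Connected as seconduser:
CREATE SYNONYM tablename FOR realowner.tablename;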
