How to generate an explain plan for an entire stored procedure - Oracle

I usually generate explain plans using the following in sqlplus:
SET AUTOTRACE ON
SET TIMING ON
SET TRIMSPOOL ON
SET LINES 200
SPOOL filename.txt
SET AUTOTRACE TRACEONLY;
{query goes here}
SPOOL OFF
SET AUTOTRACE OFF
But what if I want to generate an explain plan for a stored procedure?
Is there a way to generate an explain plan for the entire stored procedure? The SP has no input/output parameters.

What you are generating is correctly called an "execution plan". "Explain plan" is a command used to generate and view an execution plan, much as AUTOTRACE TRACEONLY does in your example.
By definition, an execution plan is for a single SQL statement. A PL/SQL block does not have an execution plan. If it contains one or more SQL statements, then each of those will have an execution plan.
One option is to manually extract the SQL statements from the PL/SQL code and use the process you've already shown.
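For example, with a statement lifted out of the procedure (table name and predicate here are placeholders), the EXPLAIN PLAN command mentioned above works like this:

EXPLAIN PLAN FOR
  SELECT * FROM your_table WHERE your_column = 'some_value';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);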
Another option is to activate SQL tracing and then run the procedure. This will produce a trace file on the server that contains the execution plans for all statements executed in the session. The trace is in fairly raw form, so it is generally easiest to format it using Oracle's TKPROF tool; various third-party tools can also process these trace files.
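A minimal sketch of the tracing approach, assuming DBMS_MONITOR is available (10g and later) and using a placeholder procedure name:

ALTER SESSION SET tracefile_identifier = 'proc_trace';
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
EXEC your_procedure;
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;
-- then format the trace file on the server, for example:
-- tkprof <instance>_ora_<pid>.trc proc_report.txt sys=no

The trace file name varies by instance; look in the diagnostic trace directory for the file containing your tracefile_identifier.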

Hi, I have done the following for the stored procedure:
SET AUTOTRACE ON
SET TIMING ON
SET TRIMSPOOL ON
SET LINES 200
SPOOL filename.txt
SET AUTOTRACE TRACEONLY;
-- your stored procedure call/script goes here
SPOOL OFF
SET AUTOTRACE OFF
And got the below statistics:
Statistics
-----------------------------------------------------------
6 CPU used by this session
8 CPU used when call started
53 DB time
6 Requests to/from client
188416 cell physical IO interconnect bytes
237 consistent gets
112 consistent gets - examination
237 consistent gets from cache
110 consistent gets from cache (fastpath)
2043 db block gets
1 db block gets direct
2042 db block gets from cache
567 db block gets from cache (fastpath)
27 enqueue releases
27 enqueue requests
4 messages sent
31 non-idle wait count
19 non-idle wait time
44 opened cursors cumulative
2 opened cursors current
22 physical read total IO requests
180224 physical read total bytes
1 physical write total IO requests
8192 physical write total bytes
1 pinned cursors current
461 recursive calls
4 recursive cpu usage
2280 session logical reads
1572864 session pga memory
19 user I/O wait time
9 user calls
1 user commits
No Errors.
Autotrace Disabled

Related

What are the SQL Developer autotrace equivalents of the TKPROF and v$statname columns?

When I get the TKPROF output, I can see the parse, execute, fetch, disk, etc.
call     count  cpu   elapsed  disk  query  current  rows
-------  -----  ----  -------  ----  -----  -------  -----
Parse        1  0.16     0.29     3     13        0      0
Execute      1  0.00     0.00     0      0        0      0
Fetch        1  0.03     0.26     2      2        4     14
Misses in library cache during parse: 1
But getting the TKPROF output is not as fast as getting the autotrace from SQL Developer.
So, are there any equivalent columns corresponding to these TKPROF columns? They could be in the execution plan output or in the V$STATNAME area below. If yes, which ones are they? When I check, I see a couple of different parse statistics, and I don't see anything like a fetch in v$statname.
And if there are equivalents, will the values from TKPROF and AUTOTRACE be equal or different?
Thanks in advance.
There are many different ways to get execution plans and plan performance information in Oracle, and though they all use similar information from the wait interface and the internal instrumentation, it is not always easy to get an exact match between the numbers from the different tools. Usually this is not a big issue, since all the results paint a similar picture. Just to clarify some points:
tkprof is a tool to render a trace file generated by SQL trace (which can be created in different ways): you have to create the trace file and then render it, and this may be more complicated than using other built-in strategies. On the plus side, SQL trace provides resource and timing information for all the detail steps of the execution.
autotrace uses internal statistics from the wait interface, but you have to consider the effects of fetch size and data transport to get the same picture your application access would create. With autotrace you only get information about the resource usage and timing of the complete operation.
if you have the necessary license, you can use SQL Monitor to get very detailed information about the execution steps and their impact on the performance of the complete operation.
and finally, you can create an execution plan with rowsource statistics by using a hint (gather_plan_statistics) or a corresponding session parameter (statistics_level). To display this kind of plan you have to call dbms_xplan.display_cursor with a fitting format option.
Chris Saxon gives a useful overview for these options in https://blogs.oracle.com/sql/how-to-create-an-execution-plan.
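A minimal sketch of the last option (the query itself is a placeholder):

SELECT /*+ gather_plan_statistics */ *
  FROM your_table
 WHERE your_column = 'some_value';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'ALLSTATS LAST'));

The 'ALLSTATS LAST' format shows estimated versus actual row counts for the most recent execution, which is usually the first thing to compare when a plan misbehaves.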

Oracle database performance issue

We are trying to pull data from an oracle database but seem to be getting very low performance.
We have a table of around 10M rows and we have an index via which we are pulling around 1.3k rows {select * from tab where indexed_field = 'value'} (in a simplified form).
SQuirreL reports the query taking "execution: 0.182s, building output: 28.921s". The returned data occupies something like 340kB (eg, when copied/pasted into a text file).
Sometimes the building output phase takes much longer (>5 minutes), particularly the first time a query is run. Repeating it seems to run much faster - e.g. the 29s value above. Is this likely to just be the result of a transient overload on the database, or might it be due to buffering of the repeated data?
Is a second per 50 rows (13kB) a reasonable figure or is this unexpectedly large? (This is unlikely to be a network issue.)
Is it possible that the DBMS is failing to leverage the fact that the data could be grouped physically (by having the physical order the same as the index order) and is doing a separate disk read per row, and if so, how can it be persuaded to be more efficient?
There isn't much odd about the data - 22 columns per row, mostly defined as varchar2(250) though usually containing a few tens of characters. I'm not sure how big the hardware running Oracle is, but it lives in a datacentre so is probably not too puny.
Any thoughts gratefully received.
kfinity> Have you tried setting your fetch size larger, like 500 or so?
That's the one! Speeds it up by an order of magnitude. 1.3k rows in 2.5s, 9.5k rows in 19s. Thanks for that suggestion.
BTW, doing select 1 (instead of select *) only provides a speedup of about 10%, which I guess suggests that disk access wasn't the bottleneck.
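(As an aside, and assuming the client fetches over JDBC as SQuirreL does: the analogous knob in SQL*Plus is ARRAYSIZE, which defaults to 15 rows per fetch.)

SET ARRAYSIZE 500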
others>
The fetch plan is:
Id  Operation                    Options  Object   Mode      Cost  Bytes  Cardinality
 0  SELECT STATEMENT                               ALL_ROWS     6  17544           86
 1  TABLE ACCESS BY INDEX ROWID  BATCHED  TAB      ANALYZED     6  17544           86
 2  INDEX RANGE SCAN                      TAB_IDX  ANALYZED     3                  86
which, with my limited understanding, looks OK.
The "sho parameter" things didn't work (SQL errors), apart from the select which gave:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE 12.1.0.2.0 Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production
I guess the only outstanding question is "what's the downside of setting the fetch size to a large value?". Given that we will always end up reading the entire result set (unless there is an exception) my guess would be "not much". Is that right?
Anyway, many thanks to those who responded and a big thanks for the solution.
1.3k rows out of a table of 10M rows is not too big for Oracle.
The reason the second run is faster than the first is that Oracle loads the data into RAM (the buffer cache) on the first query and just reads it from RAM on the second.
Are you sure that the index is being used well? Maybe you can run an explain plan and show us the result?
A few immediate actions to take (a sketch for the first two follows below):
Rebuild the index on the table.
Gather the stats on the table.
Execute the following before rerunning the query, to extract the execution plan:
sql> set autotrace traceonly ;
turn this off by:
sql> set autotrace off ;
Also, provide the result of the following:
sql> sho parameter SGA
sql> sho parameter cursor
sql> select banner from v$version;
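For the first two actions, a minimal sketch (schema, table, and index names are placeholders):

ALTER INDEX your_schema.tab_idx REBUILD;

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'YOUR_SCHEMA',
    tabname => 'TAB',
    cascade => TRUE);  -- cascade also refreshes index statistics
END;
/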
Abhi

Reading large result sets in jdbc

I am using JDBC and have a Stored Procedure in DB2 which returns a huge amount of data - around 6000 rows. Because of this huge volume, network transfer (from db server to the application server) is taking time.
What I am thinking is to use multiple java threads to invoke the Stored Procedure with each thread returning different blocks of data.
Thread1 - row 1 - row 1000;
Thread2 - row 1001 - row 2000;
Thread3 - row 2001 - row 3000 and so on
All these threads can be run in parallel and I can aggregate the results of each thread.
Is there any better way to handle this problem using JDBC or any other means?
Depending upon your JDBC driver, setting the fetch size may help. With the default fetch size of 0, your driver may be reading all rows into memory at once which could be the cause of the slowness.
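A minimal JDBC sketch of that suggestion (the connection setup and procedure name are placeholders, not from the original post):

// conn is an existing java.sql.Connection to DB2;
// requires java.sql.CallableStatement and java.sql.ResultSet
try (CallableStatement cs = conn.prepareCall("{call YOUR_PROC()}")) {
    cs.setFetchSize(500);              // hint: stream ~500 rows per network round trip
    try (ResultSet rs = cs.executeQuery()) {
        rs.setFetchSize(500);          // can also be set on the result set itself
        while (rs.next()) {
            // process each row here
        }
    }
}

With a non-zero fetch size the driver streams the result in blocks instead of materializing everything at once, which often removes the need for manual threading.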

Oracle Data Pump Export (expdp) locks table (or something similar)

I must export data from a partitioned table with a global index that must be online all the time, but I am having trouble doing that.
For data export I am using Data Pump Export - expdp and I am exporting only one partition. The oldest one, not the active one.
My expdp command exports correct data and it looks like this:
expdp user/pass@SID DIRECTORY=EXP_DIR
DUMPFILE=part23.dmp TABLES=SCHEMA_NAME.TABLE_NAME:TABLE_PARTITION_23
Application that uses database has a connection timeout of 10 seconds. This parameter can't be changed. If INSERT queries are not finished within 10 seconds, data is written to a backup file.
My problem is that, during the export process, which lasts a few minutes, some data ends up in the backup file and not in the database. I want to know why, and avoid it.
Partitions are organized weekly, and I am keeping 4 partitions active (last 4 weeks). Every partition is up to 3 GB.
I am using Oracle 11.2
Are you licensed to use the AWR? If so, do you have an AWR report for the snapshot when the timeouts occurred?
Oracle readers don't block writers and there would be no reason for an export process to lock anything that would impact new inserts.
Is this a single INSERT operation that has a timeout of 10 seconds (i.e. you are inserting a large number of rows in a single INSERT statement)? Or is this a batch of individual inserts such that some of the inserts can succeed in the 10 second window and some can fail? You say that "some data ends up in the backup file" but I'm not sure which of these scenarios is more accurate.
During normal operations, how close are you to the 10 second time-out?
Is it possible that the system is I/O bound and that doing the export increases the load on the I/O system causing all operations to be slower? If you've got an I/O bottleneck and you add an export process that has to read a 3 GB partition and write that data to disk (presumably also on the database server), that could certainly cause a general slowdown. If you're reasonably close to the 10 second time-out already, that could certainly push you over the edge.
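If you can query the v$ views, one quick way to gauge I/O pressure is to snapshot the system-wide I/O wait events before and during the export (the LIKE filter here is just an illustration):

SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event LIKE 'db file%'
 ORDER BY time_waited DESC;

If the time_waited figures grow sharply while expdp runs, that supports the I/O-bound theory.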

How to know the lifetime of an explicit cursor?

After how much time does Oracle itself close an explicitly defined cursor?
Oracle won't close your cursor unless you explicitly ask for it. You can open a cursor on an inactive table, wait for 24 hours, then fetch rows from the cursor.
On active tables (tables that may be updated/deleted/inserted), you may run into ORA-1555 after a while (the table has been modified and the information to reconstruct old versions of the blocks has been overwritten). If your UNDO tablespace is set as AUTOEXTEND, you can safely fetch from any cursor opened less than UNDO_RETENTION seconds ago:
SQL> show parameter undo_retention
NAME                            TYPE        VALUE
------------------------------- ----------- ------
undo_retention                  integer     900
On my DB I can safely fetch from cursors for 900 seconds (15 mins). This is a lower bound: Oracle will keep sufficient data to reconstruct old versions of the blocks for at least 15 minutes.
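If you need cursors to remain fetchable for longer, the parameter can be raised dynamically (the value is in seconds; this requires ALTER SYSTEM privilege and enough UNDO space):

ALTER SYSTEM SET undo_retention = 3600;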
