I want to trace SQL transactions in Oracle. We have a software tool running against an Oracle 10g database. The program is slowing down somewhere and I want to find that part. I could not see anything useful in the event log. What do you think I can do? Is there any third-party software?
Thanks.
Oracle offers tracing tools. See the documentation: "Using Application Tracing Tools" in the Performance Tuning Guide.
TKPROF is the reliable one; it will show you exactly what is going on when you use the application, along with timings, so once you examine the result you'll be able to take further steps and tune your code.
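For reference, a minimal sketch of producing such a trace with the DBMS_MONITOR package (available from 10g; requires suitable privileges — the tracefile identifier, trace file name, and tkprof options below are placeholders):

```sql
-- Tag the trace file so it is easy to find in user_dump_dest
ALTER SESSION SET tracefile_identifier = 'slow_app';

-- Enable extended SQL trace (waits and binds) for the current session
BEGIN
  DBMS_MONITOR.session_trace_enable(waits => TRUE, binds => TRUE);
END;
/

-- ... run the slow part of the application here ...

BEGIN
  DBMS_MONITOR.session_trace_disable;
END;
/

-- Then, on the database server, format the trace file with tkprof, e.g.:
--   tkprof orcl_ora_1234_slow_app.trc report.txt sys=no sort=exeela
```

The `sort=exeela` option puts the statements with the highest elapsed execution time at the top of the report, which is usually where the slowdown hides.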
Sample output (just to show what I'm referring to):
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1     29.60      60.68     266984      43776     131172       28144
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2     29.60      60.68     266984      43776     131172       28144
                              ^^^^^
                              This!
So, if you find that something takes a minute (60 seconds, right?) to complete, that might be suspicious.
Related
I am using Oracle Database 19c. I need to identify the historical details of a blocking session that had blocked almost 50 sessions. In the ASH report I can find SID 1258 in the BLOCKING_SESSION column of dba_hist_active_sess_history, but not in the SID column, which is pretty unusual. I could not find any information about the blocking session's activity in the hang analyze report either. Is there any way to drill down into a session's activity other than ASH?
dba_hist_active_sess_history output
 SAMPLE_ID SAMPLE TIME SID STATE    EVENT                     SQL_ID         BLK SID                START SQL_EXEC_ID
---------- ----------- --- -------- ------------------------- -------------- ---------------------- ----- -----------
 135345711 21 11:25:11 217 WAITING  enq: TX - row lock conten shd23fhjdgjyhu 1258 <<== blocking sid 21:04    19783669
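For context, this is roughly how I am looking for the blocker's own samples in the history view (the time window binds are placeholders):

```sql
SELECT sample_time, session_id, session_serial#, session_state,
       event, sql_id, blocking_session
FROM   dba_hist_active_sess_history
WHERE  session_id = 1258
AND    sample_time BETWEEN :start_time AND :end_time
ORDER  BY sample_time;
```

This returns no rows for SID 1258, even though other sessions report it as their blocker.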
Hang analyze output:
    p1: 'driver id'=0x54435000
    p2: '#bytes'=0x1
    time in wait: 2 min 16 sec
    timeout after: never
    wait id: 1445
    blocking: 1 session
    current sql_id: 3598363420
Thanks in advance!
When I get the TKPROF output, I can see the parse, execute, fetch, disk, etc.
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.16       0.29          3         13          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.03       0.26          2          2          4          14

Misses in library cache during parse: 1
But getting the TKPROF output is not as quick as getting the autotrace output from SQL Developer.
So, are there columns in the autotrace output equivalent to these TKPROF columns? They could be in the execution plan output or in the V$STATNAME statistics below. If yes, which ones are they? When I check, I see a couple of different parse statistics, and I don't see anything like "fetch" in v$statname.
And if there are equivalents, will the values from TKPROF and AUTOTRACE be equal or different?
Thanks in advance.
There are many different ways to get execution plans and plan performance information in Oracle, and though they all use similar information from the wait interface and the internal instrumentation, it is not always easy to get an exact match between the numbers from the different tools. Usually this is not a big issue, since all the results paint a similar picture. Just to clarify some points:
tkprof is a tool that renders a trace file generated by SQL trace (which can be created in different ways): you have to create the trace file first and then render it, and this may be more complicated than using the other built-in strategies. On the plus side, SQL trace provides resource and timing information for all the detailed steps of the execution.
autotrace uses internal statistics from the wait interface, but you have to consider the effects of fetch size and data transport to get the same numbers your application's access pattern would produce. With autotrace you only get information about the resource usage and timing of the complete operation.
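For example, in SQL*Plus or the SQL Developer worksheet, autotrace can be switched on like this (the query is just a placeholder):

```sql
SET AUTOTRACE TRACEONLY STATISTICS

-- placeholder query; the session statistics are printed after it completes
SELECT COUNT(*) FROM employees;

SET AUTOTRACE OFF
```

The TRACEONLY option suppresses the result rows, which matters for comparisons: otherwise the time to render thousands of rows in the client is mixed into the measurement.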
if you have the necessary licence, you can use SQL Monitor to get very detailed information about the execution steps and their impact on the performance of the complete operation.
and finally, you can create an execution plan with rowsource statistics by using a hint (gather_plan_statistics) or a corresponding session parameter (statistics_level). To display this kind of plan you have to call dbms_xplan.display_cursor with a suitable format option.
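A minimal sketch of that last approach (the table and query are placeholders):

```sql
-- Gather rowsource statistics for this one statement
SELECT /*+ gather_plan_statistics */ COUNT(*)
FROM   employees;

-- Display the plan of the last statement run in this session,
-- including actual rows, buffers and timings per plan step
SELECT *
FROM   TABLE(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));
```

The 'ALLSTATS LAST' format shows estimated versus actual row counts side by side, which makes optimizer misestimates easy to spot.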
Chris Saxon gives a useful overview of these options at https://blogs.oracle.com/sql/how-to-create-an-execution-plan.
I have queries that produce output, but the output looks really weird. I can't scroll over to look at the output, and the columns are really wide for values that are less than 20 characters. How do I get it to look normal? This is in Oracle SQL Developer. I searched online, but this doesn't seem to be what I need to do: format output archive. And I don't see how to get to the preferences screen they are referring to either.
This is a portion of the output:
RAK SHEL SLT
---------- ---------- ------------------------------ PRT BRDBND_
---------------------------------------- ---------- DSLAM
-------------------------------------------------- VEND
-------------------------------------------------------------------------------- MODE
-------------------------------------------------------------------------------- PORT_ADDR CARRIER_ID
----------------- ----------------------------------------------------- CIRC_DE SERVICE SHELF_PT_NUM CARD_PART_NUM
---------- ---------- ------------------------- ------------------------- CARD_PT_DESC
-----------------------------------
3317 270812 1179G1 1170F1
As you can see, hopefully, it has really wide columns, and the column headers aren't just at the top of the output; they are scattered throughout the output window. None of the output is more than 20 characters wide. I'm not sure why it's displaying that way.
Update: Plus, it's printing the column headers in the output window over and over again.
Try adding these lines at the beginning of your script. It appears that your SQL Developer settings are what is making this output unreadable.
NOTE: I am using SQL Developer version 18.1.0.095
Edit: added a pagesize command (set to 80 lines) to suppress the constant reprinting of headers.
set sqlformat ansiconsole;
set pagesize 80;
I have been doing some tests comparing the performance of Devart's dotConnect Universal to ODP.NET and Npgsql when accessing an Oracle and a PostgreSQL database respectively, and it's pretty disappointing. In both cases, dotConnect Universal is a lot slower than the "native" providers for .NET.
For example, consider my simple test application (which can be found here with the database schema creation scripts). It counts the number of rows in a table then inserts 10,000 rows into that table using a stored procedure. The code has been written in such a way that the same C# code is being used for each of the 4 tests to eliminate it as the cause of the bad performance. Here are the results:
                                                  Count rows in table         Insert        Insert    Insert
Connection Type (Oracle)                                Duration (ms)   Duration (ms)  Per Row (ms)  Rows/sec
------------------------------------------------  -------------------  -------------  ------------  --------
Oracle.ManagedDataAccess.Client.OracleConnection                   34          6,741         0.674     1,483
Devart.Data.Universal.UniConnection (Oracle)                       69         17,498         1.750       571
------------------------------------------------  -------------------  -------------  ------------  --------
Difference                                                         35         10,757         1.076      -912

                                                  Count rows in table         Insert        Insert    Insert
Connection Type (PostgreSQL)                            Duration (ms)   Duration (ms)  Per Row (ms)  Rows/sec
------------------------------------------------  -------------------  -------------  ------------  --------
Npgsql.NpgsqlConnection                                             8          6,136         0.614     1,630
Devart.Data.Universal.UniConnection (PostgreSQL)                   29         11,187         1.119       894
------------------------------------------------  -------------------  -------------  ------------  --------
Difference                                                         21          5,051         0.505      -736
These are the final results after 3 runs; the table is truncated before each test run.
From a developer's point of view (i.e. IMHO), dotConnect Universal is a brilliant product, allowing me to target multiple databases from a single .NET code base with very little effort. But there always seems to be a downside (no free lunch, it seems), and for dotConnect Universal it is performance. So is this the price you pay for supporting multiple databases, or is there something I am missing?
P.S. I have asked Devart for support on this but no response (yet) after a week.
I have a strange issue with a query running in my JDeveloper ADF web application. It is a simple search form issuing a select statement to an Oracle 10g database. When the search is submitted, the ADF framework (first) runs the query, and (second) runs the same query wrapped in "select count(1) from (...query...)"; the goal here is to obtain the total number of rows and display the "Next 10 results" navigation controls.
So far, so good. The trouble comes from the outrageously poor performance I am getting from the second query (the one with "count(1)" in it). To investigate the issue, I copied/pasted/ran the query in SQL Developer and was surprised to see a much better response.
When comparing the query execution in ADF and SQL Developer, I took all measures to ensure representative environment for both executions:
- freshly restarted database
- same for the OC4J
This way I can be sure that the difference is not related to caching and/or buffering, in both cases the db and the application server were freshly (re)started.
The traces I took for both sessions illustrate the situation:
Query ran in ADF:
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.97       0.97          0          0          0           0
Fetch        1     59.42     152.80      35129    1404149          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3     60.39     153.77      35129    1404149          0           1
Same query in SQL Developer:
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      1.02       1.16          0          0          0           0
Fetch        1      1.04       3.28       4638       4567          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      2.07       4.45       4638       4567          0           1
Thanks in advance for any comments or suggestions!
Ok, I finally found the explanation of this ghastly behaviour. To make the long story short, the answer is in the definition (Tuning parameters) of my ViewObject in JDeveloper. What I was missing were these two important parameters:
FetchMode="FETCH_AS_NEEDED"
FetchSize="10"
Without them, the following happens: ADF runs the main query, binds the variables and fetches the results. Then, in an attempt to estimate the rowcount, it launches the same query enclosed in "select count(1) from (my_query)", but ...(drum roll)... WITHOUT BINDING THE VARIABLES!!! It really beats me what the use of estimating the rowcount is without taking the actual values of the bind variables into account!
Anyway, it's all in the definition of the ViewObject: the following settings needed to be set, in order to get the expected behaviour:
All Rows in Batches of: 10
(checked) As Needed
(unchecked) Fill Last Page of Rows when Paging through Rowset
The execution plan could not help me (it was identical for both ADF and SQL Developer), the difference was only visible in a trace file taken with binds.
So, now my problem is solved - thanks to all for the tips that finally led me to the resolution!
The query with the count is slower because it has to read all the data (in order to count it).
When you run the other query, you are only fetching the first page of data, so the execution (reading from the cursor) can stop after you have your first ten results.
Try loading the 100th page with your first query; it will likely be much slower than the first page.
If selecting a count up front is too expensive, a common trick is to select one row more than you need (11 in your case) to determine whether there is more data. You cannot show a page count, but you can at least show a "next page" button.
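On 10g (before the 12c FETCH FIRST syntax) that trick could be sketched like this; the inner query, ordering, and the :first_row bind are placeholders, and a page holds 10 rows, so 11 are fetched:

```sql
SELECT *
FROM  (SELECT t.*, ROWNUM rn
       FROM   (SELECT * FROM my_table ORDER BY id) t   -- placeholder query
       WHERE  ROWNUM <= :first_row + 10)               -- page of 10, plus 1 extra
WHERE  rn >= :first_row;
```

If 11 rows come back, display the first 10 and enable the "next page" button; if fewer, this is the last page.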
Update: Are you saying the count query is only slow when run through ADF, but fast through SQL Developer?
If it is the same query, I can think of:
- Different settings in ADF vs SQL Developer (have you tried with SQL*Plus?)
- Binding variables of an incorrect type in the slow case
But without the execution plans or the SQL, it is hard to say.
Over the years I've found that "SELECT COUNT..." is often a source of unexpected slowdowns.
If I understand the results posted above, the query takes 153 seconds from JDeveloper, but only about 4.5 seconds from SQL Developer, and you're going to use this query to determine if the "Next 10 Results" control should be displayed.
I don't know that it matters if the runtime is 4.5 seconds or 153 seconds - even the best case seems rather slow for initializing a page. Assume for a moment that you can get the query to respond in 4.5 seconds when submitted from the page - that's still a long time to make a user sit and wait when they're only a mouse-click away from going off to do Something Else. In that same 4.5 seconds the app might be able to fetch enough data to load the page a few times.
I think @Thilo's idea of fetching one more record than is needed to fill the page, to determine whether there is more data available, is a good one. Perhaps this could be adapted to your situation?
Share and enjoy.