We are using Oracle 12.1 at work. We have issues with the Oracle session timing out after the 15-minute mark even after executing SQL statements (select/update/insert/delete). The statements are not long-running (they usually execute in less than 10 seconds). It does not do this consistently, and it does not seem to matter which tool we use (SQL*Plus, PL/SQL, Advanced Query Tool, PowerBuilder).
Please note that the timeout is set to 15 mins and we cannot change this due to federal regulations. Any hints or suggestions would be greatly appreciated.
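For what it's worth, this is roughly how we confirm that the limit hitting us really is the profile's IDLE_TIME and not something else like SQLNET.EXPIRE_TIME or a firewall (just a sketch; it assumes SELECT access on DBA_USERS and DBA_PROFILES, and APP_USER is a placeholder for our schema):

-- Sketch only: confirm where the 15-minute limit comes from.
-- Assumes SELECT access on DBA_USERS and DBA_PROFILES; APP_USER is a placeholder.
SELECT u.username,
       u.profile,
       p.resource_name,
       p.limit
FROM   dba_users    u
JOIN   dba_profiles p ON p.profile = u.profile
WHERE  u.username = 'APP_USER'
AND    p.resource_name IN ('IDLE_TIME', 'CONNECT_TIME');
-- IDLE_TIME and CONNECT_TIME are expressed in minutes.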
Related
I have a SQL query that fetches roughly 200 columns from multiple tables and normally runs in a matter of minutes.
A Java program kicked off by cron runs the SQL every 4 hours, but occasionally it hangs forever (it never fetches any data; neither updates nor inserts are involved).
Here are some outputs from V$SESSION.
STATUS: ACTIVE
ROW_WAIT_OBJ#: 22392 ←not changing
ROW_WAIT_FILE#: 6 ←not changing
ROW_WAIT_BLOCK#: 8896642 ←not changing
ROW_WAIT_ROW#: 0 ←not changing
LAST_CALL_ET: 5632 ←keeps increasing
★No other heavy SQL queries are running at the same time
What could be the cause of this and what should I look into to solve it?
You can use TKPROF or a SQL profiler; those reports can help you. We cannot really answer your question as it stands.
If you attach your tuning reports, we can help you, because many things can cause performance problems and a comprehensive study is needed to understand them.
Follow this link:
https://docs.oracle.com/cd/E11882_01/server.112/e41573/perf_overview.htm
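As a first diagnostic step you can also check what the hanging session is actually waiting on and whether another session is blocking it. A minimal sketch (it assumes SELECT access on V$SESSION and DBA_OBJECTS, and 1234 is a placeholder for the hanging session's SID):

-- Sketch: map ROW_WAIT_OBJ# to an object and show the current wait event and blocker.
-- Assumes SELECT access on V$SESSION and DBA_OBJECTS; 1234 is a placeholder SID.
SELECT s.sid,
       s.status,
       s.event,
       s.blocking_session,
       o.owner,
       o.object_name,
       o.object_type
FROM   v$session s
LEFT JOIN dba_objects o ON o.object_id = s.row_wait_obj#
WHERE  s.sid = 1234;

If BLOCKING_SESSION is populated, the session is stuck behind a lock rather than doing work; if EVENT shows an I/O wait and LAST_CALL_ET keeps climbing, the query is still running and a trace plus TKPROF is the right next step.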
This may have been asked numerous times, but none of the answers have helped me so far.
Here's some history:
QueryTimeOut: 120 secs
Database: DB2
App Server: JBoss
Framework: Struts 2
I have one query which fetches around a million records. Yes, we need to fetch them all at once for caching purposes; sadly, the design can't be changed.
Now, we have two servers, Primary and DR. On the DR server the query executes within 30 seconds, so there is no timeout issue there. But on the Primary server it is timing out for some unknown reason. Sometimes it times out in rs.next() and sometimes in pstmt.executeQuery().
All DB indexes, connection pools, etc. are in place. The explain plan shows there are no full table scans either.
My Analysis:
Since the query itself is not the issue here, could the problem be network delay?
How can I get to the root cause of this timeout? How can I make sure there is no connection leakage? (All connections are closed properly.)
Is there any way to recover from the timeout and execute the query again with an increased timeout value, e.g. pstmt.setQueryTimeout(600)? Note that this has had no effect whatsoever; I don't know why!
Appreciate any inputs.
Thank You!
I wrote an application that queries Oracle's v$sqlarea and dumps the data into my own database for further analysis. I noticed something very strange: sometimes the data in v$sqlarea shows fewer executions than before. I'm pretty sure the Oracle cache was not cleared (the first load time of the query is still the same, and since I query Oracle every minute I don't believe the query was executed 100k+ times in that one minute).
Can anybody explain how this is possible?
OK, so I asked this on the Oracle forum as well, and I believe the correct answer is this one:
https://community.oracle.com/message/12980175#12980175
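In case it helps anyone doing similar monitoring, this is roughly the shape of the snapshot I capture each minute now, so a drop in EXECUTIONS can be correlated with cursors being reloaded or invalidated (a sketch only; it assumes SELECT access on V$SQLAREA, and the SQL_ID is a placeholder):

-- Sketch: snapshot the V$SQLAREA columns that help explain a drop in EXECUTIONS.
-- Assumes SELECT access on V$SQLAREA; 'abcd1234efgh5' is a placeholder SQL_ID.
SELECT sql_id,
       executions,
       version_count,
       loaded_versions,
       loads,
       invalidations,
       first_load_time,
       last_load_time
FROM   v$sqlarea
WHERE  sql_id = 'abcd1234efgh5';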
Is the explain plan the only thing to look at when tuning a large SQL statement?
Because when I press Ctrl+E in TOAD for Oracle (which generates the explain plan), it takes several seconds. Does TOAD do anything more than generate the explain plan, or does the parse phase really take 2-3 seconds for that specific SQL statement?
I really can't see how to optimize the SQL statement any further by looking at the explain plan. So I thought maybe there is something going on BEFORE the plan is executed?
thanks in advance
Martin (newbie oracle tuning expert)
The explain plan doesn't tell you everything: it only gives you Oracle's estimate of the cost of your query.
To get the real costs of your query, you have to actually execute it and check the performance afterwards (e.g. using tkprof).
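If you don't want to set up tracing, a lightweight alternative is to run the statement with rowsource statistics enabled and then pull the actual plan from the cursor cache. A minimal sketch (it assumes a reasonably recent Oracle version where DBMS_XPLAN.DISPLAY_CURSOR is available; the table, column and bind variable are placeholders for the statement being tuned, and both steps must run in the same session):

-- Sketch: compare estimated vs. actual rowsource statistics.
-- The table, column and bind variable below are placeholders.
SELECT /*+ GATHER_PLAN_STATISTICS */ t.*
FROM   some_table t
WHERE  t.some_column = :some_bind;

-- Then, in the same session, show the last plan with actual rows, buffers and timings:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

Comparing the E-Rows and A-Rows columns in that output usually tells you much more than the bare explain plan, because it reflects what actually happened rather than the optimizer's estimates.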
I'd recommend checking out Asktom, e.g.
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:8764517459743
and getting a good book on Oracle performance tuning (e.g. "Effective Oracle by Design" by Tom Kyte).
I am trying to track the performance of some procedures that run too slowly (and seem to keep getting slower). I am using v$session_longops to track how much work has been done, and I have a query (sofar/((v$session_longops.last_update_time - v$session_longops.start_time)*24*60*60)) that tells me the rate at which work is being done.
What I'd like to be able to do is capture the rate at which work is being done and how it changes over time. Right now, I just re-execute the query manually, and then copy/paste to Excel. Not very optimal, especially when the phone rings or something else happens to interrupt my sampling frequency.
Is there a way to have a script in SQL*Plus run a query every n seconds, spool the results to a file, and then continue doing this until the job ends?
(Oracle 10g)
Tanel Poder's snapper script does a wonderful job of actively monitoring performance.
It has parameters for
<seconds_in_snap> - the number of seconds between taking snapshots
<snapshot_count> - the number of snapshots to take ( maximum value is power(2,31)-1 )
It uses PL/SQL and a call to DBMS_LOCK.SLEEP
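If you would rather roll your own than install snapper, a minimal sketch of the same idea is below: it loops a fixed number of times, recomputes the rate from your question, prints a line per sample and sleeps in between. It assumes EXECUTE privilege on DBMS_LOCK, that you run it from SQL*Plus with a SPOOL file open, and that 1234 is a placeholder for the SID you are tracking:

SET SERVEROUTPUT ON SIZE 1000000
SPOOL longops_rate.log

-- Sketch: sample v$session_longops every few seconds from a PL/SQL loop.
-- Assumes EXECUTE on DBMS_LOCK; 1234 is a placeholder SID.
DECLARE
  l_samples   PLS_INTEGER := 60;   -- number of samples to take
  l_sleep_sec PLS_INTEGER := 10;   -- seconds between samples
BEGIN
  FOR i IN 1 .. l_samples LOOP
    FOR r IN (SELECT sid, sofar, totalwork,
                     sofar / NULLIF((last_update_time - start_time) * 24 * 60 * 60, 0) AS rate_per_sec
              FROM   v$session_longops
              WHERE  sid = 1234
              AND    sofar < totalwork)
    LOOP
      DBMS_OUTPUT.PUT_LINE(TO_CHAR(SYSDATE, 'HH24:MI:SS') ||
                           '  sofar=' || r.sofar ||
                           '  rate/sec=' || ROUND(r.rate_per_sec, 2));
    END LOOP;
    DBMS_LOCK.SLEEP(l_sleep_sec);
  END LOOP;
END;
/

SPOOL OFF

One caveat: DBMS_OUTPUT is only flushed when the block finishes, so the spool file fills up at the end rather than in real time; if you need to watch it live, insert each sample into a small logging table instead.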
If you can live with running PL/SQL instead of a SQL*Plus script, you could consider using the Oracle Scheduler. See chapters 26, 27, and 28 of the Oracle Database Administrator's Guide.