Is it possible to show other processes in progress on an Oracle database? Something like Sybase's sp_who?
I suspect you would just want to grab a few columns from V$SESSION and the SQL statement from V$SQL, assuming you want to exclude the background processes that Oracle itself is running:
SELECT sess.process, sess.status, sess.username, sess.schemaname, sql.sql_text
FROM v$session sess,
v$sql sql
WHERE sql.sql_id(+) = sess.sql_id
AND sess.type = 'USER'
The outer join is to handle those sessions that aren't currently active, assuming you want those. You could also get the sql_fulltext column from V$SQL which will have the full SQL statement rather than the first 1000 characters, but that is a CLOB and so likely a bit more complicated to deal with.
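If you do want the full statement without dealing with the CLOB in client code, one option (a sketch; DBMS_LOB.SUBSTR simply returns the first N characters as a VARCHAR2) is:
SELECT sess.process, sess.status, sess.username, sess.schemaname,
       DBMS_LOB.SUBSTR(sql.sql_fulltext, 4000, 1) AS sql_text_4k
FROM v$session sess,
     v$sql sql
WHERE sql.sql_id(+) = sess.sql_id
AND sess.type = 'USER'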
Realistically, you probably want to look at everything that is available in V$SESSION because it's likely that you can get a lot more information than SP_WHO provides.
After looking at sp_who, Oracle does not have that ability per se. Oracle has at least 8 background processes that run the database, such as PMON and SMON.
You can ask the database which queries are running, as that is just a table query. Look at the V$ tables.
Quick Example:
SELECT sid,
opname,
sofar,
totalwork,
units,
elapsed_seconds,
time_remaining
FROM v$session_longops
WHERE sofar != totalwork;
This one shows SQL that is currently "ACTIVE":
select S.USERNAME, s.sid, s.osuser, t.sql_id, sql_text
from v$sqltext_with_newlines t,V$SESSION s
where t.address =s.sql_address
and t.hash_value = s.sql_hash_value
and s.status = 'ACTIVE'
and s.username <> 'SYSTEM'
order by s.sid,t.piece
/
This shows locks. Sometimes things are going slow, but only because a session is blocked waiting for a lock:
select
object_name,
object_type,
session_id,
type, -- type of system/user lock
lmode, -- lock mode in which session holds lock
request,
block,
ctime -- Time since current mode was granted
from
v$locked_object, all_objects, v$lock
where
v$locked_object.object_id = all_objects.object_id AND
v$lock.id1 = all_objects.object_id AND
v$lock.sid = v$locked_object.session_id
order by
session_id, ctime desc, object_name
/
This is a good one for finding long operations (e.g. full table scans). If it is because of lots of short operations, nothing will show up.
COLUMN percent FORMAT 999.99
SELECT sid, to_char(start_time,'hh24:mi:ss') stime,
message,( sofar/totalwork)* 100 percent
FROM v$session_longops
WHERE sofar/totalwork < 1
/
Keep in mind that there are processes on the database which may not currently be associated with a session.
If you're interested in all processes, you'll want to look at v$process (or gv$process on RAC).
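For example, a minimal sketch that lists every server process and, where one exists, the session attached to it (the outer join keeps the background processes that have no session):
SELECT p.spid, p.program, s.sid, s.username, s.status
FROM v$process p,
     v$session s
WHERE s.paddr(+) = p.addr
ORDER BY p.spid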
I have a report generated using Oracle Report Builder. The report has 82 SQL queries, and almost every query performs heavy calculations. It is a financial report with double-entry accounting. Sometimes, when I generate the report, the entries do not tally; other times they are fine. It looks like it is not being done in a "transactional way", because the data seems to keep changing while the report is being generated.
I'm curious how the report executes the SQL queries. Is it one by one, or the whole report at once? How can I debug or see which query is executing?
Try using the query below to check the active SQL running:
select S.USERNAME, s.sid, s.osuser, t.sql_id, sql_text
from v$sqltext_with_newlines t,V$SESSION s
where t.address =s.sql_address
and t.hash_value = s.sql_hash_value
and s.status = 'ACTIVE'
and s.username <> 'SYSTEM'
order by s.sid,t.piece
Oracle Reports will issue your 82 different queries, as needed, according to the relationships between them in your Oracle Reports data model.
By default, in Oracle, you only get read consistency within a single SQL statement -- and that is your problem.
For example, suppose you have query Q_ACCOUNTS, which lists your chart of accounts, and query Q_JOURNAL_ENTRIES, which summarizes the journal entries made to a given account. In your Oracle Reports data model, suppose Q_JOURNAL_ENTRIES is linked to Q_ACCOUNTS.
In this case, Oracle Reports will run Q_ACCOUNTS and then run Q_JOURNAL_ENTRIES once for each account. And here is the key point: there is no read consistency between the multiple executions of Q_JOURNAL_ENTRIES (nor is there consistency with Q_ACCOUNTS, for that matter).
So, if an accounting entry is made to debit account A and credit account B, and that entry is made after Q_JOURNAL_ENTRIES has run for A and before it has run for B, your report will only include the credit to B. And, so, your report will not add up.
I have never done it, but you might try to run a SET TRANSACTION READ ONLY SQL command in your "Before Report" trigger. This can give you transaction-level read consistency, which is what you need, but it comes with limitations (mainly, you cannot perform any database writes, as the name implies).
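A minimal sketch of the idea, outside of Oracle Reports (the table name journal_entries is hypothetical): every query issued between SET TRANSACTION READ ONLY and the next COMMIT sees the data as of a single point in time.
SET TRANSACTION READ ONLY;
-- Both queries below see the database as of the moment the transaction began,
-- even if other sessions commit changes in between.
SELECT account_id, SUM(amount) FROM journal_entries GROUP BY account_id;
SELECT COUNT(*) FROM journal_entries;
COMMIT; -- ends the read-only transaction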
Maybe you could try the "longops" view:
SELECT s.sid,
s.serial#,
sl.target, sl.OPNAME, sl.SQL_PLAN_OPERATION as OPERATION, sl.SQL_PLAN_OPTIONS as options,
ROUND(sl.elapsed_seconds/60) || ':' || MOD(sl.elapsed_seconds,60) elapsed,
ROUND(sl.time_remaining/60) || ':' || MOD(sl.time_remaining,60) remaining,
ROUND(sl.sofar/decode(sl.totalwork,0, decode(sl.sofar, 0,1), sl.totalwork )*100, 2) progress_pct, s.INST_ID , s.machine
FROM gv$session s,
v$session_longops sl
WHERE s.sid = sl.sid
AND s.serial# = sl.serial#(+)
AND sl.elapsed_seconds(+) <> 0
ORDER BY ROUND(sl.sofar/decode(sl.totalwork,0, decode(sl.sofar, 0,1), sl.totalwork )*100, 2)
I was in our Oracle DB and saw this in the messages.
select 1 from sys.obj$ where 1=0;
I'm curious as to what it does. Is it just a session being initiated, a check to see if there is a sign of life?
That query is automatically generated by Oracle SQL Developer, it's nothing nefarious.
I can't tell exactly what the query is used for. But when I looked for it on a few hundred of our databases I found about 20 rows for completely unrelated users and databases. The only thing they had in common was the MODULE was set to "SQL Developer".
select executions, parsing_schema_name, module, first_load_time
from gv$sql
where sql_text = 'select 1 from sys.obj$ where 1=0';
Further queries on GV$SQL and DBA_AUDIT_TRAIL show other boring data dictionary queries being run at the same time. Which leads me to believe it's one of a set of background queries run for some Oracle SQL Developer feature.
select executions, parsing_schema_name, first_load_time, gv$sql.*
from gv$sql
where parsing_schema_name = '<user from above>'
order by gv$sql.first_load_time desc;
Why am I getting this database error when I update a table?
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
Your table is already locked by some query. For example, you may have executed "select for update", not yet committed/rolled back, and then fired another query. Do a commit/rollback before executing your query.
from here ORA-00054: resource busy and acquire with NOWAIT specified
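A minimal way to reproduce it, using a hypothetical table my_table and two sessions:
-- Session 1: lock some rows and do not commit
SELECT * FROM my_table FOR UPDATE;

-- Session 2: any DDL on the same table now fails immediately
ALTER TABLE my_table ADD (new_col NUMBER);
-- ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

-- Session 1: committing (or rolling back) releases the lock and lets the DDL succeed
COMMIT;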
You can also look up the SQL, username, machine, and port information and get to the actual process which holds the connection:
SELECT O.OBJECT_NAME, S.SID, S.SERIAL#, P.SPID, S.PROGRAM,S.USERNAME,
S.MACHINE,S.PORT , S.LOGON_TIME,SQ.SQL_FULLTEXT
FROM V$LOCKED_OBJECT L, DBA_OBJECTS O, V$SESSION S,
V$PROCESS P, V$SQL SQ
WHERE L.OBJECT_ID = O.OBJECT_ID
AND L.SESSION_ID = S.SID AND S.PADDR = P.ADDR
AND S.SQL_ADDRESS = SQ.ADDRESS;
Kill the Oracle session holding the lock.
Use the query below to check the active session info:
SELECT
O.OBJECT_NAME,
S.SID,
S.SERIAL#,
P.SPID,
S.PROGRAM,
SQ.SQL_FULLTEXT,
S.LOGON_TIME
FROM
V$LOCKED_OBJECT L,
DBA_OBJECTS O,
V$SESSION S,
V$PROCESS P,
V$SQL SQ
WHERE
L.OBJECT_ID = O.OBJECT_ID
AND L.SESSION_ID = S.SID
AND S.PADDR = P.ADDR
AND S.SQL_ADDRESS = SQ.ADDRESS;
Then kill the session like this:
alter system kill session 'SID,SERIAL#';
(For example, alter system kill session '13,36543';)
Reference
http://abeytom.blogspot.com/2012/08/finding-and-fixing-ora-00054-resource.html
There is a very easy workaround for this problem.
If you run a 10046 trace on your session (google this; it is too much to explain here), you will see that before any DDL operation Oracle does something like the following:
LOCK TABLE 'TABLE_NAME' NOWAIT
So if another session has an open transaction, you get an error. So the fix is... drum roll please: issue your own lock before the DDL and leave out the NOWAIT.
Special note: if you are splitting/dropping partitions, Oracle just locks the partition, so you can lock just the partition or subpartition.
The following steps fix the problem (a concrete sketch follows the list).
LOCK TABLE table_name IN EXCLUSIVE MODE; -- You will 'wait' (developers call this hanging) until the session with the open transaction commits. This is a queue, so there may be several sessions ahead of you, but you will NOT error out.
Execute the DDL. Your DDL will then attempt its lock with NOWAIT; however, your session has already acquired the lock, so you are good.
DDL auto-commits. This frees the locks.
DML statements will 'wait' (or, as developers call it, 'hang') while the table is locked.
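A concrete sketch of the sequence, with a hypothetical table and partition name:
-- Queues behind any open transactions instead of erroring out
LOCK TABLE my_partitioned_table IN EXCLUSIVE MODE;
-- The DDL's internal NOWAIT lock now succeeds because this session already holds the lock
ALTER TABLE my_partitioned_table DROP PARTITION p_2012_01;
-- The DDL auto-commits, which releases the lock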
I use this in code that runs from a job to drop partitions. It works fine. It is in a database that is constantly inserting at a rate of several hundred inserts/second. No errors.
In case you are wondering, I am doing this in 11g, and I have done it in 10g as well in the past.
This error happens when the resource is busy. Check whether you have any referential constraints in the query, or whether the tables you mention in the query are themselves busy. They might be engaged with some other job, which will be listed in the following query results:
SELECT * FROM V$SESSION WHERE STATUS = 'ACTIVE'
Find the SID, then:
SELECT * FROM V$OPEN_CURSOR WHERE SID = --the id
In my case, I was quite sure it was one of my own sessions which was blocking. Therefore, it was safe to do the following:
I found the offending session with:
SELECT * FROM V$SESSION WHERE OSUSER='my_local_username';
The session was inactive, but it still held the lock somehow. Note that you may need to use some other WHERE condition in your case (e.g. try the USERNAME or MACHINE fields).
Then I killed the session using the SID and SERIAL# acquired above:
alter system kill session '<id>, <serial#>';
Edit by @thermz: if none of the previous open-session queries work, try this one. It can help you avoid syntax errors while killing sessions:
SELECT 'ALTER SYSTEM KILL SESSION '''||SID||','||SERIAL#||''' immediate;' FROM V$SESSION WHERE OSUSER='my_local_username_on_OS'
This happens when a session other than the one you are using to alter the table is holding a lock, likely because of DML (update/delete/insert). If you are developing a new system, it is likely that you or someone in your team issued the update statement, and you could kill the session without much consequence. Or you could commit from that session once you know who has it open.
If you have access to a SQL admin tool, use it to find the offending session and perhaps kill it.
You could use v$session and v$lock and others, but I suggest you google how to find that session and then how to kill it.
In a production system, it really depends. For Oracle 10g and older, you could execute:
LOCK TABLE mytable in exclusive mode;
alter table mytable modify mycolumn varchar2(5);
and, in a separate session, have the following ready in case it takes too long:
alter system kill session '....
It depends on what system you have; older systems are more likely not to commit every single time, which is a problem because there may be long-standing locks. Your lock would then prevent any new locks and wait for a lock that who knows when will be released. That is why you have the kill statement ready. Or you could look for PL/SQL scripts out there that do similar things automatically.
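As a rough sketch of what such a script might look like (assumptions: you have EXECUTE on DBMS_LOCK for the sleep, and the table/column names are the hypothetical ones from above):
DECLARE
  resource_busy EXCEPTION;
  PRAGMA EXCEPTION_INIT(resource_busy, -54); -- map ORA-00054 to a named exception
BEGIN
  FOR attempt IN 1 .. 10 LOOP
    BEGIN
      EXECUTE IMMEDIATE 'alter table mytable modify mycolumn varchar2(5)';
      EXIT; -- DDL succeeded, stop retrying
    EXCEPTION
      WHEN resource_busy THEN
        DBMS_LOCK.SLEEP(5); -- wait a few seconds and try again
    END;
  END LOOP;
END;
/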
In version 11g there is a new initialization parameter, DDL_LOCK_TIMEOUT, that sets a wait time. I think it does something similar to what I described. Mind you, the locking issues don't go away.
ALTER SYSTEM SET ddl_lock_timeout=20;
alter table mytable modify mycolumn varchar2(5);
Finally it may be best to wait until there are few users in the system to do this kind of maintenance.
select
c.owner,
c.object_name,
c.object_type,
b.sid,
b.serial#,
b.status,
b.osuser,
b.machine
from
v$locked_object a,
v$session b,
dba_objects c
where
b.sid = a.session_id
and
a.object_id = c.object_id;
ALTER SYSTEM KILL SESSION 'sid,serial#';
As mentioned in other answers, this error is caused by concurrent DML operations running in other sessions. This causes Oracle to fail to lock the table for DDL with the default NOWAIT option.
For those without admin permissions in the database or who cannot kill/interrupt the other sessions, you can also precede your DDL operation with:
alter session set DDL_LOCK_TIMEOUT = 30;
--Run your DDL command, e.g.: alter table, etc.
I was receiving this error repeatedly in a database with background jobs doing large insert/update operations, and altering this parameter in the session allowed the DDL to continue after a few seconds of waiting for the lock.
For further information, see the comment from rshdev on this answer, this entry on oracle-base or the official docs on DDL_LOCK_TIMEOUT.
Just check for the process holding the session and kill it; things go back to normal.
The SQL below will find your process:
SELECT s.inst_id,
s.sid,
s.serial#,
p.spid,
s.username,
s.program FROM gv$session s
JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id;
Then kill it
ALTER SYSTEM KILL SESSION 'sid,serial#'
OR
Some examples I found online seem to need the instance id as well:
alter system kill session '130,620,@1';
I had this error happen when I had two scripts running. I had:
A SQL*Plus session connected directly using a schema user account (account #1)
Another SQL*Plus session connected using a different schema user account (account #2), but connecting across a database link as the first account
I ran a table drop, then table creation as account #1.
I ran a table update on account #2's session. Did not commit changes.
Re-ran table drop/creation script as account #1. Got error on the drop table x command.
I solved it by running COMMIT; in the SQL*Plus session of account #2.
Your problem looks like you are mixing DML & DDL operations. See this URL which explains this issue:
http://www.orafaq.com/forum/t/54714/2/
I managed to hit this error when simply creating a table! There was obviously no contention problem on a table that didn't yet exist. The CREATE TABLE statement contained a CONSTRAINT fk_name FOREIGN KEY clause referencing a well-populated table. I had to:
Remove the FOREIGN KEY clause from the CREATE TABLE statement
Create an INDEX on the FK column
Create the FK (see the sketch below)
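A minimal sketch of those steps, with hypothetical table and column names (parent_tab is assumed to already exist with a primary key on id):
-- 1. Create the table without the FOREIGN KEY clause
CREATE TABLE child_tab (
  id        NUMBER PRIMARY KEY,
  parent_id NUMBER
);
-- 2. Index the FK column
CREATE INDEX child_tab_parent_idx ON child_tab (parent_id);
-- 3. Add the FK as a separate step
ALTER TABLE child_tab ADD CONSTRAINT fk_name
  FOREIGN KEY (parent_id) REFERENCES parent_tab (id);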
I solved this problem by closing one of my IDE tabs.
PL/SQL Developer
Version 10.0.5.1710
I also faced a similar issue. There is nothing the programmer has to do to resolve this error; I informed my Oracle DBA team, they killed the session, and it worked like a charm.
The solution given in Shashi's link is the best; there is no need to contact the DBA or anyone else:
Make a backup:
create table xxxx_backup as select * from xxxx;
Delete all rows:
delete from xxxx;
commit;
Insert your backup:
insert into xxxx (select * from xxxx_backup);
commit;
How can I find poor performing SQL queries in Oracle?
Oracle maintains statistics on the shared SQL area, with one row per SQL string (v$sqlarea).
But how can we identify which of them are performing badly?
I found this SQL statement to be a useful place to start (sorry I can't attribute this to the original author; I found it somewhere on the internet):
SELECT * FROM
(SELECT
sql_fulltext,
sql_id,
elapsed_time,
child_number,
disk_reads,
executions,
first_load_time,
last_load_time
FROM v$sql
ORDER BY elapsed_time DESC)
WHERE ROWNUM < 10
/
This finds the top SQL statements that are currently stored in the SQL cache ordered by elapsed time. Statements will disappear from the cache over time, so it might be no good trying to diagnose last night's batch job when you roll into work at midday.
You can also try ordering by disk_reads and executions. Executions is useful because some poor applications send the same SQL statement way too many times. This SQL assumes you use bind variables correctly.
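To illustrate the bind-variable point (a SQL*Plus sketch with a hypothetical orders table): with literals, every distinct value produces a separate cursor in v$sql, so executions never accumulates on one statement; with a bind variable the statement is shared.
-- Literal values: each of these is a distinct statement in v$sql
SELECT * FROM orders WHERE order_id = 101;
SELECT * FROM orders WHERE order_id = 102;

-- Bind variable: one shared statement whose executions count keeps climbing
VARIABLE oid NUMBER
EXEC :oid := 101
SELECT * FROM orders WHERE order_id = :oid;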
Then, you can take the sql_id and child_number of a statement and feed them into this baby:
SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', &child));
This shows the actual plan from the SQL cache and the full text of the SQL.
You could find disk intensive full table scans with something like this:
SELECT Disk_Reads DiskReads, Executions, SQL_ID, SQL_Text SQLText,
SQL_FullText SQLFullText
FROM
(
SELECT Disk_Reads, Executions, SQL_ID, LTRIM(SQL_Text) SQL_Text,
SQL_FullText, Operation, Options,
Row_Number() OVER
(Partition By sql_text ORDER BY Disk_Reads * Executions DESC)
KeepHighSQL
FROM
(
SELECT Avg(Disk_Reads) OVER (Partition By sql_text) Disk_Reads,
Max(Executions) OVER (Partition By sql_text) Executions,
t.SQL_ID, sql_text, sql_fulltext, p.operation,p.options
FROM v$sql t, v$sql_plan p
WHERE t.hash_value=p.hash_value AND p.operation='TABLE ACCESS'
AND p.options='FULL' AND p.object_owner NOT IN ('SYS','SYSTEM')
AND t.Executions > 1
)
ORDER BY DISK_READS * EXECUTIONS DESC
)
WHERE KeepHighSQL = 1
AND rownum <=5;
You could take the average buffer gets per execution during a period of activity of the instance:
SELECT username,
buffer_gets,
disk_reads,
executions,
buffer_get_per_exec,
parse_calls,
sorts,
rows_processed,
hit_ratio,
module,
sql_text
-- elapsed_time, cpu_time, user_io_wait_time, ,
FROM (SELECT sql_text,
b.username,
a.disk_reads,
a.buffer_gets,
trunc(a.buffer_gets / a.executions) buffer_get_per_exec,
a.parse_calls,
a.sorts,
a.executions,
a.rows_processed,
100 - ROUND (100 * a.disk_reads / a.buffer_gets, 2) hit_ratio,
module
-- cpu_time, elapsed_time, user_io_wait_time
FROM v$sqlarea a, dba_users b
WHERE a.parsing_user_id = b.user_id
AND b.username NOT IN ('SYS', 'SYSTEM', 'RMAN','SYSMAN')
AND a.buffer_gets > 10000
ORDER BY buffer_get_per_exec DESC)
WHERE ROWNUM <= 20
It depends which version of Oracle you have. For 9i and below, Statspack is what you are after; for 10g and above, you want AWR. Both of these tools will give you the top SQL statements and lots of other information.
Here is a more complete query that I got from AskTom (Oracle). I hope it helps you:
select *
from v$sql
where buffer_gets > 1000000
or disk_reads > 100000
or executions > 50000
The following query returns SQL statements that perform large numbers of disk reads (also includes the offending user and the number of times the query has been run):
SELECT t2.username, t1.disk_reads, t1.executions,
t1.disk_reads / DECODE(t1.executions, 0, 1, t1.executions) as exec_ratio,
t1.command_type, t1.sql_text
FROM v$sqlarea t1, dba_users t2
WHERE t1.parsing_user_id = t2.user_id
AND t1.disk_reads > 100000
ORDER BY t1.disk_reads DESC
Run the query as SYS and adjust the number of disk reads depending on what you deem to be excessive (100,000 works for me).
I have used this query very recently to track down users who refuse to take advantage of Explain Plans before executing their statements.
I found this query in an old Oracle SQL tuning book (which I unfortunately no longer have), so apologies, but no attribution.
There are a number of possible ways to do this, but have a Google for tkprof.
There's no GUI; it's entirely command line and possibly a touch intimidating for Oracle beginners, but it's very powerful.
This link looks like a good start:
http://www.oracleutilities.com/OSUtil/tkprof.html
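A rough sketch of the workflow (the trace file name and location vary by database, so treat them as placeholders):
-- In the session you want to analyse
ALTER SESSION SET sql_trace = TRUE;
-- ... run the statements you want to profile ...
ALTER SESSION SET sql_trace = FALSE;

-- Then, on the database server, format the raw trace file with tkprof,
-- sorting statements by elapsed execution time:
--   tkprof <tracefile>.trc report.txt sort=exeela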
While searching I found the following query, which does the job with one assumption (query execution time > 6 seconds):
SELECT username, sql_text, sofar, totalwork, units
FROM v$sql,v$session_longops
WHERE sql_address = address AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;
I think the above query will list the details for the current user.
Comments are welcome!