How to check Oracle database for long running queries

My application, which uses an Oracle database, is going slow or appears to have stopped completely.
How can I find out which queries are most expensive, so I can investigate further?

This one shows SQL that is currently "ACTIVE":
select S.USERNAME, s.sid, s.osuser, t.sql_id, sql_text
from v$sqltext_with_newlines t,V$SESSION s
where t.address =s.sql_address
and t.hash_value = s.sql_hash_value
and s.status = 'ACTIVE'
and s.username <> 'SYSTEM'
order by s.sid,t.piece
/
This one shows locks. Sometimes things go slow because a session is blocked waiting for a lock:
select
object_name,
object_type,
session_id,
type, -- Type of system/user lock
lmode, -- lock mode in which session holds lock
request,
block,
ctime -- Time since current mode was granted
from
v$locked_object, all_objects, v$lock
where
v$locked_object.object_id = all_objects.object_id AND
v$lock.id1 = all_objects.object_id AND
v$lock.sid = v$locked_object.session_id
order by
session_id, ctime desc, object_name
/
This is a good one for finding long operations (e.g. full table scans). If the slowness comes from lots of short operations instead, nothing will show up here.
COLUMN percent FORMAT 999.99
SELECT sid, to_char(start_time,'hh24:mi:ss') stime,
message, (sofar/totalwork)*100 percent
FROM v$session_longops
WHERE totalwork != 0
AND sofar/totalwork < 1
/

Try this: it will give you queries that have been running for more than 60 seconds. Note that it prints multiple lines per running query if the SQL has multiple lines; look at sid,serial# to see which lines belong together.
select s.username,s.sid,s.serial#,s.last_call_et/60 mins_running,q.sql_text from v$session s
join v$sqltext_with_newlines q
on s.sql_address = q.address
where status='ACTIVE'
and type <>'BACKGROUND'
and last_call_et> 60
order by sid,serial#,q.piece

v$session_longops
If you look for sofar != totalwork you'll see ones that haven't completed, but the entries aren't removed when the operation completes so you can see a lot of history there too.
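A minimal sketch of that filter, using only standard V$SESSION_LONGOPS columns (TIME_REMAINING is Oracle's own estimate, in seconds):

```sql
-- Show only operations still in flight, with Oracle's own progress estimate.
SELECT sid, serial#, opname, target,
       ROUND(sofar / totalwork * 100, 2) AS pct_done,
       ROUND(time_remaining / 60, 1)     AS mins_left
  FROM v$session_longops
 WHERE totalwork != 0
   AND sofar != totalwork
 ORDER BY time_remaining DESC;
```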

Step 1: Execute the query:
column username format 'a10'
column osuser format 'a10'
column module format 'a16'
column program_name format 'a20'
column program format 'a20'
column machine format 'a20'
column action format 'a20'
column sid format '9999'
column serial# format '99999'
column spid format '99999'
set linesize 200
set pagesize 30
select
a.sid,a.serial#,a.username,a.osuser,c.start_time,
b.spid,a.status,a.machine,
a.action,a.module,a.program
from
v$session a, v$process b, v$transaction c,
v$sqlarea s
Where
a.paddr = b.addr
and a.saddr = c.ses_addr
and a.sql_address = s.address (+)
and to_date(c.start_time,'mm/dd/yy hh24:mi:ss') <= sysdate - (15/1440) -- running for 15 minutes
order by c.start_time
/
Step 2: desc v$session
Step 3: select sid, serial#, SQL_ADDRESS, status, PREV_SQL_ADDR from v$session where sid = 'xxxx'; -- enter the SID value
Step 4: select sql_text from v$sqltext where address='XXXXXXXX';
Step 5: select piece, sql_text from v$sqltext where address='XXXXXX' order by piece;

You can use the v$sql_monitor view to find queries that are running longer than 5 seconds (by default it captures statements that have consumed at least 5 seconds of CPU or I/O time). Note that this view is only populated on Enterprise Edition with the Tuning Pack licensed. For example, this query identifies slow-running queries from my TEST_APP service:
select to_char(sql_exec_start, 'dd-Mon hh24:mi'), (elapsed_time / 1000000) run_time,
cpu_time, sql_id, sql_text
from v$sql_monitor
where service_name = 'TEST_APP'
order by 1 desc;
Note that elapsed_time is in microseconds, so divide by 1,000,000 to get seconds.

You can generate an AWR (automatic workload repository) report from the database.
Run from the SQL*Plus command line:
SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql
Read the documentation on how to generate and interpret an AWR report; it gives a complete view of database performance and resource issues. Once you are familiar with the report, it is straightforward to find the top SQL consuming resources.
Also, in the 12c EM Express UI you can generate an AWR report.
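If you want the report to bracket a specific workload, you can take a manual snapshot immediately before and after running it (DBMS_WORKLOAD_REPOSITORY is the documented package behind AWR), then pick those two snapshot IDs when awrrpt.sql prompts for a range:

```sql
-- Take a manual AWR snapshot; run once before and once after the workload.
BEGIN
  dbms_workload_repository.create_snapshot;
END;
/
```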

You can check long-running query details, such as percent completed and remaining time, with the query below:
SELECT SID, SERIAL#, OPNAME, CONTEXT, SOFAR,
TOTALWORK,ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE"
FROM V$SESSION_LONGOPS
WHERE OPNAME NOT LIKE '%aggregate%'
AND TOTALWORK != 0
AND SOFAR <> TOTALWORK;
For the complete list of troubleshooting steps, you can check here: Troubleshooting long-running sessions

This ranks the statements still cached in the shared pool by total elapsed time:
select sq.PARSING_SCHEMA_NAME, sq.LAST_LOAD_TIME, sq.ELAPSED_TIME, sq.ROWS_PROCESSED, ltrim(sq.sql_text), sq.SQL_FULLTEXT
from v$sql sq
order by sq.ELAPSED_TIME desc, sq.LAST_LOAD_TIME desc;

Related

How to resolve temp table space issue in Oracle

I need help from DBAs here.
I have a query that fetches around 1800 records from the DB.
However, Oracle's temp tablespace is getting filled up, which makes Oracle respond very slowly.
I have identified the query that is causing the issue, and it is something like this:
SELECT * FROM A a, B b WHERE a.id = b.fieldId AND b.col1 = :1 AND b.col2 = :2 ORDER BY TO_NUMBER(b.col3) ASC
This query returns around 1800 records, and DBA segments show that 44 GB out of 50 GB is occupied.
I am not sure what the solution could be.
I am using Oracle 12.1.
Please look into this and suggest whether I have to rewrite the query.
Thanks in advance.
It is hard to tell how resource-intensive a query is without at least checking its execution plan.
It might not be your query at all that ate all the TEMP space. Here is how to get the top 20 sessions with the highest TEMP usage:
select round(u.blocks*8192/1024/1024,2) "TEMP usage, Mb", -- assumes an 8 KB block size
s.sid, s.osuser, s.machine, s.module, s.action, s.status, s.event, s.LAST_CALL_ET, s.WAIT_TIME, s.sql_id, s.sql_child_number
from v$session s,
v$sort_usage u
where s.saddr = u.session_addr
order by u.blocks desc
fetch first 20 rows with ties
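Once that query points you at a suspect session, you can pull the statement text from the shared pool by its sql_id (substitute the sql_id reported above):

```sql
-- &sql_id is a SQL*Plus substitution variable; paste in the suspect sql_id.
SELECT sql_id, child_number, sql_text
  FROM v$sql
 WHERE sql_id = '&sql_id';
```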

Oracle SELECT * FROM LARGE_TABLE - takes minutes to respond

So I have a simple table with 5 or so columns, one of which is a clob containing some JSON data.
I am running
SELECT * FROM BIG_TABLE
SELECT * FROM BIG_TABLE WHERE ROWNUM < 2
SELECT * FROM BIG_TABLE WHERE ROWNUM = 1
SELECT * FROM BIG_TABLE WHERE ID=x
I expect that any fractionally intelligent relational database would return the data immediately. We are not imposing order by/group by clauses, so why not return the data as and when you find it?
Of all the SELECT statements above, only query 4 returned in a sub-second manner. This is unexpected for queries 1-3, which take between 1 and 10 minutes before showing any rows in SQL Developer. SQL Developer uses the standard SQL array fetch size of 50 (a JDBC fetch size of 50 rows), so at a minimum it is taking 1-10 minutes to return 50 rows from a simple table with no joins, on a super high-performance RAC cluster backed by a fancy 4-tiered EMC disk subsystem.
Explain plans show a table scan. Fine, but why should I wait 1-10 minutes for the results when ROWNUM is in the WHERE clause?
What is going on here?
OK - I found the issue. ROWNUM does not operate like I thought it did and in the code above it never stops the full table scan.
This is because:
ROWNUM is assigned during the predicate operation (WHERE clause evaluation) and incremented afterwards, i.e. a row makes it into the result set and then gets its ROWNUM assigned.
In order to filter by ROWNUM you need it to already exist, something like ...
SELECT * FROM (SELECT * FROM BIG_TABLE) WHERE ROWNUM <= 5
In effect this means there is no way to filter out the top 5 rows from a table without having first selected the entire table, if no other filter criteria are involved.
I solved my problem like this...
SELECT * FROM (SELECT * FROM BIG_TABLE WHERE
DATE_COL BETWEEN :Date1 AND :Date2) WHERE ROWNUM < :x;

Stored procedure hangs occasionally

I have a stored procedure in Oracle 11g that hangs from time to time. When this happens I can't recompile it either, and the only option I have is to kill the SQL Developer process. I agree that the procedure scans tons of records across different tables, views and materialized views, but when there's no such problem it only takes 1-2 seconds to return the result set. I've tried killing all the sessions and even restarting the database, but nothing seems to help, and then it just fixes itself. I'm posting the procedure content in case you need to see it:
create or replace
PROCEDURE SP_STAJ_FOR_AGAPUS(
V_SSN IN NUMBER,
V_WEYEARNEW OUT NUMBER,
V_WEMONTHNEW OUT NUMBER,
V_WEDAYNEW OUT NUMBER,
V_LS_YEAR OUT NUMBER,
V_LS_MONTH OUT NUMBER)
AS
BEGIN
SELECT NVL(TRUNC(MDC.DAY_COUNT / 360),0) WEYEARNEW, NVL(TRUNC(MOD(MDC.DAY_COUNT,360) / 30),0)
WEMONTHNEW, NVL(MOD(MOD( MDC.DAY_COUNT,360),30),0) WEDAYNEW,NVL(LS.LS_YEAR,0)LS_YEAR,NVL(
LS.LS_MONTH,0)LS_MONTH
INTO V_WEYEARNEW,V_WEMONTHNEW,V_WEDAYNEW,V_LS_YEAR,V_LS_MONTH
FROM SSPF_CENTRE.PERSONS PER
LEFT JOIN
( SELECT SSN, SUM(DAY_COUNT) DAY_COUNT FROM
( SELECT SSN, YEAR, AG.CHECK_PERIOD_MDSS(SSN,YEAR) DAY_COUNT FROM
( SELECT SSN, YEAR FROM SSPF_CENTRE.PERSON_ACCOUNTS GROUP BY SSN,YEAR
UNION ALL
SELECT SSN, SPECIAL_YEAR YEAR
FROM SSPF_CENTRE.person_accounts_06
GROUP BY SSN,SPECIAL_YEAR
UNION ALL SELECT
P.COMMON_SSN, PA.YEAR FROM SSPF_CENTRE.PERSON_ACCOUNTS PA, SSPF_CENTRE.PERSONS P
WHERE
--COMMON_SSN = V_SSN AND
PA.SSN = P.SSN(+) AND P.COMMON_SSN <> P.SSN GROUP BY P.COMMON_SSN,PA.YEAR
) GROUP BY SSN,YEAR
) GROUP BY SSN
) MDC ON PER.SSN=MDC.SSN
LEFT JOIN
( SELECT SSN, AG.CALCULATE_YEAR(LS_DAYS) LS_YEAR, AG.CALCULATE_MONTH( LS_DAYS) LS_MONTH FROM
( SELECT SSN, GET_DAYS(SSN) LS_DAYS FROM MAT_SERVICE_NEW GROUP BY SSN
)
) LS ON PER.SSN=LS.SSN
WHERE PER.SSN=V_SSN;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
V_WEYEARNEW:=0;
V_WEMONTHNEW:=0;
V_WEDAYNEW:=0;
V_LS_YEAR:=0;
V_LS_MONTH:=0;
END;
END SP_STAJ_FOR_AGAPUS;
This sort of thing can be hard to diagnose even when we're sitting at the server with access to all the tools. Remotely, it is virtually impossible.
But here are a couple of observations:
In Oracle, writers don't block readers, so this is not a locking problem (except, see next point). But perhaps some other transaction is running simultaneously and sucking up all the system resources? You'd need access to V$SESSION at the very least to tell, and preferably OEM.
You appear to have a couple of functions in your query (AG.CHECK_PERIOD_MDSS, AG.CALCULATE_YEAR, GET_DAYS, etc.). They shouldn't be writing database state, but it would be worthwhile looking at what they actually do, in case they depend on particular resources.
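One way to see what those helper functions actually do is to pull their source from the data dictionary; this sketch assumes you can read ALL_SOURCE for the owning schema (AG may be a package, in which case its members appear under the package body):

```sql
-- List the source of the helper functions referenced by the query.
SELECT name, type, line, text
  FROM all_source
 WHERE name IN ('AG', 'GET_DAYS')
 ORDER BY name, type, line;
```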

How can you get the RAM usage of processes in an oracle11g database?

I want to measure the RAM usage of a SQL statement (e.g. a simple CREATE or INSERT statement) in an Oracle 11g database environment.
I tried to get it with dbms_space, but it seems that only reports disk space.
I also found this site:
http://www.dba-oracle.com/m_memory_usage_percent.htm
But the statement
select
*
from
v$sql
where sql_text like {my table}
doesn't return the create statement.
See comment above:
select operation,
options,
object_name name,
trunc(bytes/1024/1024) "input(MB)",
trunc(last_memory_used/1024) last_mem,
trunc(estimated_optimal_size/1024) opt_mem,
trunc(estimated_onepass_size/1024) onepass_mem,
decode(optimal_executions, null, null,
optimal_executions||'/'||onepass_executions||'/'||
multipasses_executions) "O/1/M"
from v$sql_plan p
, v$sql_workarea w
where p.address=w.address(+)
and p.hash_value=w.hash_value(+)
and p.id=w.operation_id(+)
and p.address= ( select address
from v$sql
where sql_text like '%my_table%' )

How do I get the number of inserts/updates occurring in an Oracle database?

How do I get the total number of inserts/updates that have occurred in an Oracle database over a period of time?
Assuming that you've configured AWR to retain data for all SQL statements (by default it retains only the top 30 statements by CPU, elapsed time, etc. when STATISTICS_LEVEL is 'TYPICAL', and the top 100 when it is 'ALL') via something like
BEGIN
dbms_workload_repository.modify_snapshot_settings (
topnsql => 'MAXIMUM'
);
END;
and assuming that SQL statements don't age out of the cache before a snapshot captures them, you can use the AWR tables for some of this.
You can gather the number of times that an INSERT statement was executed and the number of times that an UPDATE statement was executed
SELECT sum( stat.executions_delta ) insert_executions
FROM dba_hist_sqlstat stat
JOIN dba_hist_sqltext txt ON (stat.sql_id = txt.sql_id )
JOIN dba_hist_snapshot snap ON (stat.snap_id = snap.snap_id)
WHERE snap.begin_interval_time BETWEEN <<start time>> AND <<end time>>
AND txt.command_type = 2;
SELECT sum( stat.executions_delta ) update_executions
FROM dba_hist_sqlstat stat
JOIN dba_hist_sqltext txt ON (stat.sql_id = txt.sql_id )
JOIN dba_hist_snapshot snap ON (stat.snap_id = snap.snap_id)
WHERE snap.begin_interval_time BETWEEN <<start time>> AND <<end time>>
AND txt.command_type = 6;
Note that these queries include both statements that your application issues and statements that Oracle issues in the background. You could add additional criteria if you want to filter out certain SQL statements.
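For example, one way to restrict the counts to your own application's statements is to filter on the parsing schema (MY_APP_USER below is a placeholder for your application's schema name):

```sql
-- INSERT executions issued by one application schema only.
SELECT sum( stat.executions_delta ) insert_executions
FROM dba_hist_sqlstat stat
JOIN dba_hist_sqltext txt ON (stat.sql_id = txt.sql_id )
JOIN dba_hist_snapshot snap ON (stat.snap_id = snap.snap_id)
WHERE snap.begin_interval_time BETWEEN <<start time>> AND <<end time>>
AND txt.command_type = 2
AND stat.parsing_schema_name = 'MY_APP_USER';
```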
Similarly, you could get the total number of distinct INSERT and UPDATE statements
SELECT count( distinct stat.sql_id ) distinct_insert_stmts
FROM dba_hist_sqlstat stat
JOIN dba_hist_sqltext txt ON (stat.sql_id = txt.sql_id )
JOIN dba_hist_snapshot snap ON (stat.snap_id = snap.snap_id)
WHERE snap.begin_interval_time BETWEEN <<start time>> AND <<end time>>
AND txt.command_type = 2;
SELECT count( distinct stat.sql_id ) distinct_update_stmts
FROM dba_hist_sqlstat stat
JOIN dba_hist_sqltext txt ON (stat.sql_id = txt.sql_id )
JOIN dba_hist_snapshot snap ON (stat.snap_id = snap.snap_id)
WHERE snap.begin_interval_time BETWEEN <<start time>> AND <<end time>>
AND txt.command_type = 6;
Oracle does not, however, track the number of rows that were inserted or updated in a given interval. So you won't be able to get that information from AWR. The closest you could get would be to try to leverage the monitoring Oracle does to determine if statistics are stale. Assuming MONITORING is enabled for each table (it is by default in 11g and I believe it is by default in 10g), i.e.
ALTER TABLE table_name
MONITORING;
Oracle will periodically flush the approximate number of rows that are inserted, updated, and deleted for each table to the SYS.DBA_TAB_MODIFICATIONS table. But this will only show the activity since statistics were gathered on a table, not the activity in a particular interval. You could, however, try to write a process that periodically captured this data to your own table and report off that.
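A minimal sketch of such a capture process (the table name is hypothetical):

```sql
-- One-time setup: an empty copy of DBA_TAB_MODIFICATIONS plus a timestamp.
CREATE TABLE tab_mod_history AS
SELECT SYSDATE AS captured_at, m.*
FROM sys.dba_tab_modifications m
WHERE 1 = 0;

-- Run this periodically (e.g. from a DBMS_SCHEDULER job) and
-- diff consecutive captures to get per-interval activity.
INSERT INTO tab_mod_history
SELECT SYSDATE, m.* FROM sys.dba_tab_modifications m;
COMMIT;
```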
If you instruct Oracle to flush the monitoring information from memory to disk (otherwise there is a lag of up to several hours)
BEGIN
dbms_stats.flush_database_monitoring_info;
END;
you can get an approximate count of the number of rows that have changed in each table since statistics were last gathered
SELECT table_owner,
table_name,
inserts,
updates,
deletes
FROM sys.dba_tab_modifications
