How to resolve temp table space issue in Oracle

I need help from DBAs here.
I have a query that fetches around 1800 records from the DB.
However, Oracle's temp tablespace is getting filled up, which makes Oracle respond very slowly.
I have identified the query that is causing the issue, and it is something like this:
SELECT * FROM A a, B b WHERE a.id = b.fieldId AND b.col1 = :1 AND b.col2 = :2 ORDER BY TO_NUMBER(b.col3) ASC
This query returns around 1800 records, and DBA segments show that 44 GB out of the 50 GB of temp space is occupied.
I am not sure what the solution for this could be.
I am using Oracle 12.1.
Please look into this and suggest if I have to rewrite the query.
Thanks in Advance.

It is hard to tell how resource-intensive a query is without at least checking its query plan.
It might not be your query at all that "ate" all the TEMP space. Here is how to get the top 20 sessions with the highest TEMP usage:
select round(u.blocks*8192/1024/1024,2) "TEMP usage, Mb", -- 8192 = block size; adjust to your db_block_size
       s.sid, s.osuser, s.machine, s.module, s.action, s.status,
       s.event, s.last_call_et, s.wait_time, s.sql_id, s.sql_child_number
  from v$session s,
       v$sort_usage u
 where s.saddr = u.session_addr
 order by u.blocks desc
 fetch first 20 rows with ties
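Once you have a suspect sql_id from that list, you can pull its text and plan. A minimal follow-up sketch (:sql_id is a placeholder for the value you found):

select sql_fulltext
  from v$sql
 where sql_id = :sql_id
   and rownum = 1; -- v$sql has one row per child cursor; any child's text will do

select * from table(dbms_xplan.display_cursor(:sql_id, null)); -- null child number shows all children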

Related

SQL Server performance using ROWCOUNT

I use SET ROWCOUNT 27900 and then select two columns:
Select
    emp.employeeid,
    empd.employeedetailid
From employee emp (NOLOCK)
Join employeedetail empd (NOLOCK) on emp.employeeid = empd.employeeid
This query executes in 3 seconds.
If I use SET ROWCOUNT 27950, then the same query takes 20 seconds to execute.
I am not a SQL DBA; why is there a difference of 17 seconds for just 50 more rows? Is this related to page size or indexes?
Can anyone help me fine-tune the query?
Have you tried doing this using TOP instead of SET ROWCOUNT, and adding an ORDER BY?
SELECT TOP 27900 emp.employeeid, ...
ORDER BY ...;
This will give the optimizer a much better chance at optimizing. In simplest terms, SET ROWCOUNT is applied after the entire query has been processed, and only affects the rows that are sent back to the caller...
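Spelled out against the question's tables, a complete version might look like this (a sketch; the ORDER BY column is an assumption, so use whatever ordering your application actually needs):

SELECT TOP 27900
    emp.employeeid,
    empd.employeedetailid
FROM employee emp
JOIN employeedetail empd ON emp.employeeid = empd.employeeid
ORDER BY emp.employeeid;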

How do I get the number of inserts/updates occurring in an Oracle database?

How do I get the total number of inserts/updates that have occurred in an Oracle database over a period of time?
Assuming that you've configured AWR to retain data for all SQL statements (the default is to only retain the top 30 by CPU, elapsed time, etc. if the STATISTICS_LEVEL is 'TYPICAL' and the top 100 if the STATISTICS_LEVEL is 'ALL') via something like
BEGIN
  dbms_workload_repository.modify_snapshot_settings (
    topnsql => 'MAXIMUM'
  );
END;
/
and assuming that SQL statements don't age out of the cache before a snapshot captures them, you can use the AWR tables for some of this.
You can gather the number of times that an INSERT statement was executed and the number of times that an UPDATE statement was executed:
SELECT sum( stat.executions_delta ) insert_executions
FROM dba_hist_sqlstat stat
JOIN dba_hist_sqltext txt ON (stat.sql_id = txt.sql_id )
JOIN dba_hist_snapshot snap ON (stat.snap_id = snap.snap_id)
WHERE snap.begin_interval_time BETWEEN <<start time>> AND <<end time>>
AND txt.command_type = 2 /* INSERT */;
SELECT sum( stat.executions_delta ) update_executions
FROM dba_hist_sqlstat stat
JOIN dba_hist_sqltext txt ON (stat.sql_id = txt.sql_id )
JOIN dba_hist_snapshot snap ON (stat.snap_id = snap.snap_id)
WHERE snap.begin_interval_time BETWEEN <<start time>> AND <<end time>>
AND txt.command_type = 6 /* UPDATE */;
Note that these queries include both statements that your application issues and statements that Oracle issues in the background. You could add additional criteria if you want to filter out certain SQL statements.
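For example, to count only statements parsed by your application's schema, you could add a predicate on parsing_schema_name ('MYAPP' is a placeholder):

SELECT sum( stat.executions_delta ) insert_executions
FROM dba_hist_sqlstat stat
JOIN dba_hist_sqltext txt ON (stat.sql_id = txt.sql_id )
JOIN dba_hist_snapshot snap ON (stat.snap_id = snap.snap_id)
WHERE snap.begin_interval_time BETWEEN <<start time>> AND <<end time>>
AND txt.command_type = 2 /* INSERT */
AND stat.parsing_schema_name = 'MYAPP';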
Similarly, you could get the total number of distinct INSERT and UPDATE statements
SELECT count( distinct stat.sql_id ) distinct_insert_stmts
FROM dba_hist_sqlstat stat
JOIN dba_hist_sqltext txt ON (stat.sql_id = txt.sql_id )
JOIN dba_hist_snapshot snap ON (stat.snap_id = snap.snap_id)
WHERE snap.begin_interval_time BETWEEN <<start time>> AND <<end time>>
AND txt.command_type = 2;
SELECT count( distinct stat.sql_id ) distinct_update_stmts
FROM dba_hist_sqlstat stat
JOIN dba_hist_sqltext txt ON (stat.sql_id = txt.sql_id )
JOIN dba_hist_snapshot snap ON (stat.snap_id = snap.snap_id)
WHERE snap.begin_interval_time BETWEEN <<start time>> AND <<end time>>
AND txt.command_type = 6;
Oracle does not, however, track the number of rows that were inserted or updated in a given interval. So you won't be able to get that information from AWR. The closest you could get would be to try to leverage the monitoring Oracle does to determine if statistics are stale. Assuming MONITORING is enabled for each table (it is by default in 11g and I believe it is by default in 10g), i.e.
ALTER TABLE table_name
MONITORING;
Oracle will periodically flush the approximate number of rows that are inserted, updated, and deleted for each table to the SYS.DBA_TAB_MODIFICATIONS view. But this will only show the activity since statistics were last gathered on a table, not the activity in a particular interval. You could, however, write a process that periodically captures this data into your own table and report off that (a sketch follows below).
If you instruct Oracle to flush the monitoring information from memory to disk (otherwise there is a lag of up to several hours)
BEGIN
  dbms_stats.flush_database_monitoring_info;
END;
/
you can get an approximate count of the number of rows that have changed in each table since statistics were last gathered
SELECT table_owner,
table_name,
inserts,
updates,
deletes
FROM sys.dba_tab_modifications
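A minimal sketch of that capture process, with a hypothetical mod_history table and the scheduling left to a DBMS_SCHEDULER job:

-- one-time setup: copy the view's structure plus a capture timestamp
CREATE TABLE mod_history AS
SELECT SYSDATE AS captured_at, m.*
FROM sys.dba_tab_modifications m
WHERE 1 = 0;

-- run periodically, e.g. from a DBMS_SCHEDULER job
BEGIN
  dbms_stats.flush_database_monitoring_info;
  INSERT INTO mod_history
  SELECT SYSDATE, m.* FROM sys.dba_tab_modifications m;
  COMMIT;
END;
/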

How to optimize a select from several tables with millions of rows

I have the following tables (Oracle 10g):
catalog (
id NUMBER PRIMARY KEY,
name VARCHAR2(255),
owner NUMBER,
root NUMBER REFERENCES catalog(id)
...
)
university (
id NUMBER PRIMARY KEY,
...
)
securitygroup (
id NUMBER PRIMARY KEY
...
)
catalog_securitygroup (
catalog REFERENCES catalog(id),
securitygroup REFERENCES securitygroup(id)
)
catalog_university (
catalog REFERENCES catalog(id),
university REFERENCES university(id)
)
Catalog: 500 000 rows, catalog_university: 500 000, catalog_securitygroup: 1 500 000.
I need to select any 50 rows from catalog with a specified root, ordered by name, for the current university and current securitygroup. Here is the query:
SELECT ccc.* FROM (
SELECT cc.*, ROWNUM AS n FROM (
SELECT c.id, c.name, c.owner
FROM catalog c, catalog_securitygroup cs, catalog_university cu
WHERE c.root = 100
AND cs.catalog = c.id
AND cs.securitygroup = 200
AND cu.catalog = c.id
AND cu.university = 300
ORDER BY name
) cc
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
Here 100 is some catalog, 200 some securitygroup, and 300 some university. This query returns 50 rows out of ~170,000 in 3 minutes.
But the next query returns those rows in 2 seconds:
SELECT ccc.* FROM (
SELECT cc.*, ROWNUM AS n FROM (
SELECT c.id, c.name, c.owner
FROM catalog c
WHERE c.root = 100
ORDER BY name
) cc
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
I built the following indexes: (catalog.id, catalog.name, catalog.owner), (catalog_securitygroup.catalog, catalog_securitygroup.index), (catalog_university.catalog, catalog_university.university).
Plan for the first query (using PL/SQL Developer):
http://habreffect.ru/66c/f25faa5f8/plan2.jpg
Plan for the second query:
http://habreffect.ru/f91/86e780cc7/plan1.jpg
What are the ways to optimize the query I have?
The indexes that can be useful and should be considered deal with
WHERE c.root = 100
AND cs.catalog = c.id
AND cs.securitygroup = 200
AND cu.catalog = c.id
AND cu.university = 300
So the following fields can be interesting for indexes
c: id, root
cs: catalog, securitygroup
cu: catalog, university
So, try creating
(catalog_securitygroup.catalog, catalog_securitygroup.securitygroup)
and
(catalog_university.catalog, catalog_university.university)
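In DDL form (the index names are made up):

CREATE INDEX cs_cat_sg_idx ON catalog_securitygroup (catalog, securitygroup);
CREATE INDEX cu_cat_univ_idx ON catalog_university (catalog, university);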
EDIT:
I missed the ORDER BY - these fields should also be considered, so
(catalog.name, catalog.id)
might be beneficial (or some other composite index that could be used for sorting and the conditions - possibly (catalog.root, catalog.name, catalog.id))
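For example (again with a made-up name):

CREATE INDEX cat_root_name_idx ON catalog (root, name, id);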
EDIT2
Although another answer has been accepted, I'll provide some more food for thought.
I have created some test data and run some benchmarks.
The test cases are minimal in terms of record width (in catalog_securitygroup and catalog_university the primary keys are (catalog, securitygroup) and (catalog, university)). Here is the number of records per table:
test=# SELECT (SELECT COUNT(*) FROM catalog),
              (SELECT COUNT(*) FROM catalog_securitygroup),
              (SELECT COUNT(*) FROM catalog_university);
 ?column? | ?column? | ?column?
----------+----------+----------
   500000 |  1497501 |   500000
(1 row)
The database is PostgreSQL 8.4 (a default Ubuntu install) on an i5 with 4 GB of RAM.
First, I rewrote the query to:
SELECT c.id, c.name, c.owner
FROM catalog c, catalog_securitygroup cs, catalog_university cu
WHERE c.root < 50
AND cs.catalog = c.id
AND cu.catalog = c.id
AND cs.securitygroup < 200
AND cu.university < 200
ORDER BY c.name
LIMIT 50 OFFSET 100
Note: the conditions are turned into less-than comparisons to maintain a comparable number of intermediate rows (the above query would return 198,801 rows without the LIMIT clause).
If run as above, without any extra indexes (save for PKs and foreign keys), it runs in 556 ms on a cold database (this is actually an indication that I oversimplified the sample data somehow; I would be happier if I had 2-4 s here without resorting to less-than operators).
This brings me to my point: any straight query that only joins and filters (a certain number of tables) and returns only a certain number of records should run under 1 s on any decent database, without needing cursors or denormalized data (one of these days I'll have to write a post on that).
Furthermore, if a query returns only 50 rows and does simple equality joins with restrictive equality conditions, it should run much faster still.
Now let's see what happens if I add some indexes. The biggest potential in queries like this is usually the sort order, so let me try that:
CREATE INDEX test1 ON catalog (name, id);
This brings the execution time of the query down to 22 ms on a cold database.
And that's the point: if you are trying to get only a page of data, you should only get a page of data, and execution times of queries such as this on normalized data with proper indexes should be less than 100 ms on decent hardware.
I hope I didn't oversimplify the case to the point of no comparison (as I stated before, some simplification is present, since I don't know the cardinality of the relationships between catalog and the many-to-many tables).
So, the conclusion is:
if I were you, I would not stop tweaking the indexes (and the SQL) until the performance of the query goes below 200 ms, as a rule of thumb.
only if I found an objective explanation of why it can't go below that value would I resort to denormalization and/or cursors, etc...
First, I assume that your university and securitygroup tables are rather small. You posted the sizes of the large tables, but it's really the other sizes that are part of the problem.
Your problem comes from the fact that you can't join the smallest tables first. Your join order should be from small to large. But because your mapping tables don't include a securitygroup-to-university table, you can't join the smallest ones first. So you wind up starting with one or the other, going to a big table, then to another big table, and only then, with that large intermediate result, do you get to a small table.
If you always have current_univ, current_secgrp, and root as inputs, you will want to use them to filter as soon as possible. The only way to do that is to change your schema somewhat. In fact, you can leave the existing tables in place if you have to, but you'll be adding to the space with this suggestion.
You've normalized the data very well. That's great for speed of update... not so great for querying. We denormalize to speed up querying (that's the whole reason for data warehouses; OK, that and history). Build a single mapping table with the following columns:
Univ_id, SecGrp_ID, Root, Catalog_id. Make it an index-organized table keyed on the first three columns.
Now when you query that index with all three leading values, you'll finish the index scan with a complete list of allowable catalog ids; then it's just a single join to the catalog table to get the item details, and you're off and running.
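A hedged DDL sketch of that table (the catalog_access name is made up, and catalog_id is included in the key because an index-organized table's primary key must be unique):

CREATE TABLE catalog_access (
  univ_id    NUMBER,
  secgrp_id  NUMBER,
  root       NUMBER,
  catalog_id NUMBER,
  CONSTRAINT catalog_access_pk
    PRIMARY KEY (univ_id, secgrp_id, root, catalog_id)
) ORGANIZATION INDEX;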
The Oracle cost-based optimizer makes use of all the information that it has to decide what the best access paths are for the data and what the least costly methods are for getting that data. So below are some random points related to your question.
The first three tables that you've listed all have primary keys. Do the other tables (catalog_university and catalog_securitygroup) also have primary keys on them? A primary key defines a column or set of columns that are non-null and unique, and is very important in a relational database.
Oracle generally enforces a primary key by generating a unique index on the given columns. The Oracle optimizer is more likely to make use of a unique index if one is available, as it is likely to be more selective.
If possible, an index that contains unique values should be defined as unique (CREATE UNIQUE INDEX ...), as this provides the optimizer with more information.
The additional indexes that you have provided are no more selective than the existing indexes. For example, the index on (catalog.id, catalog.name, catalog.owner) is unique but is less useful than the existing primary key index on (catalog.id). If a query is written to select on the catalog.name column, it is possible to do an index skip scan, but this starts being costly (and may not even be possible in this case).
Since you are trying to select based on the catalog.root column, it might be worth adding an index on that column. This would mean that it could quickly find the relevant rows from the catalog table. The timing for the second query could be a bit misleading: it might be taking 2 seconds to find 50 matching rows from catalog, but these could easily be the first 50 rows in the catalog table. Finding 50 that match all your conditions might take longer, and not just because you need to join to the other tables to get them. I would always use CREATE TABLE AS SELECT without restricting on ROWNUM when trying to performance tune. With a complex query I generally care about how long it takes to get all the rows back, and a simple select with ROWNUM can be misleading.
Everything about Oracle performance tuning is about providing the optimizer enough information and the right tools (indexes, constraints, etc) to do its job properly. For this reason it's important to get optimizer statistics using something like DBMS_STATS.GATHER_TABLE_STATS(). Indexes should have stats gathered automatically in Oracle 10g or later.
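For example, something like this gathers statistics on the catalog table (a sketch; CASCADE => TRUE also gathers stats on the table's indexes):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'CATALOG',
    cascade => TRUE
  );
END;
/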
Somehow this grew into quite a long answer about the Oracle optimizer. Hopefully some of it answers your question. Here is a summary of what is said above:
Give the optimizer as much information as possible, e.g. if an index is unique, then declare it as such.
Add indexes on your access paths
Find the correct timings for queries without limiting by ROWNUM. It will always be quicker to find the first 50 M&Ms in a jar than to find the first 50 red M&Ms.
Gather optimizer stats
Add unique/primary keys on all tables where they exist.
The use of ROWNUM here is wrong and causes all the rows to be processed. It will process all the rows, assign them all a row number, and then find those between 0 and 50. What you want to look for in the explain plan is COUNT STOPKEY rather than just COUNT.
The query below should be an improvement as it will only get the first 50 rows... but there is still the issue of the joins to look at too:
SELECT ccc.* FROM (
SELECT cc.*, ROWNUM AS n FROM (
SELECT c.id, c.name, c.owner
FROM catalog c
WHERE c.root = 100
ORDER BY name
) cc
WHERE ROWNUM <= 50
) ccc WHERE ccc.n > 0 AND ccc.n <= 50;
Also, assuming this for a web page or something similar, maybe there is a better way to handle this than just running the query again to get the data for the next page.
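One option worth sketching is "seek" (keyset) pagination: remember the name and id of the last row on the current page and start the next page from there, rather than renumbering all rows each time. The :last_name and :last_id binds are assumed to be kept by the caller, and the securitygroup/university joins are omitted for brevity:

SELECT * FROM (
  SELECT c.id, c.name, c.owner
  FROM catalog c
  WHERE c.root = 100
    AND (c.name > :last_name
         OR (c.name = :last_name AND c.id > :last_id))
  ORDER BY c.name, c.id -- id breaks ties between duplicate names
) WHERE ROWNUM <= 50;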
Try declaring a cursor. I don't know Oracle, but in SQL Server it would look like this:
declare @result table (
    id numeric,
    name varchar(255)
);

declare __dyn_select_cursor cursor local scroll dynamic for
    --Select
    select distinct c.id, c.name
    from [catalog] c
    inner join catalog_university u
        on u.catalog = c.id
        and u.university = 300
    inner join catalog_securitygroup s
        on s.catalog = c.id
        and s.securitygroup = 200
    where c.root = 100
    order by c.name;

--Cursor
declare @id numeric;
declare @name varchar(255);
declare @maxrowscount int;

open __dyn_select_cursor;
fetch relative 1 from __dyn_select_cursor into @id, @name;
set @maxrowscount = 50;
while (@@fetch_status = 0 and @maxrowscount <> 0)
begin
    insert into @result values (@id, @name);
    set @maxrowscount = @maxrowscount - 1;
    fetch next from __dyn_select_cursor into @id, @name;
end
close __dyn_select_cursor;
deallocate __dyn_select_cursor;

--Select temp, final result
select id, name from @result;

Query Optimization

I'm having a problem with a query generated by the Mondrian OLAP server; it's taking too long. The query looks like this:
select "TABLE1"."column0" as "c0" from "FACT_TABLE" as "FACT_TABLE",
"TABLE7" as "TABLE7",
"TABLE6" as "TABLE6",
"TABLE5" as "TABLE5",
"TABLE4" as "TABLE4",
"TABLE4" as "TABLE3",
"TABLE2" as "TABLE2",
"TABLE1" as "TABLE2"
where "FACT_TABLE"."column1" = 'VALUE' and
"FACT_TABLE"."column2" =0 and
"FACT_TABLE"."column3" = 0 and
"TABLE2"."table1Fk" = "TABLE1"."table1Id" and
"TABLE3"."table2Fk" = "TABLE2"."table2Id" and
"TABLE4"."table3Fk" = "TABLE3"."table3Id" and
"TABLE5"."table4Fk" = "TABLE4"."table4Id" and
"TABLE6"."table5Fk" = "TABLE5"."table5Id" and
"TABLE7"."table6Fk" = "TABLE6"."table6Id" and
"FACT_TABLE"."table7Fk" = "TABLE7"."table7Id"
group by "TABLE1"."column0"
order by "TABLE1"."column0" ASC
FACT_TABLE has approx. 1,346,000 rows.
TABLE7 has approx. 895 rows.
TABLE6 has 445 rows.
TABLE5 has 183 rows.
TABLE4 has 258 rows.
TABLE3 = TABLE4.
TABLE2 has 126 rows.
TABLE1 has 29 rows.
The query takes approx. 2.00 seconds in the production environment, and I really don't know what to do to improve query performance.
I'm on 8 GB of RAM with an Intel Xeon L5420 at 2.50 GHz, running MSSQL 2005.
I'd appreciate any help that you can give me.
Have a look at the execution plan. It will notify you if it notices any tables that could use a non-clustered index to improve query time.
Link
That link helps a lot with optimizing queries (and learning a ton of other things).
I would suspect that you don't have indexes on your foreign key fields.
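If so, a sketch of the kind of indexes to add, using the obfuscated names from the query (adapt them to your real tables):

CREATE NONCLUSTERED INDEX IX_FACT_TABLE_table7Fk ON FACT_TABLE (table7Fk);
CREATE NONCLUSTERED INDEX IX_TABLE2_table1Fk ON TABLE2 (table1Fk);
-- ...and similarly for the remaining table*Fk join columns.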
This is not causing your problem but I personally wouldn't want to use a tool that creates implicit instead of explicit joins. How can you trust something that is using a syntax that was replaced 18 years ago with something better?
Oh, something else I just noticed: I don't see any aggregates, so why are you grouping by?

How to check Oracle database for long running queries

My application, which uses an Oracle database, is going slow or appears to have stopped completely.
How can I find out which queries are the most expensive, so I can investigate further?
This one shows SQL that is currently "ACTIVE":
select S.USERNAME, s.sid, s.osuser, t.sql_id, sql_text
from v$sqltext_with_newlines t,V$SESSION s
where t.address =s.sql_address
and t.hash_value = s.sql_hash_value
and s.status = 'ACTIVE'
and s.username <> 'SYSTEM'
order by s.sid,t.piece
/
This shows locks. Sometimes things are going slow, but it's because it is blocked waiting for a lock:
select
object_name,
object_type,
session_id,
type, -- Type or system/user lock
lmode, -- lock mode in which session holds lock
request,
block,
ctime -- Time since current mode was granted
from
v$locked_object, all_objects, v$lock
where
v$locked_object.object_id = all_objects.object_id AND
v$lock.id1 = all_objects.object_id AND
v$lock.sid = v$locked_object.session_id
order by
session_id, ctime desc, object_name
/
This is a good one for finding long operations (e.g. full table scans). If it is because of lots of short operations, nothing will show up.
COLUMN percent FORMAT 999.99
SELECT sid, to_char(start_time,'hh24:mi:ss') stime,
message,( sofar/totalwork)* 100 percent
FROM v$session_longops
WHERE totalwork != 0 -- guard against divide-by-zero
AND sofar/totalwork < 1
/
Try this; it will give you queries that have been running for more than 60 seconds. Note that it prints multiple lines per running query if the SQL has multiple lines. Look at sid, serial# to see what belongs together.
select s.username,s.sid,s.serial#,s.last_call_et/60 mins_running,q.sql_text from v$session s
join v$sqltext_with_newlines q
on s.sql_address = q.address
where status='ACTIVE'
and type <>'BACKGROUND'
and last_call_et> 60
order by sid,serial#,q.piece
v$session_longops
If you look for sofar != totalwork you'll see ones that haven't completed, but the entries aren't removed when the operation completes so you can see a lot of history there too.
Step 1: Execute the query
column username format 'a10'
column osuser format 'a10'
column module format 'a16'
column program_name format 'a20'
column program format 'a20'
column machine format 'a20'
column action format 'a20'
column sid format '9999'
column serial# format '99999'
column spid format '99999'
set linesize 200
set pagesize 30
select
a.sid,a.serial#,a.username,a.osuser,c.start_time,
b.spid,a.status,a.machine,
a.action,a.module,a.program
from
v$session a, v$process b, v$transaction c,
v$sqlarea s
Where
a.paddr = b.addr
and a.saddr = c.ses_addr
and a.sql_address = s.address (+)
and to_date(c.start_time,'mm/dd/yy hh24:mi:ss') <= sysdate - (15/1440) -- running for 15 minutes
order by c.start_time
/
Step 2: desc v$session
Step 3: select sid, serial#, SQL_ADDRESS, status, PREV_SQL_ADDR from v$session where sid = 'xxxx'; -- enter the sid value
Step 4: select sql_text from v$sqltext where address = 'XXXXXXXX';
Step 5: select piece, sql_text from v$sqltext where address = 'XXXXXX' order by piece;
You can use the v$sql_monitor view to find queries that are running longer than 5 seconds. This may only be available in Enterprise editions of Oracle. For example, this query will identify slow-running queries from my TEST_APP service:
select to_char(sql_exec_start, 'dd-Mon hh24:mi'), (elapsed_time / 1000000) run_time,
cpu_time, sql_id, sql_text
from v$sql_monitor
where service_name = 'TEST_APP'
order by 1 desc;
Note that elapsed_time is in microseconds, so divide by 1,000,000 to get something more readable.
You can generate an AWR (automatic workload repository) report from the database.
Run from the SQL*Plus command line:
SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql
Read the documentation on how to generate and understand an AWR report. It gives a complete view of database performance and resource issues. Once you are familiar with the AWR report, it is a great help in finding the top SQL that is consuming resources.
Also, in the 12c EM Express UI you can generate an AWR report.
You can check the details of long-running queries, like % completed and remaining time, using the query below:
SELECT SID, SERIAL#, OPNAME, CONTEXT, SOFAR,
TOTALWORK, ROUND(SOFAR/TOTALWORK*100, 2) "%_COMPLETE"
FROM V$SESSION_LONGOPS
WHERE OPNAME NOT LIKE '%aggregate%'
AND TOTALWORK != 0
AND SOFAR <> TOTALWORK;
For the complete list of troubleshooting steps, you can check here: Troubleshooting long running sessions
select sq.PARSING_SCHEMA_NAME, sq.LAST_LOAD_TIME, sq.ELAPSED_TIME, sq.ROWS_PROCESSED, ltrim(sq.sql_text), sq.SQL_FULLTEXT
from v$sql sq
order by sq.ELAPSED_TIME desc, sq.LAST_LOAD_TIME desc;
