The Oracle FAQ defines temporary tablespace as follows:
Temporary tablespaces are used to manage space for database sort operations and for storing global temporary tables. For example, if you join two large tables, and Oracle cannot do the sort in memory, space will be allocated in a temporary tablespace for doing the sort operation.
That's great, but I need more detail about what exactly is using the space. Due to quirks of the application design, most queries do some kind of sorting, so I need to narrow it down to client executable, target table, or SQL statement.
Essentially, I'm looking for clues to tell me more precisely what might be wrong with this (rather large application). Any sort of clue might be useful, so long as it is more precise than "sorting".
I'm not sure exactly what information you have to hand already, but the following query will show which programs/users/sessions etc. are currently using your temp space.
SELECT b.TABLESPACE
, b.segfile#
, b.segblk#
, ROUND ( ( ( b.blocks * p.VALUE ) / 1024 / 1024 ), 2 ) size_mb
, a.SID
, a.serial#
, a.username
, a.osuser
, a.program
, a.status
FROM v$session a
, v$sort_usage b
, v$process c
, v$parameter p
WHERE p.NAME = 'db_block_size'
AND a.saddr = b.session_addr
AND a.paddr = c.addr
ORDER BY b.TABLESPACE
, b.segfile#
, b.segblk#
, b.blocks;
Once you find out which session is doing the damage, then have a look at the SQL being executed, and you should be on the right path.
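If it helps, here is a minimal follow-up sketch (assuming 10g or later, where v$session carries sql_id) to pull the text of the statement a suspect session is currently running; :sid is a bind placeholder for the SID reported by the query above:

SELECT s.SID
, s.serial#
, q.sql_text
FROM v$session s
JOIN v$sql q ON q.sql_id = s.sql_id
WHERE s.SID = :sid;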
Thanks go to Michael OShea for his answer, but if you have Oracle RAC with multiple instances, then you will need this:
SELECT b.TABLESPACE
, b.segfile#
, b.segblk#
, ROUND ( ( ( b.blocks * p.VALUE ) / 1024 / 1024 ), 2 ) size_mb
, a.inst_ID
, a.SID
, a.serial#
, a.username
, a.osuser
, a.program
, a.status
FROM gv$session a
, gv$sort_usage b
, gv$process c
, gv$parameter p
WHERE p.NAME = 'db_block_size'
AND a.saddr = b.session_addr
AND a.paddr = c.addr
-- AND b.TABLESPACE='TEMP2'
ORDER BY a.inst_ID , b.TABLESPACE
, b.segfile#
, b.segblk#
, b.blocks;
And this is the script to generate the kill statements. Please review which sessions you will be killing before you run them:
SELECT b.TABLESPACE, a.username , a.osuser , a.program , a.status ,
'ALTER SYSTEM KILL SESSION '''||a.SID||','||a.SERIAL#||',@'||a.inst_ID||''' IMMEDIATE;'
FROM gv$session a
, gv$sort_usage b
, gv$process c
, gv$parameter p
WHERE p.NAME = 'db_block_size'
AND a.saddr = b.session_addr
AND a.paddr = c.addr
-- AND b.TABLESPACE='TEMP'
ORDER BY a.inst_ID , b.TABLESPACE
, b.segfile#
, b.segblk#
, b.blocks;
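As a usage note, the generated RAC kill statement places the instance ID after an @ sign; with example (made-up) values it looks like:

ALTER SYSTEM KILL SESSION '130,21367,@1' IMMEDIATE;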
One rule of thumb is that almost any query that takes more than a second probably uses some TEMP space, and these are not just the ones involving ORDER BY but also:
GROUP BYs (SORT GROUP BY before 10.2, HASH GROUP BY from 10.2 onwards)
HASH JOINs or MERGE JOINs
Global Temp Tables (obviously)
Index rebuilds
Occasionally, used space in temp tablespaces doesn't get released by Oracle (a bug/quirk), so you need to manually drop a file from the tablespace, drop it from the file system, and create another one.
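A minimal sketch of that manual cleanup, assuming a tablespace named TEMP and hypothetical file paths (adjust both to your environment):

-- drop the stuck tempfile (removes the file from disk as well)
ALTER DATABASE TEMPFILE '/u01/oradata/ORCL/temp01.dbf' DROP INCLUDING DATAFILES;
-- add a fresh tempfile in its place
ALTER TABLESPACE temp ADD TEMPFILE '/u01/oradata/ORCL/temp02.dbf' SIZE 2G AUTOEXTEND ON NEXT 256M MAXSIZE 8G;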
We have a process that imports Jira data into an Oracle database for reporting. The issue I am having at the moment is extracting the custom fields and converting rows into columns in Oracle.
(Screenshots referenced: jira custom data view; jira data view.)
This is how I am extracting the data; the problem here is that the performance just does not scale.
select A.*, (select cf.date_value from v_jira_custom_fields cf where cf.issue_id = a.issue_id and cf.custom_field_name = 'Start Date') Start_Date,
(select cf.number_value from v_jira_custom_fields cf where cf.issue_id = a.issue_id and cf.custom_field_name = 'Story Points') Story_Points,
(select cf.custom_value from v_jira_custom_fields cf where cf.issue_id = a.issue_id and cf.custom_field_name = 'Ready') Ready
from jira_data A
where A.project = 'DAK'
and A.issue_id = 2222
To really understand where the bottleneck is, we'd need an execution plan and info about the indexes that exist, at least.
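For reference, a quick way to capture a plan is EXPLAIN PLAN plus DBMS_XPLAN; a minimal sketch against a stripped-down version of the statement above:

EXPLAIN PLAN FOR
SELECT A.* FROM jira_data A WHERE A.project = 'DAK' AND A.issue_id = 2222;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);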
Assuming you have indexes on issue_id and project in both tables, what I'd try next is to get rid of the 3 separate scalar subqueries and join your jira_data to a pivoted v_jira_custom_fields:
with P as (
select
project
, issue_id
, story_type_s
, impacted_application_s
, impacted_application_c
, story_points_n
, start_date_d
, end_date_d
, ready_c
from v_jira_custom_fields
pivot (
max(string_value) as s
, max(number_value) as n
, max(text_value) as t
, max(date_value) as d
, max(custom_value) as c
for customfield_id in (
1 story_type
, 2 impacted_application
, 3 story_points
, 4 start_date
, 5 end_date
, 6 ready
)
)
)
select
A.*
, P.start_date_d start_date
, P.story_points_n story_points
, P.ready_c ready
from jira_data A
join P on A.project = P.project and A.issue_id = P.issue_id
where A.project = 'DAK'
and A.issue_id = 2222
Hi all. I have a weird situation when executing a native SQL query against an Oracle database. If the query is executed via SQL client software (in my case, DbVisualizer), I get one result set; if my application (Java, Spring-based) executes it, the result is different.
select
c.id
, c.parentId
, c.name
, c.sequence
, c.isSuppressed
, c.isGotoCategory
, c.hasChildren
, c.startDate
, c.endDate
from (
select
category.category_id as id
, category.parent_category_id as parentId
, nvl(context.category_name, category.category_name) as name
, nvl(context.sequence_num, category.sequence) as sequence
, nvl(context.is_suppressed, 'N') as isSuppressed
, decode(category.syndicate_url, null, 'N', 'Y') as isGotoCategory
, decode(category.is_leaf, 1, 'N', 'Y') as hasChildren
, category.start_date as startDate
, category.end_date as endDate
from (
select
category.category_id
, category.parent_category_id
, category.category_name
, category.start_date
, category.end_date
, category.syndicate_url
, category.sequence
, connect_by_isleaf as is_leaf
, level as category_level
from
category
start with
category.category_id = (
select
category_id
from
category
where
parent_category_id is null
start with
category_id = 3485
connect by prior parent_category_id = category_id
)
connect by category.parent_category_id = prior category.category_id and level <= (4 + 1)
) category
inner join category_destination_channel channel on channel.category_id = category.category_id
and channel.publish_flag = 'Y'
and channel.destination_channel_id = 1
left join contextual_category context on context.category_id = category.category_id
and context.context_type = 'DESKTOP'
where
category.category_level <= 4
and category.start_date <= sysdate
and category.end_date >= sysdate
) c
where
c.isSuppressed <> 'Y'
The query above is the problematic one. When executed via the SQL client, the outer restriction (c.isSuppressed <> 'Y') is applied and the records are filtered out. When the query is executed by the application, the outer restriction doesn't seem to be applied at all, and my result set has records that should not be there.
Has anyone faced this kind of problem before?
My application is built with Java 7, Spring 4.x, and Oracle 11 (OJDBC driver version 11.2.0.3). The application server is JBoss EAP 6.3, but my tests are made with Jetty (maven-jetty-plugin 6.1.26).
I have already considered some possible causes of the problem - the application accessing the wrong database, an unusual issue while using @SqlResultSetMapping - but ruled them out with some tests. I don't know what to consider anymore.
Any help is appreciated. Thanks in advance.
I need to create a query such that, if a certain field is blank or null, a select against another table retrieves the missing value. Could you please advise on a way to accomplish this? Below is the query. The field in question is BEAT.
SELECT COALESCE(ADDRESSES.BEAT,Incident_addresses.beat)
, COALESCE (ADDRESSES.SUB_BEAT,Incident_addresses.sub_beat)
, ADDRESSES.STREET_NAME
, ADDRESSES.STREET_NUMBER
, ADDRESSES.SUB_NUMBER
, WARRANT_PEOPLE_VW.LNAME
, WARRANT_PEOPLE_VW.FNAME
, WARRANT_PEOPLE_VW.DOB
, WARRANT_PEOPLE_VW.RACE_RACE_CODE
, WARRANT_PEOPLE_VW.SEX_SEX_CODE
, WARRANT_PEOPLE_VW.CASE_NUMBER
, E_WARRANTS.DATE_ISSUED
, E_WARRANTS.TELETYPE_NUMBER
, E_WARRANTS.ORDINANCE_VIOLATION
FROM EJSDBA.ADDRESSES
, POL_LEEAL.E_WARRANTS
, POL_LEEAL.WARRANT_PEOPLE_VW,incident_people,Incident_addresses
WHERE ADDRESSES.ADDRESS_ID =E_WARRANTS.ADDR_ADDRESS_ID
AND E_WARRANTS.WARRANT_ID = WARRANT_PEOPLE_VW.WARRANT_ID
AND WARRANT_PEOPLE_VW.NME_TYP_NAME_TYPE_CODE = 'P'
AND WARRANT_PEOPLE_VW.AGNCY_CD_AGENCY_CODE = 'MCPD'
AND WARRANT_PEOPLE_VW.WSC_CODE='A'
AND EJSDBA.ADDRESSES.ADDRESS_ID= Incident_addresses.ADDRESS_ID
and incident_people.inc_incident_id=Incident_addresses.incident_id
ORDER BY ADDRESSES.BEAT
, ADDRESSES.SUB_BEAT
, ADDRESSES.STREET_NAME
, ADDRESSES.STREET_NUMBER
;
You can embed a correlated subquery in a CASE expression, but you MUST correlate the subquery to values from the outer query so that the correct value can be located.
SELECT
COALESCE(ADDRESSES.BEAT, Incident_addresses.beat)
, CASE
WHEN Addresses.BEAT IS NULL THEN (
SELECT
Beat
FROM incidents inner_ref
WHERE ???outer??.incident_id = inner_ref.id
AND rownum = 1)
END AS x
, COALESCE(ADDRESSES.SUB_BEAT, Incident_addresses.sub_beat)
...
and that subquery MUST return only a single value (hence I have used "and rownum = 1").
(In Oracle 12 you could use FETCH FIRST 1 ROW ONLY)
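For example, the 12c form of the branch above might look like this (a sketch only; I'm guessing Incident_addresses.incident_id as the outer correlation, so replace it with your actual outer reference):

CASE
  WHEN Addresses.BEAT IS NULL THEN (
    SELECT Beat
    FROM incidents inner_ref
    WHERE inner_ref.id = Incident_addresses.incident_id
    FETCH FIRST 1 ROW ONLY)
END AS x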
I have created a query that is used to display data in a label. This particular query will then be stored in a program that we use. The query ran just fine until this morning, when it started returning the error ORA-30928: "Connect by filtering phase runs out of temp tablespace". I have Googled it and found that I can do any of the following:
Include a NO FILTERING hint - but it did not work properly
Increase the temp tablespace - not applicable to me, since this runs on a production server that I don't have access to.
Are there other ways to fix this? By the way, below is the query that I use.
SELECT * FROM(
SELECT
gn.wipdatavalue
, gn.containername
, gn.l
, gn.q
, gn.d
, gn.l2
, gn.q2
, gn.d2
, gn.l3
, gn.q3
, gn.d3
, gn.old
, gn.qtyperbox
, gn.productname
, gn.slot
, gn.dt
, gn.ws_green
, gn.ws_pnr
, gn.ws_pcn
, intn.mkt_number dsn
, gn.low_number
, gn.high_number
, gn.msl
, gn.baketime
, gn.exptime
, NVL(gn.q, 0) + NVL(gn.q2, 0) + NVL(gn.q3, 0) AS qtybox
, row_number () over (partition by slot order by low_number) as n
FROM
(
SELECT
tr.*
, TO_NUMBER(SUBSTR(wipdatavalue, 1, INSTR (wipdatavalue || '-', '-') - 1)) AS low_number
, TO_NUMBER(SUBSTR(wipdatavalue, 1 + INSTR ( wipdatavalue, '-'))) AS high_number
, pm.msllevel MSL
, pm.baketime BAKETIME
, pm.expstime EXPTIME
FROM trprinting tr
JOIN CONTAINER c ON tr.containername = c.containername
JOIN a_lotattributes ala ON c.containerid = ala.containerid
JOIN product p ON c.productid = p.productid
LEFT JOIN otherdb.pkg_main pm ON trim(p.brandname) = trim(pm.pcode)
WHERE (c.containername = :lot OR tr.SLOT= :lot)
)gn
LEFT JOIN otherdb.intnr intn ON TRIM(gn.productname) = TRIM(intn.part_number)
connect by level <= HIGH_NUMBER + 1 - LOW_NUMBER and LOW_NUMBER = prior LOW_NUMBER and prior SYS_GUID() is not null
ORDER BY low_number,n
)
WHERE n LIKE :n AND wipdatavalue LIKE :wip AND ROWNUM <= 300 AND wipdatavalue NOT LIKE 0
I am using Oracle 11g too.
Thanks for the help everyone.
I have Oracle 10g installed on Windows Server 2003. I have 22,000,000 records in a single table; this is a transactional table, growing by approximately 50,000 records per month.
My question: whenever I run a query against it, the query is always too slow. Is there any method by which I can improve the performance of the query, such as partitioning the table?
The query is:
select a.prd_code
, a.br_code||'-'||br_title
, a.size_code||'-'||size_title
,size_in_gms
, a.var_code||'-'||var_title
, a.form_code||'-'||form_title
, a.pack_code||'-'||pack_title
, a.pack_type_code||'-'||pack_type_title
, start_date
, end_date
, a.price
from prices a
, brand br
, (select distinct prd_code
, br_code
, size_code
, var_code
, form_code
,packing_code
, pack_type_code
from cphistory
where prd_code = '01'
and flag = 'Y'
and project_yy = '2009' and '01' and '10') cp
, (select prd_code
, br_code
, size_code
, size_in_gms
from sizes
where prd_code = '01'
and end_date = '31-dec-2050'
and flag = 'Y') sz
, (select prd_code
, br_code
, var_code
, var_title
from varient) vt
, (select prd_code
, br_code
, form_code
, form_title
from form) fm
, (select prd_code
, br_code
, pack_code
, pack_title
from package) pc
, (select prd_code
, pack_type_code
, pack_type_title
from pack_type) pt
where a.prd_code = br.prd_code
and a.br_code = br.br_code
and a.prd_code = sz.prd_code
and a.br_code = sz.br_code
and a.size_code = sz.size_code
and a.prd_code = vt.prd_code
and a.br_code = vt.br_code
and a.var_code = vt.var_code
and a.prd_code = fm.prd_code
and a.br_code = fm.br_code
and a.form_code = fm.form_code
and a.prd_code = pc.prd_code
and a.br_code = pc.br_code
and a.pack_code = pc.pack_code
and a.prd_code = pt.prd_code
and a.pack_type_code = pt.pack_type_code
and end_date = '2009'
and prd_code = '01'
order by a.prd_code
, a.br_code
, a.size_code
, a.var_code
, a.pack_code
, a.form_code
Tables used in this query are:
prices : has more than 2.1M rows
cphistory : has more than 2.2M rows
sizes : has more than 5000 rows
brand : has more than 1200 rows
varient : has more than 1800 rows
package : has more than 200 rows
pack_type : has more than 150 rows
Check indexes. Make sure you have a primary key. Alternate candidate keys should have unique constraints and indexes.
Run EXPLAIN PLAN on queries and see how the optimizer is running them. If you see TABLE SCAN, add indexes.
Make sure the optimizer is using statistics to make its job easier.
Move historical data into warehouses if you must.
22M records isn't that enormous.
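On the statistics point, a minimal sketch using the standard DBMS_STATS API (PRICES is the asker's largest table; repeat for the others):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER        -- current schema
  , tabname => 'PRICES'
  , cascade => TRUE        -- gather index statistics too
  );
END;
/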
You should probably start with explain plans, but do this:
1. Take each of the queries in the "FROM" clause out and do an explain plan on each one. Verify that each one is hitting an appropriate index. If not, add indexes for each of them so that each of the sub-queries is fast (a hypothetical index sketch follows this answer).
2. (Only if lots of data is returned.) Take the ORDER BY out of the main query and run it. See if it is a lot faster. If so, your time is spent sorting the data, and you need to look into why you have a slow sort.
3. Pull out the sub-queries. "vt", "fm", "pc" and "pt" pull in their entire tables. When I test this, putting in sub-queries like this causes 10g to miss the indexes on the tables completely. Just put the tables into the FROM clause and let the Oracle optimizer use the indexes.
4. Try folding all the criteria on "cp" and "sz" into the main query, remove those sub-queries, and see if that makes a difference.
Lots and lots of explain plans, and more than a little careful thought. I wish that I could help more.
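For step 1, here is a purely hypothetical index sketch; the name and column list must be driven by the predicates your explain plans actually show:

CREATE INDEX sizes_lookup_ix ON sizes (prd_code, br_code, end_date, flag);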
Why are the queries slow? Are they doing table scans on the large table? Normally, OLTP queries would be fetching a relatively small number of rows based on a primary key or other indexed column. If your queries are not using indexes and they are the typical sort of OLTP queries that would benefit from using indexes, that would be the place to start.
If you regularly need to query a large number of rows from this table, such that a table scan would be the more efficient access path, you could look into either using materialized views to pre-aggregate the data or into partitioning the table. Partitioning, however, is an extra cost option on top of your enterprise edition license, so you'll generally want to exhaust your other options before going down that path.
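If pre-aggregation fits the reporting pattern, a materialized view sketch along these lines might help (the grouping columns are hypothetical):

CREATE MATERIALIZED VIEW prices_by_brand_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT prd_code
, br_code
, COUNT(*) AS row_cnt
, AVG(price) AS avg_price
FROM prices
GROUP BY prd_code, br_code;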