enq: TA - contention issue in Oracle Database

When there is a peak load in my application, several requests fail or slow down due to the above error in the Oracle database. When I check the top queries ordered by buffer gets, I see the Oracle-internal queries below at the top. What exactly could be happening here?
select us#, status$, user#, ts#, spare1 from undo$ where ts# = :1 order by us# desc
select us# from undo$ where status$ = :1 and xactsqn < :2 order by us# desc
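The two undo$ statements are Oracle's own recursive SQL for managing undo segments, so time spent in them usually points at undo contention rather than at application SQL. A minimal diagnostic sketch, assuming you have SELECT privileges on the v$ views, is to look at undo workload and peak transaction concurrency per 10-minute interval:

```sql
-- Sketch: inspect undo workload around the time of the spike.
-- v$undostat keeps one row per 10-minute interval.
select begin_time,
       end_time,
       undoblks,        -- undo blocks consumed in the interval
       txncount,        -- transactions executed
       maxconcurrency,  -- peak concurrent transactions
       nospaceerrcnt    -- out-of-undo-space errors, if any
from   v$undostat
order  by begin_time desc;
```

If maxconcurrency spikes together with your load peaks, sizing of the undo tablespace (or, under manual undo management, the number of rollback segments) is the usual direction to investigate.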

Related

ORA-02395: exceeded call limit on IO usage and using cursors as alternative

I have a PL/SQL query that uses MINUS:
select id from small_table where col ='xxx'
MINUS
select id from large_table;
large_table has 139,070 rows and small_table has 7,459 rows. I get ORA-02395: exceeded call limit on IO usage when executing it. I have tried replacing MINUS with NOT IN and NOT EXISTS. I have read about the error, and I cannot negotiate with the DBA to change LOGICAL_READS_PER_CALL. Can I use two cursors to fetch the data from the two tables and then do the MINUS-equivalent logic on the PL/SQL side, or will I get ORA-02395 even with cursor logic? Or can I rewrite the query itself?
Also, what is the maximum number of rows that can be fetched with a cursor using BULK COLLECT INTO a TABLE OF ***?
You could try this:
select s.id
from small_table s
left join large_table b
on s.id = b.id
where b.id is null
and s.col = 'xxx'
If the obvious solution (negotiating with your DBA) is not possible, you will need to refactor your query to reduce the number of blocks it scans. Doing so requires an understanding of your data volume and distribution. Is this program part of software that runs in a production system? Your user has a profile resource limit precisely to prevent you from exhausting system resources. What explanation did your DBA give for denying your request?
You might use BULK COLLECT INTO with the LIMIT clause. That clause caps the number of rows fetched per call, but the technique as a whole consumes even more resources, so I don't believe it would help here.
Without seeing your whole program it is very difficult to suggest a workaround for your profile limitation.
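For completeness, the two-cursor idea from the question could look roughly like this. This is only a sketch, assuming id is a numeric column; whether it actually avoids ORA-02395 depends on how the profile limit counts the reads performed by each FETCH:

```sql
declare
  -- associative array keyed by id: assumes id is numeric
  type t_flag is table of boolean index by pls_integer;
  type t_ids  is table of large_table.id%type;
  l_keep  t_flag;
  l_batch t_ids;
  l_id    pls_integer;
  cursor c_large is select id from large_table;
begin
  -- mark every id from small_table that matches the filter
  for r in (select id from small_table where col = 'xxx') loop
    l_keep(r.id) := true;
  end loop;

  open c_large;
  loop
    -- LIMIT keeps PGA usage bounded per fetch
    fetch c_large bulk collect into l_batch limit 1000;
    exit when l_batch.count = 0;
    for i in 1 .. l_batch.count loop
      if l_keep.exists(l_batch(i)) then
        l_keep.delete(l_batch(i));  -- id also in large_table: drop it
      end if;
    end loop;
  end loop;
  close c_large;

  -- whatever remains is the MINUS result
  l_id := l_keep.first;
  while l_id is not null loop
    dbms_output.put_line(l_id);
    l_id := l_keep.next(l_id);
  end loop;
end;
/
```

Note that this still reads every block of both tables, just spread across many fetch calls, which is why it may or may not stay under the per-call limit.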

Oracle job to count the number of active connections in the database

I am using an Oracle DB and monitoring the performance of the database, which is connected to my Java application. I have to monitor the count of active connections in the DB at regular intervals, say every 30 minutes. Below is the query that returns the count of active connections per OS user:
select osuser, count(osuser) as active_conn_count
from v$session
group by osuser
order by active_conn_count desc
Now please advise how I can create a job in the Oracle DB itself that is triggered every 30 minutes and captures the result of the above query (the count of active connections per user), as well as the Oracle DB memory usage at that time.
You should remove this question and ask it as two separate questions (see my comment under your question). You are asking two things: 1. how to get memory usage; 2. how to schedule the job. I'm answering only the first question here.
To get memory usage you could use something like this (with the output from my database at this moment):
select case grouping_id(nm) when 1 then 'total' else nm end as nm,
round(sum(val/1024/1024)) mb
from (
select 'sga' nm, sum(value) val
from v$sga
union all
select 'pga', sum(value)
from v$sysstat
where name = 'session pga memory'
)
group by rollup(nm)
;
NM        MB
-----   ----
pga       49
sga     4896
total   4945
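For the scheduling half of the question, one hedged sketch is a DBMS_SCHEDULER job that runs a small PL/SQL block every 30 minutes. The logging table conn_count_log below is a hypothetical name you would create yourself, and a definer's-rights job needs SELECT granted directly on V_$SESSION (not via a role):

```sql
-- Hypothetical logging table for the samples:
create table conn_count_log (
  sample_time       timestamp default systimestamp,
  osuser            varchar2(128),
  active_conn_count number
);

begin
  dbms_scheduler.create_job(
    job_name        => 'CAPTURE_CONN_COUNTS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[begin
                            insert into conn_count_log (osuser, active_conn_count)
                            select osuser, count(*)
                            from   v$session
                            group  by osuser;
                            commit;
                          end;]',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=30',  -- every 30 minutes
    enabled         => true);
end;
/
```

The memory-usage query from the answer above could be added to the same block, inserting into a second logging table of your own design.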

Oracle SQL query improves performance on second and third execution

We are analyzing SQL statements on an Oracle 12c database. We noticed that the following statement improved when run several times. How can it be explained that it gets faster on the second and third execution?
SELECT COUNT (*)
FROM asset
WHERE ( ( (status NOT IN ( 'x1', 'x2', 'x3'))
AND ( (siteid = 'xxx')))
AND (EXISTS
(SELECT siteid
FROM siteauth a, groupuser b
WHERE a.groupname = b.groupname
AND b.userid = 'xxx'
AND a.siteid = asset.siteid)))
AND ( (assetnum LIKE '5%'));
First run: 24 sec.
Second run: 17 sec.
Third run: 7 sec.
Fourth run: 7 sec.
Tuned by using result cache: 0.003 sec.
Oracle does not cache query results by default, but it does cache the data blocks used by the query. Also, 12c has features like adaptive execution plans and cardinality feedback, which may change the execution plan between executions even if table statistics were not recalculated.
Oracle fetches data from disk into memory. The second time you run the query, the data is found in memory, so no disk reads are necessary, resulting in faster execution.
The database is "warmed up".
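The 0.003-second figure in the question comes from the server result cache, which stores the finished result set rather than data blocks. A minimal sketch, assuming RESULT_CACHE_MODE is at its default of MANUAL so the hint is required:

```sql
-- The RESULT_CACHE hint asks Oracle to cache the final result set
select /*+ RESULT_CACHE */ count(*)
from   asset
where  status not in ('x1', 'x2', 'x3')
and    siteid = 'xxx'
and    assetnum like '5%'
and    exists (select 1
               from   siteauth a, groupuser b
               where  a.groupname = b.groupname
               and    b.userid = 'xxx'
               and    a.siteid = asset.siteid);
```

Subsequent executions return the cached count directly until DML on any referenced table invalidates the cache entry, which is why this approach suits read-mostly tables.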

Oracle performance drop in massive table copy

Situation:
I'm using Oracle 11g R2 to work on two database users.
User U0 = original database with hundreds of tables
User U1 = copy of U0 to be used for simulating U0
To maintain U1, I run a script as below on U1 when simulation starts.
truncate table T1;
truncate table T2;
...
insert into T1 (select * from U0.T1);
insert into T2 (select * from U0.T2);
...
Problem: It ran without problems for a few days, but became slower after weeks.
Sometimes it also stops inserting records, and in those cases it always stops at the same table. However, I don't think the table size is the problem, since that table has fewer than 20,000 records.
I guess this is due to a resource problem on the DBMS side (disk or memory), but I have no idea how to resolve it. I found a similar question, linked below, without an exact procedure to work around the storage problem. Maybe this is simple for DBAs, but unfortunately I'm not qualified for that.
Oracle performance issue with massive inserts and truncates (AWR attached)
Edit: following Jon Heller's comment, I've got query results as follows.
dba_resumable: no records.
gv$sql: 5-6 records returned, but the insert statement is not among them.
The most notable one is "select TIME_WAITED_MICRO from V$SYSTEM_EVENT where event = 'Shared IO Pool Memory'". I guess it is due to insufficient memory.
report_sql_monitor: every sql_id returns "SQL Monitoring Report" with no additional information.
Edit 2: Please disregard the edit above. The insert statement did appear in the gv$sql query, and the SQL monitor result is in the attached picture.
Edit 3: This time the SQL monitor returned activity detail for the same insert statement.
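One common mitigation worth trying here (a sketch, not a diagnosis of the specific hang): truncate already resets the high-water mark, so combining it with direct-path inserts keeps the copy out of the buffer cache and generates minimal undo, which often helps exactly this truncate-and-reload pattern:

```sql
truncate table T1;
-- APPEND requests a direct-path insert: blocks are written above the
-- high-water mark, bypassing the buffer cache
insert /*+ APPEND */ into T1 select * from U0.T1;
-- commit is required before the session can read T1 again
-- after a direct-path insert (otherwise ORA-12838)
commit;
```

Direct-path inserts do not maintain the table block by block, so repeating this per table also avoids the gradual buffer-cache and undo pressure a long conventional-path script can build up.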

How to keep cursors in v$sql_plan alive longer

I'm trying to analyse a query execution plan in my Oracle database. I have set
alter system set statistics_level = all;
so that I can compare estimated cardinalities and times with the actual ones. Now, I'm running this statement in order to display that information:
select * from table(dbms_xplan.display_cursor(
sql_id => '6dt9vvx9gmd1x',
cursor_child_no => 2,
FORMAT => 'ALLSTATS LAST'));
But I keep getting this message
NOTE: cannot fetch plan for SQL_ID: 6dt9vvx9gmd1x, CHILD_NUMBER: 2
Please verify value of SQL_ID and CHILD_NUMBER;
It could also be that the plan is no longer in cursor cache (check
v$sql_plan)
The CHILD_NUMBER was correct when the query was being executed. Also, when I run dbms_xplan.display_cursor at the same time as the query, I get the actual plan. But my JDBC connection closes the PreparedStatement immediately after execution, so maybe that's why the execution plan disappears from v$sql_plan.
Am I getting something wrong, or how can I analyse estimated/actual values after execution?
You could always pin the cursor, which is new in 11g:
dbms_shared_pool.keep ('[address, hash_value from v$open_cursor]', 'C');
Increase the shared_pool_size to create more caching space for cursors.
If on 11g, capture the SQL plan in a baseline using optimizer_capture_sql_plan_baselines. This stores the plans in dba_sql_plan_baselines.
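Putting the pinning suggestion together, a sketch for this particular sql_id might look as follows. DBMS_SHARED_POOL may need to be installed via dbmspool.sql and requires DBA-level privileges, and the SELECT INTO assumes a single-instance database where the sql_id has one parent cursor:

```sql
declare
  l_name varchar2(64);
begin
  -- build the 'address,hash_value' string keep() expects
  select rawtohex(address) || ',' || hash_value
  into   l_name
  from   v$sqlarea
  where  sql_id = '6dt9vvx9gmd1x';

  -- 'C' flags the object as a cursor; it will no longer
  -- be aged out of the shared pool
  dbms_shared_pool.keep(l_name, 'C');
end;
/
```

Pinning keeps the child cursors (and thus their rows in v$sql_plan) resident, so dbms_xplan.display_cursor can still find the plan after the JDBC PreparedStatement has been closed.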
