Why does my library cache get locked with parallel queries? - oracle

I am using Oracle 12c and need help understanding the following:
1) Several of my sessions are stuck in a locked state (library cache lock), and they show up as parallel queries, but I have not set any PARALLEL clause in the table's DDL. Only the indexes were created with a PARALLEL clause, and only for the migration.
That clause should apply only to the CREATE, so I don't understand why it also affects DML.
2) Also, if parallelism really were the cause, I would expect to see it in the plan, yet when I run the DML from a SQL editor the execution plan shows nothing parallel.

I finally found the answer here:
http://blog.tanelpoder.com/2007/06/23/a-gotcha-with-parallel-index-builds-parallel-degree-and-query-plans/
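The gotcha, in short: an index created with a PARALLEL clause keeps that parallel degree afterwards, and the optimizer may then choose parallel plans for statements that use the index. A quick way to check and reset it (MY_TABLE and MY_INDEX below are placeholders for your own objects):
select index_name, degree
from user_indexes
where table_name = 'MY_TABLE';

-- reset the degree left behind by the parallel build
alter index my_index noparallel;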

Related

DBeaver - Non-sequential execution when running multiple Oracle inserts

I'm using the latest DBeaver with Oracle 12.
I need to run several INSERTs into different tables that are connected by foreign keys.
When executing multiple Oracle inserts (Alt+X) against several tables, the script failed on a foreign key constraint when it shouldn't have (had the statements run sequentially).
Executing the same SQL in PL/SQL Developer doesn't produce any error (reproducible).
It seems that the inserts aren't executed in sequence.
Can this behavior be changed?
The DBeaver wiki warns about unexpected results:
NOTE: Be careful with this feature. If you execute a huge script with a large number of queries, it might cause unexpected problems.
A solution found in the discussions is to wrap the inserts in a PL/SQL block:
ShadelessFox
It's not possible from a DBeaver perspective, but you can use PL/SQL blocks
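A minimal sketch of that workaround, assuming two placeholder tables PARENT and CHILD linked by a foreign key; the whole block is sent to Oracle as a single call, so the statements run strictly in order:
begin
  -- the parent row must exist before the child row references it
  insert into parent (id, name) values (1, 'P1');
  insert into child (id, parent_id) values (10, 1);
  commit;
end;
/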

Update Oracle statistics while query is running to improve performance?

I've got an Oracle INSERT query that has been running for almost 24 hours.
The SELECT part of the statement had a cost of 211M.
I've now updated the statistics on the source tables, and the cost has come down significantly, to 2M.
Should I stop and restart my INSERT statement, or will the new updated statistics automatically have an effect and start speeding up the performance?
I'm using Oracle 11g.
"Should I stop and restart my INSERT statement, or will the new updated statistics automatically have an effect and start speeding up the performance?"
New statistics will be used the next time the statement is parsed.
So the optimizer cannot revise the execution plan of your running statement based on the newly gathered statistics: the query has already been parsed, and its execution plan has already been chosen. To benefit from the new statistics, you would have to stop and restart the INSERT.
What you can expect from the 12c optimizer is adaptive query optimization: it has the ability to adapt the plan at run time based on actual execution statistics. You can read more about it here: http://docs.oracle.com/database/121/TGSQL/tgsql_optcncpt.htm
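For reference, a minimal sketch of gathering statistics so that dependent cursors are invalidated and re-parsed with the new stats on their next execution (the schema and table names are placeholders):
begin
  dbms_stats.gather_table_stats(
    ownname       => 'SCOTT',
    tabname       => 'SRC_TABLE',
    cascade       => true,     -- gather index statistics too
    no_invalidate => false     -- invalidate cached cursors so the next parse sees the new stats
  );
end;
/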

select condition from cdef$ where rowid=:1 query has a high elapsed time

In a DB trace there is a query taking a long time. Can someone explain what it means? It seems to be a very generic Oracle query, not one involving my custom tables.
select condition from cdef$ where rowid=:1;
I found the same query in multiple places in the trace (.trc) files, and one occurrence among them has a huge elapsed time. So, what is the solution to stop it taking so long? I am using Oracle 11g.
You're right, that is an example of Oracle's recursive SQL: the statements Oracle runs against the data dictionary to support our application SQL. That particular statement is the query Oracle runs to fetch the search condition of a CHECK constraint. If you are inserting or updating rows in tables with check constraints, you will see it a lot.
The statement itself shouldn't take long to run, so it is unlikely to be the source of a performance problem, unless you are running lots of insert statements with hard-coded values: Oracle runs that query every time it parses a fresh INSERT or UPDATE statement, and that gets expensive if you're not using bind variables.
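To illustrate (the ORDERS table here is a placeholder):
-- hard-coded literals: each statement text is unique, so each one is
-- hard-parsed, and the recursive cdef$ lookup runs again every time
insert into orders (id, status) values (101, 'NEW');
insert into orders (id, status) values (102, 'NEW');

-- bind variables: the statement text is identical, so it is parsed
-- once and reused from the shared pool
insert into orders (id, status) values (:id, :status);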

Is it possible to know what triggers would fire given a query?

I have a database with (too) many triggers. They can cascade.
I have a query which seems simple, but there is no way I can remember the effect of all the triggers. So that "simple" query might actually not be simple at all, and might not do what I expect.
Is there a way to know what triggers would fire before running the query, or what triggers have fired after running it (not committed yet)?
I am not really interested in queries like SELECT … FROM user_triggers WHERE … because I know them already, and also because they do not tell me whether the firing conditions of the triggers will be met by my query.
Thanks
"I have a database with (too) many triggers. They can cascade."
This is just one of the reasons why many people anathematize triggers.
"Is there a way to know what triggers would fire before running the
query"
No. Let's consider something which you might find in an UPDATE trigger body:
if :new.sal > :old.sal * 1.2 then
    insert into big_pay_rises values (:new.empno, :old.sal, :new.sal, sysdate);
end if;
How could we tell whether the trigger on BIG_PAY_RISES will fire? It might, it might not, depending on run-time values we cannot determine just by parsing the DML statement.
So, the best you can hope for is a recursive search of DBA_TRIGGERS and DBA_DEPENDENCIES to identify all the triggers which might feature in your cascade. But it's going to be impossible to identify which ones will definitely fire in any given scenario.
" or what triggers have fired after running it (not committed yet)?"
As others have pointed out, logging is one option. But if you are using Oracle 11g you have another option: the PL/SQL Hierarchical Profiler. This is a non-intrusive tool which tracks all the PL/SQL program units touched by a PL/SQL call, including triggers. One of its cool features is that it includes program units belonging to other schemas, which can be useful with cascading triggers.
So you just need to wrap your SQL in an anonymous block and run it under the Hierarchical Profiler. Then you can filter your report to reveal only the triggers which fired.
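A minimal sketch of that approach, assuming the profiler tables from dbmshptab.sql are installed, a directory object PROF_DIR exists, and you have execute rights on DBMS_HPROF; the UPDATE stands in for your own query:
begin
  dbms_hprof.start_profiling(location => 'PROF_DIR', filename => 'triggers.trc');
  update emp set sal = sal * 1.3 where empno = 7839;  -- the statement under test
  dbms_hprof.stop_profiling;
  rollback;  -- nothing committed yet
end;
/
-- then load the raw profile and look for the trigger entries, e.g.:
-- select dbms_hprof.analyze('PROF_DIR', 'triggers.trc') from dual;
-- select * from dbmshp_function_info where type = 'TRIGGER';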
Is there a way to know what triggers would fire before running the query, or what triggers have fired after running it (not committed yet)?
To address this I would run the query inside an anonymous block using a PL/SQL debugger.
There is no tool that will parse your query and give you the triggers involved. It can be as simple as this: pick the table names from the query you are running, and for each one list its triggers with the following query before you run yours. Isn't that simple enough?
select trigger_name
, trigger_type
, status
from dba_triggers
where owner = '&owner'
and table_name = '&table'
order by status, trigger_name;

Avoid locks in Oracle UPDATE command

If I am trying to acquire a lock in Oracle 10g (e.g. with SELECT ... FOR UPDATE), there is a NOWAIT option to get an error when the row is locked, instead of the query just hanging. Is there a way to achieve this for a simple UPDATE statement? There is a DDL_LOCK_TIMEOUT parameter in Oracle 11g; I would need something similar for DML operations (and in 10g).
(Background: I have some unit tests which query the database (which is unfortunately not an isolated test database, but a development DB used for various things), and I want them to throw an error instantly instead of hanging when anything goes wrong.)
No. There is no way to make a simple UPDATE statement in Oracle time out if another session has locked the row it is trying to update. You could, of course, code your unit tests to do a SELECT ... FOR UPDATE WAIT <<n>> before doing the UPDATE; that would ensure that by the time you reach the UPDATE, you are guaranteed to already hold the lock.
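A minimal sketch of that pattern (the ACCOUNTS table and its columns are placeholders); ORA-30006 is raised if the lock cannot be acquired within the wait period:
declare
  l_balance accounts.balance%type;
  row_locked exception;
  pragma exception_init(row_locked, -30006);  -- ORA-30006: resource busy; acquire with WAIT timeout expired
begin
  select balance into l_balance
    from accounts
   where id = 42
     for update wait 5;         -- give up after 5 seconds instead of hanging

  update accounts
     set balance = l_balance + 100
   where id = 42;               -- the row lock is already held here
exception
  when row_locked then
    raise_application_error(-20001, 'row is locked by another session');
end;
/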
I'm also a bit confused by the idea that you'd be running unit tests against rows that other sessions are modifying at the same time you are. That would seem to defeat the purpose of having unit tests since it would never be clear whether a test failed because the code did something wrong or because some other session modified the data in an unexpected way during the test.
