CONNECT BY NOCYCLE PRIOR 10G Optimiser Mode - oracle

Question for today: if the RBO is enabled in 10.2.0.3 and one attempts to use a hierarchical query (CONNECT BY PRIOR, for example), does the optimiser get switched to the CBO for execution? I have a large RBO 10gR2 database (don't ask!!); I know the stats are out of date, and the query runs like a dog using CONNECT BY.
In v$sqlarea the OPTIMIZER_MODE is RULE. I know that using LEFT OUTER joins will force the mode from RULE to COST.
Any thoughts?

If my memory is correct, you should be able to force the RBO with:
/*+ RULE */
as an optimizer hint.

I managed to figure out that it was not the CONNECT BY forcing the CBO; it was a RANK() OVER (PARTITION BY ...) in the SELECT clause causing it!
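For anyone hitting the same thing, here is a minimal sketch of the pattern (hypothetical tables and columns borrowed from the HR sample schema): the hierarchical clause on its own can stay under the RBO, but the analytic function in the SELECT list forces the statement to the CBO.
SELECT employee_id,
       manager_id,
       -- This analytic function is what pushes the statement to the CBO;
       -- the CONNECT BY on its own did not.
       RANK() OVER (PARTITION BY department_id ORDER BY salary DESC) AS dept_rank
FROM employees
START WITH manager_id IS NULL
CONNECT BY PRIOR employee_id = manager_id;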

Related

Do the optimizer_use_sql_plan_baselines and resource_manager_cpu_allocation Oracle system parameters have an impact on SQL query performance?

We have two environments, say A and B. On environment A the query runs fine, but on environment B it takes a long time. I compared the system parameters and found differences in the values of optimizer_use_sql_plan_baselines and resource_manager_cpu_allocation.
SQL plan baselines and the resource manager certainly could have a huge impact on performance, and you can use the two queries below to confirm or deny that those parameters are related to your problem.
GV$SQL records which SQL plan baseline is associated with each SQL statement. Compare the SQL_PLAN_BASELINE column from the query below on both environments; if the values are equal then your problem is not related to baselines:
select sql_plan_baseline, round(elapsed_time/1000000) elapsed_seconds, gv$sql.*
from gv$sql
order by elapsed_time desc;
The Active Session History (ASH) views can tell you if the resource manager is an issue. If your queries are being throttled then you will see an event
named "resmgr:cpu quantum" in the below query. (But pay attention to the counts - don't troubleshoot a wait event if it only happens a small number of times.)
select nvl(event, 'CPU') event, count(*)
from gv$active_session_history
group by event
order by count(*) desc;
Resource manager can have other potentially negative effects. If you're in a data warehouse and using parallel queries, it's possible that the resource manager has downgraded the queries on one system. If you're using parallel queries, try comparing the SQL monitoring reports from both systems:
select dbms_sqltune.report_sql_monitor(sql_id => '&YOUR_SQL_ID') from dual;
However, I have a feeling that you're using the wrong approach for your problem. There are generally two approaches to Oracle database performance - database tuning and query tuning. If you're only interested in a single query, then you should probably focus on things like the execution plan and the wait events for the operations of that specific query.
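If you do take the query-tuning route, a common starting point is to pull the actual execution plan from the cursor cache. This is a sketch: &YOUR_SQL_ID is a placeholder, and the 'ALLSTATS LAST' format only shows per-operation row counts if the query ran with statistics_level = all or a /*+ gather_plan_statistics */ hint.
select *
from table(dbms_xplan.display_cursor('&YOUR_SQL_ID', null, 'ALLSTATS LAST'));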

Is it possible to skip a particular SQL statement in the auto SQL tuning advisor job

I am facing an ORA-07445 issue with the auto SQL tuning advisor. The advisor keeps failing with
ORA-07445
while it tries to tune a particular SQL statement.
Is there any way to skip this SQL statement in the auto SQL tuning advisor job?
The simplest way to avoid the Automatic SQL Tuning Advisor may be to convert the query into a form that is not supported by the program.
According to the "Automatic SQL Tuning" chapter of the "Database Performance Tuning Guide":
The database ignores recursive SQL and statements that have been tuned
recently (in the last month), parallel queries, DML, DDL, and SQL
statements with performance problems caused by concurrency issues.
If the query select * from dba_objects was causing problems, try re-writing it like this:
select * from dba_objects
union all
--This query block only exists to avoid the Automatic SQL Tuning Advisor.
select /*+ parallel(dba_objects 2) */ * from dba_objects
where 1=0;
It is now a "parallel query" although it will not truly run in parallel because of the 1=0. I haven't tested this, and I imagine it will be difficult for you to test, because you'll need to flush the existing AWR data to prevent the errors.
This is one of the reasons why I usually disable the Automatic SQL Tuning Advisor. I like the idea of it, but in practice I've literally never seen the tuning advisor provide useful information. All it has ever done for me is generate alerts.
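If you also want to disable it, on 11g and later the task can be switched off with DBMS_AUTO_TASK_ADMIN; a minimal sketch (passing NULL for the operation and window disables it everywhere):
BEGIN
   DBMS_AUTO_TASK_ADMIN.DISABLE(
      client_name => 'sql tuning advisor',
      operation   => NULL,  -- NULL = all operations
      window_name => NULL); -- NULL = all maintenance windows
END;
/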
In theory, the package DBMS_AUTO_SQLTUNE contains the parameters BASIC_FILTER, OBJECT_FILTER, and PLAN_FILTER. I assume one of those could be useful, but I don't think they are implemented yet. I can't find any references to them on Google or My Oracle Support, and when I entered random text for the values there were no errors.
Ideally we would look up every ORA-00600 and ORA-07445 error, create an SR, and fix the underlying problem. But who has time for that? When you encounter a database "bug", the best solution is usually to avoid it as quickly as possible.

Do SQL Server and Oracle discard unnecessary clauses in a query?

In practice, are the following SQL queries equally efficient?
select * from aTable where myColumn LIKE '%'
select * from aTable
I know that in principle the first query implies a table/index scan whereas the second doesn't, but I am wondering whether SQL Server and Oracle these days (SS 2008 onwards, and Oracle 10g onwards) are intelligent enough to discard or ignore the redundant WHERE clause.
The reason I ask is that I am looking at an application that dynamically generates SQL queries, and it inserts clauses such as myColumn LIKE '%'. I wonder if I should spend scarce time worrying about that.
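One way to check on a given system (a sketch, assuming aTable and myColumn exist) is to compare the plans, in particular their Predicate Information sections:
EXPLAIN PLAN FOR select * from aTable where myColumn LIKE '%';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
EXPLAIN PLAN FOR select * from aTable;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
Note that the two queries are only strictly equivalent when myColumn is NOT NULL, since LIKE '%' rejects NULLs; the optimizer therefore cannot always discard the predicate outright.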

Same query, slow on Oracle 9, fast on Oracle 10

We have a table called t_reading, with the following schema:
MEAS_ASS_ID NUMBER(12,0)
READ_DATE DATE
READ_TIME VARCHAR2(5 BYTE)
NUMERIC_VAL NUMBER
CHANGE_REASON VARCHAR2(240 BYTE)
OLD_IND NUMBER(1,0)
This table is indexed as follows:
CREATE INDEX RED_X4 ON T_READING
(
"OLD_IND",
"READ_DATE" DESC,
"MEAS_ASS_ID",
"READ_TIME"
)
This exact table (with the same data) exists on two servers, the only difference is the Oracle version installed on each one.
The query in question is:
SELECT * FROM t_reading WHERE OLD_IND = 0 AND MEAS_ASS_ID IN (5022, 5003) AND read_date BETWEEN to_date('30/10/2012', 'dd/mm/yyyy') AND to_date('31/10/2012', 'dd/mm/yyyy');
This query executes in less than a second on Oracle 10, and around a minute in Oracle 9.
Are we missing something?
EDIT:
Execution plans for Oracle 9 and Oracle 10 were attached as screenshots (not reproduced here).
"Are we missing something?"
Almost certainly, but it's difficult for us to tell you what.
There were some performance improvements in the CBO from 9i to 10g but it's unlikely to make that much difference. So it must be some variation in your systems, which is obviously the hardest thing for us to diagnose, blind and remote as we are.
So, the first things to rule out are general system differences - disk speeds, I/O bottlenecks, memory sizing, etc. You say you have two servers; do they have different specs? Whilst it will require assistance from a sysadmin type to investigate these things, we can discount them with a single question: is it just this query, or can you reproduce this effect with many different queries?
If it is just the query, there are at least three possible explanations.
One is data distribution. How was the data populated in the two databases? If the 10g data was exported from the 9i database, was it sorted in some fashion? Even if it wasn't, it is possible that the ETL process has compacted and organised the data and built the fresh indexes in a way which improves the access times.
Another is statistics. Are the 10g statistics fresh and realistic, whilst the 9i statistics are stale and misleading?
A third possibility is a stored execution plan. (You have posted a query with literals; this only applies to queries with bind variables.) Searches on date ranges are notoriously hard to tune. A date range of to_date('30/10/2012', 'dd/mm/yyyy') AND to_date('31/10/2012', 'dd/mm/yyyy') suits one sort of plan, whereas a date range of to_date('01/01/2010', 'dd/mm/yyyy') AND to_date('31/10/2012', 'dd/mm/yyyy') may well suit a different approach. If the extant plan on the 9i database suits a broader range, then queries for a narrow range may take a long time.
While I've been typing this you have published the explain plans. The killer detail is at the bottom of the 9i plan:
Note: rule-based optimization
You haven't got any stats for the table or the index, so the optimizer is applying the dumb defaults of the RBO. You should really address this, but it's not a simple task. You may need to gather stats for all your tables. You may need to change the OPTIMIZER_MODE in the init.ora file. You may need to undertake a regression test of all the queries on your database. So, it's not something you should do lightly.
In the meantime, if this query is bugging you, you'll need to wrangle the Rule-Based Optimizer the old-fashioned way.
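For example, one old-fashioned trick (a sketch only; untested against this table) is to disable an unwanted index in the SQL itself by wrapping the indexed column in a do-nothing expression, so the RBO has to drive off a different access path:
SELECT *
FROM t_reading
WHERE old_ind + 0 = 0 -- the "+ 0" stops the RBO from using an index on OLD_IND
AND meas_ass_id IN (5022, 5003)
AND read_date BETWEEN to_date('30/10/2012', 'dd/mm/yyyy')
                  AND to_date('31/10/2012', 'dd/mm/yyyy');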
A couple of potential explanations:
You're range scanning different indexes.
Even assuming you've got the same index on your 10g table and have just called it a different name, the explain plans are different.
The main worry I would have is the lack of information in the rows, bytes and cost columns of the explain plan for your 9i query. Oracle 9i does not collect statistics by default, and this detail would indicate that you have not collected statistics on this table. Use dbms_stats to gather statistics on your table and the indexes, specifically the procedure gather_table_stats:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS (
    ownname          => user,
    tabname          => 'T_READING',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL INDEXED COLUMNS',
    cascade          => TRUE -- gather index statistics as well
  );
END;
/
There are plenty of other options if you're interested. Assuming the indexes are different this may help the CBO (assuming it's "turned on") to pick the correct index.
The other options include what server they are on and what the database parameters are. If they're on different servers then the relative "power", disk-speed, I/O and a never-ending list of other options could easily cause a difference. If the database parameters are different then you have the same problem.
Database tuning is an art as much as a science. Oracle has a whole book on it and there are plenty of other resources out there.
A few observations:
your index is a DESCENDING index. This is a function-based index and, as such, it won't work as expected under the RULE optimizer.
your 9i plan shows access only on OLD_IND; your 10g plan (you cut off the important predicate bits) shows a range scan + inlist iterator, so depending on that RED_PK, it may be accessing on MEAS_ASS_ID, which is perhaps more selective.
in terms of indexing too, to answer your query (OLD_IND equality, MEAS_ASS_ID equality, and read_date range-scanned), a better index is (OLD_IND, MEAS_ASS_ID, READ_DATE): do the range scan last to cut down on I/O (see the sketch below).
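A minimal sketch of that suggested index (the name red_x5 is hypothetical):
CREATE INDEX red_x5 ON t_reading
(
  old_ind,     -- equality predicate
  meas_ass_id, -- equality predicate
  read_date    -- range predicate last, to cut down on I/O
);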
Have you tried running an explain on the queries on the two servers? The query optimiser for 9i is different from the one for 10g; the 10g query optimiser is much faster and parallelised. Check out the following link: Upgrading Query Optimiser.
EXPLAIN PLAN FOR
SELECT * FROM t_reading WHERE OLD_IND = 0 AND MEAS_ASS_ID IN (5022, 5003) AND read_date BETWEEN to_date('30/10/2012', 'dd/mm/yyyy') AND to_date('31/10/2012', 'dd/mm/yyyy');
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Securing Oracle distributed transactions against network failures

I am synchronizing a table in a local database with data from a table on a database on the opposite side of the earth using distributed transactions.
The networks are connected through vpn over the internet.
Most of the time it works fine, but when the connection is disrupted during an active transaction, a lock is preventing the job from running again.
I cannot kill the locking session. Trying to do so just returns "ORA-00031: session marked for kill", and the session is not actually killed until I bounce the local database.
The sync job is basically:
DECLARE
  CURSOR TRANS_CURSOR IS
    SELECT COL_A, COL_B, COL_C
    FROM REMOTE_MASTERTABLE@MY_LINK
    WHERE UPDATED IS NULL;
BEGIN
  FOR TRANS IN TRANS_CURSOR LOOP
    INSERT INTO LOCAL_MASTERTABLE (COL_A, COL_B, COL_C)
    VALUES (TRANS.COL_A, TRANS.COL_B, TRANS.COL_C);
    INSERT INTO LOCAL_DETAILSTABLE (COL_A, COL_D, COL_E)
      SELECT COL_A, COL_D, COL_E
      FROM REMOTE_DETAILSTABLE@MY_LINK
      WHERE COL_A = TRANS.COL_A;
    UPDATE REMOTE_MASTERTABLE@MY_LINK SET UPDATED = 1 WHERE COL_A = TRANS.COL_A;
  END LOOP;
END;
/
Any ideas to make this sync operation more tolerant to network dropouts would be greatly appreciated.
I use Oracle Standard Edition One, so no Enterprise features are available.
TIA
Søren
First off, do you really need to roll your own replication solution? Oracle provides technologies like Streams that are designed to allow you to replicate data changes from one system to another reliably without depending on the database link being always available. That also minimizes the amount of code you have to write and the amount of maintenance you have to perform.
Assuming that your application does need to be configured this way, Oracle will have to use the two-phase commit protocol to ensure that the distributed transaction happens atomically. It sounds like transactions are being left in an in-doubt state. You should be able to see information about in-doubt transactions in the DBA_2PC_PENDING view. You should then be able to manually handle that in-doubt transaction.
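For example (a sketch; the transaction id shown is a placeholder taken from the LOCAL_TRAN_ID column):
SELECT local_tran_id, state, fail_time
FROM dba_2pc_pending;
-- then resolve the in-doubt transaction manually:
-- COMMIT FORCE '1.23.456'; or ROLLBACK FORCE '1.23.456';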
You may want to use bulk processing instead of looping. Bulk DML can often give huge performance gains, and if there is a large amount of network lag then the difference may be dramatic if Oracle is retrieving one row at a time. Decreasing the run time won't fix the error, but it should help avoid it. (Although Oracle may already be doing this optimization behind the scenes.)
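A minimal sketch of what a bulk version of the master-table copy could look like, assuming the same table and column names as the question:
DECLARE
  TYPE COL_A_TAB IS TABLE OF LOCAL_MASTERTABLE.COL_A%TYPE;
  TYPE COL_B_TAB IS TABLE OF LOCAL_MASTERTABLE.COL_B%TYPE;
  TYPE COL_C_TAB IS TABLE OF LOCAL_MASTERTABLE.COL_C%TYPE;
  L_A COL_A_TAB;
  L_B COL_B_TAB;
  L_C COL_C_TAB;
BEGIN
  -- One round trip over the link for all pending rows, instead of one per row.
  SELECT COL_A, COL_B, COL_C
  BULK COLLECT INTO L_A, L_B, L_C
  FROM REMOTE_MASTERTABLE@MY_LINK
  WHERE UPDATED IS NULL;
  -- One bulk bind for all of the local inserts.
  FORALL I IN 1 .. L_A.COUNT
    INSERT INTO LOCAL_MASTERTABLE (COL_A, COL_B, COL_C)
    VALUES (L_A(I), L_B(I), L_C(I));
END;
/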
EDIT
Bulk processing might help, but the best solution would probably be to use only SQL statements. I did some testing and the below version ran about 20 times faster than the original. (Although it's difficult to know how closely my sample data and self-referencing database link model your real data.)
BEGIN
  INSERT INTO LOCAL_MASTERTABLE (COL_A, COL_B, COL_C)
    SELECT COL_A, COL_B, COL_C
    FROM REMOTE_MASTERTABLE@MY_LINK
    WHERE UPDATED IS NULL;
  INSERT INTO LOCAL_DETAILSTABLE (COL_A, COL_D, COL_E)
    SELECT REMOTE_DETAILSTABLE.COL_A, REMOTE_DETAILSTABLE.COL_D, REMOTE_DETAILSTABLE.COL_E
    FROM REMOTE_DETAILSTABLE@MY_LINK
    INNER JOIN (SELECT COL_A FROM REMOTE_MASTERTABLE@MY_LINK WHERE UPDATED IS NULL) TRANS
      ON REMOTE_DETAILSTABLE.COL_A = TRANS.COL_A;
  UPDATE REMOTE_MASTERTABLE@MY_LINK SET UPDATED = 1 WHERE UPDATED IS NULL;
END;
/
