How to optimise an insert into ... select query? - Oracle

I have the insert query below:
Insert into MAP_REL_ATTRIBUTE_PHYID
(MAP_RELNAME, MAP_ATTRNAME,MAP_ATTRVALUE,Source_PHYID,MAP_REL_PHYID)
Select a.map_relname,a.MAP_ATTRNAME,a.MAP_ATTRVALUE,a.id,b.ID
from key_attribute a ,
target_attribute b,
map_rel_phyid c
where a.id = c.Source_phyid
and b.id=c.map_rel_phyid
and a.map_relname = 'Connected By'
and a.map_attrname= b.attr_name
and dbms_lob.compare(a.MAP_ATTRVALUE,b.ATTR_VALUE)=0
For DDL and sample data, please check here.
This select query returns around 20 million rows, so inserting the data into the table takes an extremely long time. I'm trying to optimise this query. I'm new to Oracle. Based on the suggestions I found, there are two possible ways to optimise it:
Creating indexes, although I'm not sure which columns to index or what kind of indexes to create.
Using bulk processing with BULK COLLECT and FORALL.
I don't know whether the above approaches are correct. Can somebody please advise me on this? And let me know if there is any other way to improve the performance.
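For reference, this is roughly what the BULK COLLECT / FORALL approach would look like, as a sketch only, using the tables and columns from the statement above; the LIMIT of 10000 and the per-batch commit are arbitrary choices:
DECLARE
  CURSOR src_cur IS
    SELECT a.map_relname, a.map_attrname, a.map_attrvalue, a.id, b.id
    FROM   key_attribute a, target_attribute b, map_rel_phyid c
    WHERE  a.id = c.source_phyid
    AND    b.id = c.map_rel_phyid
    AND    a.map_relname = 'Connected By'
    AND    a.map_attrname = b.attr_name
    AND    dbms_lob.compare(a.map_attrvalue, b.attr_value) = 0;

  -- one collection per selected column, anchored to the source column types
  TYPE relname_tab  IS TABLE OF key_attribute.map_relname%TYPE;
  TYPE attrname_tab IS TABLE OF key_attribute.map_attrname%TYPE;
  TYPE attrval_tab  IS TABLE OF key_attribute.map_attrvalue%TYPE;
  TYPE src_id_tab   IS TABLE OF key_attribute.id%TYPE;
  TYPE tgt_id_tab   IS TABLE OF target_attribute.id%TYPE;

  l_relname  relname_tab;
  l_attrname attrname_tab;
  l_attrval  attrval_tab;
  l_src_id   src_id_tab;
  l_tgt_id   tgt_id_tab;
BEGIN
  OPEN src_cur;
  LOOP
    FETCH src_cur BULK COLLECT
      INTO l_relname, l_attrname, l_attrval, l_src_id, l_tgt_id
      LIMIT 10000;                      -- arbitrary batch size
    EXIT WHEN l_relname.COUNT = 0;

    FORALL i IN 1 .. l_relname.COUNT
      INSERT INTO map_rel_attribute_phyid
        (map_relname, map_attrname, map_attrvalue, source_phyid, map_rel_phyid)
      VALUES
        (l_relname(i), l_attrname(i), l_attrval(i), l_src_id(i), l_tgt_id(i));

    COMMIT;                             -- per-batch commit; adjust to your needs
  END LOOP;
  CLOSE src_cur;
END;
/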

1) Use the APPEND hint for the insert.
2) Do not use any indexes if you select ALL rows from the table.
3) Use parallel hints for both the insert and the select (make sure that parallel DML is enabled first):
alter session enable parallel dml;
Insert /*+ APPEND PARALLEL(4) */ into MAP_REL_ATTRIBUTE_PHYID
(MAP_RELNAME, MAP_ATTRNAME,MAP_ATTRVALUE,Source_PHYID,MAP_REL_PHYID)
Select /*+ PARALLEL(4) USE_HASH(a b c) */ a.map_relname,a.MAP_ATTRNAME,a.MAP_ATTRVALUE,a.id,b.ID
from key_attribute a ,
target_attribute b,
map_rel_phyid c
where a.id = c.Source_phyid
and b.id=c.map_rel_phyid
and a.map_relname = 'Connected By'
and a.map_attrname= b.attr_name
and dbms_lob.compare(a.MAP_ATTRVALUE,b.ATTR_VALUE)=0;

Related

How to determine if an Index is required for my Oracle query

I would like to know if an index is required, or would help, to run the query below. I don't have any idea how to analyse this question.
If someone can help, please do. Thanks.
WITH C(A0_ID, A1_ID, A1_Col0)
AS (
SELECT
Table_1.ID AS A0_ID,
Table_2.ID AS A1_ID,
Table_2.Col0 AS A1_Col0
FROM Table_1 ,Table_2
WHERE Table_2.ID = Table_1.ID
AND Table_1.col1 = ?
AND BITAND(Table_1.col2, ?) <> ?
AND Table_2.col3 IN (?,?,?)
), T(A0_ID, A1_ID, A1_Col0) AS (
SELECT
A0_ID,
A1_ID,
A1_Col0
FROM C
WHERE A1_ID = ?
UNION ALL
SELECT
C.A0_ID,
C.A1_ID,
C.A1_Col0
from C
INNER JOIN T P ON C.A1_Col0 = P.A1_ID
) SELECT A0_ID, A1_ID, A1_Col0 FROM T
The main query selects from T with no post-processing (filtering, aggregation, sorting, etc.), so it doesn't require optimization.
T is a recursive CTE based on the subquery C. Therefore, T doesn't need optimization (unless you materialized it, but that's a different story).
Now, C can be optimized:
I would consider Table_1 the driving table since it has an equality in the filtering criteria. It also uses ID to join against Table_2. Therefore a good index for it is:
create index ix1 on Table_1 (col1, ID);
Then, to access Table_2 you'll go through ID, which should be the leading index column. You may add col3 to the index to somewhat improve the performance of the query; only a benchmark will tell whether this is a wise idea. The index could look like:
create index ix2 on Table_2 (ID, col3); -- col3 is optional here
I would recommend you create these indexes and compare the performance that each option produces.
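One way to compare is to check the plan Oracle picks after creating each index, a sketch only; EXPLAIN PLAN needs a parsable statement, so substitute representative literal values for the ? placeholders:
EXPLAIN PLAN FOR
SELECT Table_1.ID, Table_2.ID, Table_2.Col0
FROM Table_1, Table_2
WHERE Table_2.ID = Table_1.ID
AND Table_1.col1 = 1             -- placeholder values below
AND BITAND(Table_1.col2, 2) <> 0
AND Table_2.col3 IN (3, 4, 5);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);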

Oracle: function based index using dynamic values

I have a complex SQL query. One of its simpler parts looks like this:
Query 1:
SELECT *
FROM table1 t1, table2 t2
WHERE t1.number = t2.number
AND UPPER(t1.name) = UPPER(t2.name)
AND t1.prefix = p_in_prefix;
Query 2:
SELECT *
FROM table1 t1, table2 t2
WHERE t1.number = t2.number
AND UPPER(t1.name) = UPPER(p_in_prefix || t2.name)
AND t1.prefix = p_in_prefix;
I have a function-based index on table1 on (number, UPPER(name)), and a function-based index on table2 on (number, UPPER(NAME)). p_in_prefix is an input parameter (basically a number).
Because of these indexes, Query 1 runs efficiently. But Query 2 has a performance issue, because in Query 2 t2.name is prefixed with p_in_prefix.
I cannot create a function-based index for Query 2, because p_in_prefix is an input parameter and I don't know, while creating the index, what values it might hold. How can I resolve the performance issue in this scenario? Any hint/idea would be appreciated. If you require more information, please let me know.
Thanks.
Use AND UPPER(t1.name) = UPPER(p_in_prefix) || UPPER(t2.name).
As you have a function-based index on UPPER(NAME) of table2, the query needs an operand with exactly that expression in order to make use of the function-based index.
Using UPPER(p_in_prefix || t2.name) will not use the function-based index, as it does not match the indexed expression UPPER(NAME). Note that using UPPER(t2.name) does not cause any problems, as t2 is just a table alias.
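Applied to Query 2 from the question, the rewrite looks like this (UPPER distributes over concatenation, so the result is unchanged):
SELECT *
FROM table1 t1, table2 t2
WHERE t1.number = t2.number
AND UPPER(t1.name) = UPPER(p_in_prefix) || UPPER(t2.name)
AND t1.prefix = p_in_prefix;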
Along with this, you can also pass an optimizer hint in your query in order to instruct the optimizer to use the index.
For more information, read "Oracle Database 11g SQL" by Jason Price, as well as the Oracle documentation on function-based indexes and on optimizer hints.

prevent full table scan

I have the following query:
select id,
c1,
c2,
c3
from tbl t1
join
(select id
from tbl t2
where upper(replace(c5, ' ', '')) like upper(?)
) j
on j.id = t1.id
? is some wildcard parameter string like %test%.
The c5 column has an index on the function used to access it:
create index tbl_c5_idx on tbl(upper(replace(c5, ' ', '')))
When I run just the inner query it uses tbl_c5_idx; however, when I run the whole query it turns into a full table scan, which is much slower.
Is there any way to avoid the full table scan? Hints, or rewriting the join condition? I cannot rewrite the whole query, as the inner query is constructed dynamically depending on the input conditions.
A very basic example to test this functionality:
create table test(id number,value varchar2(200));
insert into test values(1,'gaurav is bad guy');
insert into test values(2,'gaurav is good guy');
SELECT *
FROM test
WHERE UPPER (REPLACE (VALUE, ' ', '')) LIKE UPPER ('%gauravisbad%');
Before creating the index, this does a full table scan for the obvious reason that no index exists yet.
create index tbl_c5_idx on test(upper(replace(value, ' ', '')));
The reason I am asking you to avoid the inner join on the same table is that you are using the table twice: once to get your records via the filter condition, where your index is used, and then again to join on id, which prevents the index from being used because you don't have an index on the id column. This can be done with a simple filter condition instead.
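A sketch of that simplified form, using the columns from the question, with no self-join:
select id,
       c1,
       c2,
       c3
from tbl
where upper(replace(c5, ' ', '')) like upper(?)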
Please let me know if you are still seeing the same full-table-scan issue, or if you are not getting the same results from this query.
If you run the subquery only, it doesn't use the id column in the filters the way the parent query does, so the index can be used. In the parent query you use id as well, which prevents the index from being used. Maybe adding an index on (id, upper(replace(c5, ' ', ''))) would solve the problem.
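That composite index would look something like this (the index name is only illustrative):
create index tbl_id_c5_idx on tbl (id, upper(replace(c5, ' ', '')));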
Gaurav Soni is right: you don't need a subquery to achieve your goal.
Always check actual performance rather than just the explain plan. Performance might be worse with your hint than without it. Oracle is NOT stupid.
It seems I found a solution, or at least something that helps.
I used an index hint, so access is done via tbl_c5_idx.
This is how the final query looks now:
select /*+ index(t1) */ id,
c1,
c2,
c3
from tbl t1
join
(select id
from tbl t2
where upper(replace(c5, ' ', '')) like upper(?)
) j
on j.id = t1.id

Hint Oracle to use indexes on the subquery -- Oracle SQL

I have a query as follows
select *
from
( select id,sum(amt) amt from table_t group by id
) t inner join table_v v on (v.id = t.id)
order by t.amt desc;
table_t has no index and has 738,000 rows and table_v has an index on id and has 158,000 rows.
The query currently fetches the results in 10 seconds.
The explain plan shows a full table scan. How can I improve the performance here?
If I add an index on id for table_t, will it help, given that I am using it in a subquery?
If you have an index on (id,amt) you would minimise the work in the group by/summation process (as it could read the index). If both columns are nullable then you may need to add a "where id is not null" so it will use the index. [That's implied by the later join on id, but may not get inferred by the optimizer.]
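A sketch of that index and the explicit NOT NULL filter (the index name is illustrative):
create index table_t_id_amt_ix on table_t (id, amt);

select id, sum(amt) amt
from table_t
where id is not null   -- only needed if id is nullable; lets the index cover the query
group by id;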
Next step would be to use a materialized view for the summation, maybe with an index on (amt,id) (which it could use to avoid the sort). But that is refreshed either at a commit or on request or at scheduled intervals. It doesn't help if you need to do this query as part of a transaction.
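A rough sketch of such a materialized view; the name and the refresh settings are assumptions, so pick whatever refresh strategy fits your requirements:
create materialized view table_t_sum_mv
  build immediate
  refresh complete on demand
as
select id, sum(amt) amt
from table_t
group by id;

create index table_t_sum_mv_ix on table_t_sum_mv (amt, id);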
Both the index and the materialized view would add work to inserts/updates/deletes on the table but save work in this query.

Forcing an index in oracle

I have a query joining lots of fields. For some strange reason, the index for one table is not being used at all (I clearly use the indexed key); instead, it is doing a FULL table scan. I would like to force the index. We used to use optimizer hints in Sybase. Is there a similar hint available in Oracle?
For example, in Sybase, to join tables a, b and c and use myindex on table a, I would do:
SELECT a.*
FROM a(INDEX myindex),
b,
c
WHERE a.field1 = b.field1
AND b.field1 = c.field1
The question is: how do I do this in Oracle?
Thanks
Saro
Yes, there is a hint like that in Oracle. It looks something like this:
select /*+ index(a my_index) */ * from a
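Applied to the joined query from the question, it would look something like this (the table and index names are the ones from the question):
SELECT /*+ INDEX(a myindex) */ a.*
FROM a, b, c
WHERE a.field1 = b.field1
AND b.field1 = c.field1;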
