I have a query like this:
SELECT * FROM TEST1 LEFT OUTER JOIN TEST2 on TEST1.ID=TEST2.ID
UNION ALL
SELECT * FROM TEST3 LEFT OUTER JOIN TEST4 on TEST3.ID=TEST4.ID;
The behavior I see is that it first joins TEST1 and TEST2 (billions of rows) and stores the output in the temp tablespace. Then it joins TEST3 and TEST4 and saves that output in the same temp segment. Finally, it selects the records from there to display the result.
I see this behavior in both Redshift and Oracle. I was wondering why it stores the result in temporary segments after the first SELECT produces its result. It is time-consuming and eats up temp space. Can't it just start displaying the result as soon as the first SELECT finishes and then move on to the second one, instead of storing everything?
This answer is somewhat speculative, because I don't have an Oracle doc reference. Imagine that you had instead wanted to run the following query:
SELECT * FROM TEST1 JOIN TEST2 ON TEST1.ID = TEST2.ID
UNION ALL
SELECT * FROM TEST3 JOIN TEST4 ON TEST3.ID = TEST4.ID
ORDER BY some_col;
It should be clear that to apply an operation such as ORDER BY across the entire result, all the records returned by the union query need to be in one logical place. A temp table would seem to work.
The fact that you are not using ORDER BY does not appear to change the workflow Oracle uses.
I can also offer another reason why Oracle insists on using a temp table here. Suppose it were possible to write both halves of the union directly to a memory buffer. What would happen if, at a later date, the total size of the union result suddenly exceeded what that buffer can hold? The operation would fail. So using a temp table is a safe bet that should always work.
How do you observe this behaviour? Do you by any chance perform an INSERT or CREATE TABLE AS SELECT? That would explain your observation, because then all rows are required at the end.
You could also observe it if your client is configured to fetch all rows.
But in the normal case, where the client is interested only in the first few rows, Oracle quickly returns the first available (array-size) rows from the first join and ignores the second one.
You can perform this little thought experiment:
create table test1 as
select rownum id,
lpad('x',1023,'X') pad
from dual connect by level <= 1000000;
Create tables TEST2 through TEST4 analogously.
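For example (a sketch that simply clones the first table):
create table test2 as select * from test1;
create table test3 as select * from test1;
create table test4 as select * from test1;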
Now run your query (adapted to valid syntax)
SELECT * FROM TEST1 CROSS JOIN TEST2
UNION ALL
SELECT * FROM TEST3 CROSS JOIN TEST4;
For me this returns the first page in SQL Developer in about 30 seconds, which somewhat disproves your claim.
Simply calculate the TEMP space required for two 10**6 x 10**6 cartesian joins with a row length of 1 KB: roughly 10**12 rows x 1 KB each, on the order of a petabyte per join - far above my TEMP configuration.
One possible way to observe what Oracle is actually doing is to run the query with the /*+ gather_plan_statistics */ hint.
Then get the SQL_ID of the statement and check the actual rows (A-Rows) in the plan.
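If you do not have the SQL_ID at hand, one way to look it up (a sketch, assuming the statement is still in the shared pool) is:
select sql_id, sql_text
  from v$sql
 where sql_text like 'SELECT /*+ gather_plan_statistics */%';
With the SQL_ID in hand, display the actual statistics of the cursor: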
select * from table(dbms_xplan.display_cursor('a9y62gxagups6',null,'ALLSTATS LAST'));
SQL_ID a9y62gxagups6, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ * FROM TEST1 CROSS JOIN TEST2
UNION ALL SELECT * FROM TEST3 CROSS JOIN TEST4
Plan hash value: 1763392637
--------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | Writes | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 50 |00:00:28.52 | 166K| 166K| 142K| | | |
| 1 | UNION-ALL | | 1 | | 50 |00:00:28.52 | 166K| 166K| 142K| | | |
| 2 | MERGE JOIN CARTESIAN| | 1 | 1000G| 50 |00:00:28.52 | 166K| 166K| 142K| | | |
| 3 | TABLE ACCESS FULL | TEST1 | 1 | 1000K| 1 |00:00:00.02 | 4 | 28 | 0 | | | |
| 4 | BUFFER SORT | | 1 | 1000K| 50 |00:00:28.49 | 166K| 166K| 142K| 1255M| 11M| 97M (0)|
| 5 | TABLE ACCESS FULL | TEST2 | 1 | 1000K| 1000K|00:00:03.66 | 166K| 166K| 0 | | | |
| 6 | MERGE JOIN CARTESIAN| | 0 | 1000G| 0 |00:00:00.01 | 0 | 0 | 0 | | | |
| 7 | TABLE ACCESS FULL | TEST3 | 0 | 1000K| 0 |00:00:00.01 | 0 | 0 | 0 | | | |
| 8 | BUFFER SORT | | 0 | 1000K| 0 |00:00:00.01 | 0 | 0 | 0 | 1103M| 10M| |
| 9 | TABLE ACCESS FULL | TEST4 | 0 | 1000K| 0 |00:00:00.01 | 0 | 0 | 0 | | | |
--------------------------------------------------------------------------------------------------------------------------------------
You can see that Oracle
1) full scanned TEST2 (row 5),
2) got one row from TEST1 (row 3),
3) returned the first 50 rows (row 0), and
4) left TEST3 and TEST4 untouched (rows 7 and 9).
You can simply adapt the example to your original join to see similar results.
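For instance, the same hint applied to the outer-join query from the question would look like this (a sketch using the question's table and column names):
SELECT /*+ gather_plan_statistics */ *
FROM TEST1 LEFT OUTER JOIN TEST2 ON TEST1.ID = TEST2.ID
UNION ALL
SELECT * FROM TEST3 LEFT OUTER JOIN TEST4 ON TEST3.ID = TEST4.ID;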
I'm trying to get some precise row counts for all tables, given that some have deleted rows. I have been using sys.storage.count. But this seems to count the deleted ones also.
I assume using sys.storage would be simpler and faster than looping through count(*) queries, though both strategies may be fine in practice.
Maybe there is some column that counts modifications so I could just subtract the two counts?
If all you need to know is the number of actual rows in a table, I'd recommend just using a count(*) query. It's very fast. Even if you have N tables, it's easy to do a count(*) for each table.
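For instance, a plain count per table (the table names here are just placeholders):
select 'tbl'  as table_name, count(*) as row_count from tbl
union all
select 'tbl2' as table_name, count(*) as row_count from tbl2;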
sys.storage gives you information from the raw storage. With it you can get pretty low-level information, but it has some rough edges. sys.storage.count returns the count in the storage, so it does indeed include the deleted rows, since they are not physically removed yet. As of the Jul2021 version of MonetDB, deleted rows are automatically overwritten by new inserts (i.e. auto-vacuuming). So, to get the actual row count, you need to look up the 'deletes' from sys.deltas('<schema>', '<table>'). For instance:
sql>create table tbl (id int, city string);
operation successful
sql>insert into tbl values (1, 'London'), (2, 'Paris'), (3, 'Barcelona');
3 affected rows
sql>select * from tbl;
+------+-----------+
| id | city |
+======+===========+
| 1 | London |
| 2 | Paris |
| 3 | Barcelona |
+------+-----------+
3 tuples
sql>select schema, table, column, count from sys.storage where table='tbl';
+--------+-------+--------+-------+
| schema | table | column | count |
+========+=======+========+=======+
| sys | tbl | city | 3 |
| sys | tbl | id | 3 |
+--------+-------+--------+-------+
2 tuples
sql>select id, deletes from sys.deltas ('sys', 'tbl');
+-------+---------+
| id | deletes |
+=======+=========+
| 15569 | 0 |
| 15570 | 0 |
+-------+---------+
2 tuples
After we delete one row, the actual row count is sys.storage.count - sys.deltas ('sys', 'tbl').deletes:
sql>delete from tbl where id = 2;
1 affected row
sql>select * from tbl;
+------+-----------+
| id | city |
+======+===========+
| 1 | London |
| 3 | Barcelona |
+------+-----------+
2 tuples
sql>select schema, table, column, count from sys.storage where table='tbl';
+--------+-------+--------+-------+
| schema | table | column | count |
+========+=======+========+=======+
| sys | tbl | city | 3 |
| sys | tbl | id | 3 |
+--------+-------+--------+-------+
2 tuples
sql>select id, deletes from sys.deltas ('sys', 'tbl');
+-------+---------+
| id | deletes |
+=======+=========+
| 15569 | 1 |
| 15570 | 1 |
+-------+---------+
2 tuples
After we insert a new row, the deleted row is overwritten:
sql>insert into tbl values (4, 'Praag');
1 affected row
sql>select * from tbl;
+------+-----------+
| id | city |
+======+===========+
| 1 | London |
| 4 | Praag |
| 3 | Barcelona |
+------+-----------+
3 tuples
sql>select schema, table, column, count from sys.storage where table='tbl';
+--------+-------+--------+-------+
| schema | table | column | count |
+========+=======+========+=======+
| sys | tbl | city | 3 |
| sys | tbl | id | 3 |
+--------+-------+--------+-------+
2 tuples
sql>select id, deletes from sys.deltas ('sys', 'tbl');
+-------+---------+
| id | deletes |
+=======+=========+
| 15569 | 0 |
| 15570 | 0 |
+-------+---------+
2 tuples
So, the formula to compute the actual row count (sys.storage.count - sys.deltas ('sys', 'tbl').deletes) is generally applicable. sys.deltas() keeps stats for every column of a table, but the count and deletes are table wide, so you only need to check one column.
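Put as a single query, the formula could look roughly like this (a sketch based on the column names shown in the session above; any single column of the table will do, since the numbers are table-wide):
select "count" - deletes as actual_row_count
from sys.storage, sys.deltas('sys', 'tbl')
where "schema" = 'sys' and "table" = 'tbl' and "column" = 'id'
limit 1;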
We were facing some issues with execution plans while accessing remote database objects with dblink. Here is the query itself run on the remote database:
select --+ index_desc (d DAY_OPERATIONAL_PK)
d.oper_day
from day_operational d
where rownum = 1
The plan for this query is the following :
Plan Hash Value : 2761870770
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Time |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 8 | 2 | 00:00:01 |
| * 1 | COUNT STOPKEY | | | | | |
| 2 | INDEX FULL SCAN DESCENDING | DAY_OPERATIONAL_PK | 1 | 8 | 2 | 00:00:01 |
---------------------------------------------------------------------------------------------
This works correctly, that is, it returns the last operational day, in this case 14.09.2021. However, if we execute this exact same query from another database, connecting to this one via a dblink, wrong results are returned: in this case the first row of the table is returned, 05.09.2009.
Here is the query:
select --+ index_desc (d DAY_OPERATIONAL_PK)
d.oper_day
from day_operational@iabs d
where rownum = 1
The plan generated for this query in local database is the following:
Plan Hash Value :
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT REMOTE | | 1 | 8 | 2 | 00:00:01 |
| * 1 | COUNT STOPKEY | | | | | |
| 2 | INDEX FAST FULL SCAN | XPKDAY_OPERATIONAL | 1 | 8 | 2 | 00:00:01 |
---------------------------------------------------------------------------------------
As can be seen, the plan generated when connecting via the dblink uses an INDEX FAST FULL SCAN and ignores the index_desc hint. How can we force Oracle to use the index in descending order? We tried adding the driving_site hint, but it didn't help.
Sorry for the confusion. It turned out that the index names in the local and remote databases were not the same, so the hint was ignored because there was no index with that name in the remote database. With the right index name, both the plan and the result were correct.
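For reference, the corrected query would presumably look something like this (assuming XPKDAY_OPERATIONAL is the index's name on the remote database, as the dblink plan above suggests):
select --+ index_desc (d XPKDAY_OPERATIONAL)
       d.oper_day
  from day_operational@iabs d
 where rownum = 1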
Consider the problem of applying changes to an aggregate table. Rows that already exist must be updated, while new rows must be inserted. My approach was as follows:
Insert all changes in a temporary table (100K at a time)
MERGE the temporary table into the main table (eventually reaching 100s of millions rows)
The SQL (with a SORT MERGE hint) looks as follows (nothing fancy):
merge /*+ USE_MERGE(t s) */
into F_SCREEN_INSTANCE t
using F_SCREEN_INSTANCE_BUF s
on (s.DAY_ID = t.DAY_ID and s.PARTIAL_ID = t.PARTIAL_ID)
when matched then update set
t.ACTIVE_TIME_SUM = t.ACTIVE_TIME_SUM + s.ACTIVE_TIME_SUM,
t.IDLE_TIME_SUM = t.IDLE_TIME_SUM + s.IDLE_TIME_SUM
when not matched then insert values (
s.DAY_ID, s.PARTIAL_ID, s.ID, s.AGENT_USER_ID, s.COMPUTER_ID, s.RAW_APPLICATION_ID, s.APP_USER_ID, s.APPLICATION_ID, s.USER_ID, s.RAW_MODULE_ID, s.MODULE_ID, s.START_TIME, s.RAW_SCREEN_NAME, s.SCREEN_ID, s.SCREEN_TYPE, s.ACTIVE_TIME_SUM, s.IDLE_TIME_SUM)
The F_SCREEN_INSTANCE table has (DAY_ID, PARTIAL_ID) as a primary key and also is IOT (index organized table). This makes it an ideal candidate for a merge join: the rows are physically sorted by the lookup key.
So far so good. I started a benchmark and the initial times looked good, about 10 s for one merge. But after about an hour, the merges were taking about 4 min with heavy temp usage (4 GB per merge). The query plan showed that F_SCREEN_INSTANCE is re-sorted before the merge, even though the table is already ideally sorted. And of course, as the table grows, even more temp space will be needed and the whole approach falls apart.
OK, so why re-sort the table? It turns out to be a limitation of the merge join implementation: the second table is always sorted.
If an index exists, then the database can avoid sorting the first data
set. However, the database always sorts the second data set,
regardless of indexes.
O...K, so can I make the main table come first and the buffer table second? Nope, that's not possible either. No matter how I list the tables in the USE_MERGE hint, the source table always comes first.
Finally, here is my question: Have I missed anything? Is it possible to make this SORT MERGE approach work?
Here are some more details addressing questions you might ask:
What Oracle version? 12c.
Have you tried HASH JOIN? Yes, it's bad, as expected. The main table needs to be scanned in order to build the hash table. It can't scale as F_SCREEN_INSTANCE grows.
Have you tried LOOP JOIN? Yes, it's also bad. Considering the size of the buffer table, 100K lookups into F_SCREEN_INSTANCE take unreasonably long. The merges very quickly degraded to about 3 minutes.
All in all, the MERGE JOIN is conceptually the best access strategy, but the Oracle implementation seems to be severely crippled by re-sorting the target table.
Sort merge outer joins will always put the outer-joined table second regardless of the hints. Adding an extra inner-join allows control of the join order, and then ROWID can be used to join again to the large table. Hopefully two good joins will work better than one bad join.
Assumptions
This answer assumes that the sort merge join is the fastest join, and that the manual is correct that the second data set is always sorted. It would be difficult to test these assumptions without significantly more information about the data.
Sample Schema
Here are some similar tables, with fake statistics to make the optimizer think they have 500M rows and 100K rows.
create table F_SCREEN_INSTANCE(DAY_ID number, PARTIAL_ID number, ID number, AGENT_USER_ID number,COMPUTER_ID number, RAW_APPLICATION_ID number, APP_USER_ID number, APPLICATION_ID number, USER_ID number, RAW_MODULE_ID number,MODULE_ID number, START_TIME date, RAW_SCREEN_NAME varchar2(100), SCREEN_ID number, SCREEN_TYPE number, ACTIVE_TIME_SUM number, IDLE_TIME_SUM number,
constraint f_screen_instance_pk primary key (day_id, partial_id)
) organization index;
create table F_SCREEN_INSTANCE_BUF(DAY_ID number, PARTIAL_ID number, ID number, AGENT_USER_ID number,COMPUTER_ID number, RAW_APPLICATION_ID number, APP_USER_ID number,APPLICATION_ID number, USER_ID number, RAW_MODULE_ID number, MODULE_ID number, START_TIME date, RAW_SCREEN_NAME varchar2(100), SCREEN_ID number, SCREEN_TYPE number, ACTIVE_TIME_SUM number, IDLE_TIME_SUM number,
constraint f_screen_instance_buf_pk primary key (day_id, partial_id)
);
begin
dbms_stats.set_table_stats(user, 'F_SCREEN_INSTANCE', numrows => 500000000);
dbms_stats.set_table_stats(user, 'F_SCREEN_INSTANCE_BUF', numrows => 100000);
end;
/
The Problem
The desired join and join order can be achieved with the LEADING hint when an inner join is used. The smaller table, F_SCREEN_INSTANCE_BUF, is the second table.
explain plan for
select /*+ use_merge(t s) leading(t s) */ *
from f_screen_instance_buf s
join f_screen_instance t
on (s.DAY_ID = t.DAY_ID and s.PARTIAL_ID = t.PARTIAL_ID);
select * from table(dbms_xplan.display(format => '-predicate'));
Plan hash value: 563239985
-----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100K| 19M| | 6898 (66)| 00:00:01 |
| 1 | MERGE JOIN | | 100K| 19M| | 6898 (66)| 00:00:01 |
| 2 | INDEX FULL SCAN | F_SCREEN_INSTANCE_PK | 500M| 46G| | 4504 (100)| 00:00:01 |
| 3 | SORT JOIN | | 100K| 9765K| 26M| 2393 (1)| 00:00:01 |
| 4 | TABLE ACCESS FULL| F_SCREEN_INSTANCE_BUF | 100K| 9765K| | 34 (6)| 00:00:01 |
-----------------------------------------------------------------------------------------------------
The LEADING hint does not work when changing to a left join.
explain plan for
select /*+ use_merge(t s) leading(t s) */ *
from f_screen_instance_buf s
left join f_screen_instance t
on (s.DAY_ID = t.DAY_ID and s.PARTIAL_ID = t.PARTIAL_ID);
select * from table(dbms_xplan.display(format => '-predicate'));
Plan hash value: 1472690071
-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100K| 19M| | 16M (1)| 00:10:34 |
| 1 | MERGE JOIN OUTER | | 100K| 19M| | 16M (1)| 00:10:34 |
| 2 | TABLE ACCESS BY INDEX ROWID| F_SCREEN_INSTANCE_BUF | 100K| 9765K| | 826 (0)| 00:00:01 |
| 3 | INDEX FULL SCAN | F_SCREEN_INSTANCE_BUF_PK | 100K| | | 26 (0)| 00:00:01 |
| 4 | SORT JOIN | | 500M| 46G| 131G| 16M (1)| 00:10:34 |
| 5 | INDEX FAST FULL SCAN | F_SCREEN_INSTANCE_PK | 500M| 46G| | 2703 (100)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------
This limitation is not documented as far as I can tell. I tried using the +outline setting of DBMS_XPLAN to see the full set of hints and then changed them around. But nothing I did could make the join order change for the LEFT JOIN version. Perhaps someone else can get this to work.
select * from table(dbms_xplan.display(format => '-predicate +outline'));
...
Outline Data
-------------
/*+
BEGIN_OUTLINE_DATA
USE_MERGE(#"SEL$0E991E55" "T"#"SEL$1")
LEADING(#"SEL$0E991E55" "S"#"SEL$1" "T"#"SEL$1")
INDEX_FFS(#"SEL$0E991E55" "T"#"SEL$1" ("F_SCREEN_INSTANCE"."DAY_ID" "F_SCREEN_INSTANCE"."PARTIAL_ID"))
INDEX(#"SEL$0E991E55" "S"#"SEL$1" ("F_SCREEN_INSTANCE_BUF"."DAY_ID"
"F_SCREEN_INSTANCE_BUF"."PARTIAL_ID"))
OUTLINE(#"SEL$9EC647DD")
OUTLINE(#"SEL$2")
MERGE(#"SEL$9EC647DD")
OUTLINE_LEAF(#"SEL$0E991E55")
ALL_ROWS
DB_VERSION('12.1.0.1')
OPTIMIZER_FEATURES_ENABLE('12.1.0.1')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
*/
Possible Solution
--#3: Join the large table to the smaller result set. This uses the largest table twice,
--but the plan can use the ROWID for a very quick join.
explain plan for
merge into F_SCREEN_INSTANCE t
using
(
--#2: Now get the missing rows with an outer join. Since the _BUF table is
--small I assume it does not make a big difference exactly how it is joined
--to the 100K result set.
--The hints NO_MERGE and NO_PUSH_PRED are required to keep the INNER_JOIN
--inline view intact.
select /*+ no_merge(inner_join) no_push_pred(inner_join) */ inner_join.*
from f_screen_instance_buf s
left join
(
--#1: Get 100K rows efficiently with an inner join.
--Note that the ROWID is retrieved here.
select /*+ use_merge(t s) leading(t s) */ s.*, s.rowid s_rowid
from f_screen_instance_buf s
join f_screen_instance t
on (s.DAY_ID = t.DAY_ID and s.PARTIAL_ID = t.PARTIAL_ID)
) inner_join
on (s.DAY_ID = inner_join.DAY_ID and s.PARTIAL_ID = inner_join.PARTIAL_ID)
) s
on (s.s_rowid = t.rowid)
when matched then update set
t.ACTIVE_TIME_SUM = t.ACTIVE_TIME_SUM + s.ACTIVE_TIME_SUM,
t.IDLE_TIME_SUM = t.IDLE_TIME_SUM + s.IDLE_TIME_SUM
when not matched then insert values (
s.DAY_ID, s.PARTIAL_ID, s.ID, s.AGENT_USER_ID, s.COMPUTER_ID, s.RAW_APPLICATION_ID, s.APP_USER_ID, s.APPLICATION_ID, s.USER_ID, s.RAW_MODULE_ID, s.MODULE_ID, s.START_TIME, s.RAW_SCREEN_NAME, s.SCREEN_ID, s.SCREEN_TYPE, s.ACTIVE_TIME_SUM, s.IDLE_TIME_SUM);
It ain't pretty, but at least it generates a plan with the large table first in the sort merge join.
select * from table(dbms_xplan.display);
Plan hash value: 1086560566
-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------------
| 0 | MERGE STATEMENT | | 500G| 173T| | 5355K (43)| 00:03:30 |
| 1 | MERGE | F_SCREEN_INSTANCE | | | | | |
| 2 | VIEW | | | | | | |
|* 3 | HASH JOIN OUTER | | 500G| 179T| 29M| 5355K (43)| 00:03:30 |
|* 4 | HASH JOIN OUTER | | 100K| 28M| 3712K| 8663 (53)| 00:00:01 |
| 5 | INDEX FAST FULL SCAN| F_SCREEN_INSTANCE_BUF_PK | 100K| 2539K| | 9 (0)| 00:00:01 |
| 6 | VIEW | | 100K| 25M| | 6898 (66)| 00:00:01 |
| 7 | MERGE JOIN | | 100K| 12M| | 6898 (66)| 00:00:01 |
| 8 | INDEX FULL SCAN | F_SCREEN_INSTANCE_PK | 500M| 12G| | 4504 (100)| 00:00:01 |
|* 9 | SORT JOIN | | 100K| 9765K| 26M| 2393 (1)| 00:00:01 |
| 10 | TABLE ACCESS FULL| F_SCREEN_INSTANCE_BUF | 100K| 9765K| | 34 (6)| 00:00:01 |
| 11 | INDEX FAST FULL SCAN | F_SCREEN_INSTANCE_PK | 500M| 46G| | 2703 (100)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("INNER_JOIN"."S_ROWID"=("T".ROWID(+)))
4 - access("S"."PARTIAL_ID"="INNER_JOIN"."PARTIAL_ID"(+) AND
"S"."DAY_ID"="INNER_JOIN"."DAY_ID"(+))
9 - access("S"."DAY_ID"="T"."DAY_ID" AND "S"."PARTIAL_ID"="T"."PARTIAL_ID")
filter("S"."PARTIAL_ID"="T"."PARTIAL_ID" AND "S"."DAY_ID"="T"."DAY_ID")
Say we have two tables, TEST and TEST_CHILD, created in the following way:
CREATE TABLE TEST(id1 number PRIMARY KEY, word VARCHAR(50), numero number);
CREATE TABLE TEST_CHILD (id2 number references test(id1), word2 VARCHAR(50));
CREATE INDEX TEST_IDX ON TEST_CHILD(word2);
CREATE INDEX TEST_JOIN_IDX ON TEST_CHILD(id2);
insert into TEST SELECT ROWNUM,U1.USERNAME||U2.TABLE_NAME, LENGTH(U1.USERNAME) FROM ALL_USERS U1,ALL_TABLES U2;
INSERT INTO TEST_CHILD SELECT MOD(ROWNUM,15000)+1,U1.USER_ID||U2.TABLE_NAME FROM ALL_USERS U1,ALL_TABLES U2;
We would like to query rows from the TEST table that satisfy some criteria in the child table, so we go for:
SELECT /*+ FIRST_ROWS(10) */ * FROM TEST T WHERE EXISTS (SELECT NULL FROM TEST_CHILD TC WHERE word2 LIKE 'string%' AND TC.id2 = T.id1) AND ROWNUM < 10;
We always want just the first 10 results, nothing more. Therefore, we would like the response time for reading 10 results to be the same whether the child table has 10 matching values or 1,000,000, since in principle it could get 10 distinct ids from the child table and then fetch those values from the parent table (or at least that is the plan we would like). But when checking the actual execution plan we see:
-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 54 | 5 (20)| 00:00:01 |
|* 1 | COUNT STOPKEY | | | | | |
| 2 | NESTED LOOPS | | | | | |
| 3 | NESTED LOOPS | | 1 | 54 | 5 (20)| 00:00:01 |
| 4 | SORT UNIQUE | | 1 | 23 | 3 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| TEST_CHILD | 1 | 23 | 3 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | TEST_IDX | 1 | | 2 (0)| 00:00:01 |
|* 7 | INDEX UNIQUE SCAN | SYS_C005145 | 1 | | 0 (0)| 00:00:01 |
| 8 | TABLE ACCESS BY INDEX ROWID | TEST | 1 | 31 | 1 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWNUM<10)
6 - access("WORD2" LIKE 'string%')
filter("WORD2" LIKE 'string%')
7 - access("TC"."ID"="T"."ID")
There is a SORT UNIQUE under the COUNT STOPKEY, which, as far as I know, means that it reads all matching rows from the child table, de-duplicates them, and only then selects the first 10, making the query not as scalable as we would like.
Is there any mistake in my example?
Is it possible to improve this execution plan so it scales better?
The SORT UNIQUE is going to find and sort all of the records from TEST_CHILD that match 'string%' - it is NOT going to read all rows from the child table. Your logic requires this: if you only picked the first 10 rows from TEST_CHILD that matched 'string%', and those 10 rows all had the same ID, then your final results from TEST would only have 1 row.
Anyway, your performance should be fine as long as 'string%' matches a relatively low number of rows in TEST_CHILD. If your situation is such that 'string%' often matches a HUGE record count in TEST_CHILD, there's not much you can do to make the SQL more performant given the current tables. In such a case, if this is a mission-critical SQL, with performance tied to your annual bonus, there's probably some fancy footwork you could do with MATERIALIZED VIEWs to, e.g., pre-compute 10 TEST rows for high-cardinality WORD2 values in TEST_CHILD.
One final thought - a "risky" solution, but one which should work if you don't have thousands of TEST_CHILD rows matching the same TEST row, would be the following:
SELECT *
FROM TEST
WHERE ID1 IN
(SELECT ID2
FROM TEST_CHILD
WHERE word2 like 'string%'
AND ROWNUM < 1000)
AND ROWNUM <10;
You can adjust 1000 up or down, of course, but if it's too low, you risk finding fewer than 10 distinct ID values, which would give you a final result with fewer than 10 rows.
We're using a PL/SQL table (named pTable) to collect a number of ids to be updated.
However, the statement
UPDATE aTable
SET aColumn = 1
WHERE id IN (SELECT COLUMN_VALUE
FROM TABLE (pTable));
takes a long time to execute.
It seems that the optimizer comes up with a very bad execution plan: instead of using the index defined on id (the primary key), it decides to do a full table scan on aTable. pTable usually contains very few values (in most cases just one).
What can we do to make this faster? The best we've come up with is to handle low pTable.Count (1 and 2) as special cases, but that is certainly not very elegant.
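For illustration, the special-case handling looks roughly like this (just a sketch of the idea, inside the PL/SQL block that already has pTable populated):
if pTable.COUNT = 1 then
  UPDATE aTable SET aColumn = 1 WHERE id = pTable(1);
elsif pTable.COUNT = 2 then
  UPDATE aTable SET aColumn = 1 WHERE id IN (pTable(1), pTable(2));
else
  UPDATE aTable SET aColumn = 1
  WHERE id IN (SELECT COLUMN_VALUE FROM TABLE (pTable));
end if;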
Thanks for all the great suggestions. I wrote about this issue in my blog at http://smartercoding.blogspot.com/2010/01/performance-issues-using-plsql-tables.html.
You can try the cardinality hint. This is good if you know (roughly) the number of rows in the collection.
UPDATE aTable
SET aColumn = 1
WHERE id IN (SELECT /*+ cardinality( pt 10 ) */
COLUMN_VALUE
FROM TABLE (pTable) pt );
Here's another approach. Create a temporary table:
create global temporary table pTempTable ( id int primary key )
on commit delete rows;
To perform the update, populate pTempTable with the contents of pTable and execute:
update
(
select aColumn
from aTable aa join pTempTable pp on aa.id = pp.id
)
set aColumn = 1;
This should perform reasonably well without resorting to optimizer hints.
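For completeness, the populate step can be a simple bulk insert from the collection. Here is a minimal sketch (the collection type and sample ids are made up; the real pTable from the question would be used instead):
declare
  type t_ids is table of number;        -- stand-in for the real pTable type
  pTable t_ids := t_ids(101, 102, 103); -- hypothetical sample ids
begin
  -- copy the collection into the temporary table
  forall i in 1 .. pTable.count
    insert into pTempTable (id) values (pTable(i));
  -- then run the join-based update from above
  update
  ( select aColumn
      from aTable aa join pTempTable pp on aa.id = pp.id )
  set aColumn = 1;
  commit; -- "on commit delete rows" empties pTempTable again
end;
/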
The bad execution plan is probably unavoidable (unfortunately). There is no statistics information for the PL/SQL table, so the optimizer has no way of knowing that there are few rows in it. Is it possible to use hints in an UPDATE? If so, you might force use of the index that way.
It helped to tell the optimizer to use the "correct" index instead of going on a wild full-table scan:
UPDATE /*+ INDEX(aTable PK_aTable) */aTable
SET aColumn = 1
WHERE id IN (SELECT COLUMN_VALUE
FROM TABLE (CAST (pdarllist AS list_of_keys)));
I couldn't apply this solution to more complicated scenarios, but found other workarounds for those.
You could try adding a ROWNUM < ... clause.
In this test a ROWNUM < 30 changes the plan to use an index.
Of course that depends on your set of values having a reasonable maximum size.
create table atable (acolumn number, id number);
insert into atable select rownum, rownum from dual connect by level < 150000;
alter table atable add constraint atab_pk primary key (id);
exec dbms_stats.gather_table_stats(ownname => user, tabname => 'ATABLE');
create type type_coll is table of number(4);
/
declare
v_coll type_coll;
begin
v_coll := type_coll(1,2,3,4);
UPDATE aTable
SET aColumn = 1
WHERE id IN (SELECT COLUMN_VALUE
FROM TABLE (v_coll));
end;
/
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
UPDATE ATABLE SET ACOLUMN = 1 WHERE ID IN (SELECT COLUMN_VALUE FROM TABLE (:B1 ))
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | UPDATE STATEMENT | | | | 142 (100)| |
| 1 | UPDATE | ATABLE | | | | |
|* 2 | HASH JOIN RIGHT SEMI | | 1 | 11 | 142 (8)| 00:00:02 |
| 3 | COLLECTION ITERATOR PICKLER FETCH| | | | | |
| 4 | TABLE ACCESS FULL | ATABLE | 150K| 1325K| 108 (6)| 00:00:02 |
----------------------------------------------------------------------------------------------
declare
v_coll type_coll;
begin
v_coll := type_coll(1,2,3,4);
UPDATE aTable
SET aColumn = 1
WHERE id IN (SELECT COLUMN_VALUE
FROM TABLE (v_coll)
where rownum < 30);
end;
/
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------
UPDATE ATABLE SET ACOLUMN = 1 WHERE ID IN (SELECT COLUMN_VALUE FROM TABLE (:B1 ) WHERE
ROWNUM < 30)
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------
| 0 | UPDATE STATEMENT | | | | 31 (100)| |
| 1 | UPDATE | ATABLE | | | | |
| 2 | NESTED LOOPS | | 1 | 22 | 31 (4)| 00:00:01 |
| 3 | VIEW | VW_NSO_1 | 29 | 377 | 29 (0)| 00:00:01 |
| 4 | SORT UNIQUE | | 1 | 58 | | |
|* 5 | COUNT STOPKEY | | | | | |
| 6 | COLLECTION ITERATOR PICKLER FETCH| | | | | |
|* 7 | INDEX UNIQUE SCAN | ATAB_PK | 1 | 9 | 0 (0)| |
---------------------------------------------------------------------------------------------------
I wonder if the MATERIALIZE hint in the subselect from the PL/SQL table would force a temp table instantiation and help the optimizer?
UPDATE aTable
SET aColumn = 1
WHERE id IN (SELECT /*+ MATERIALIZE */ COLUMN_VALUE
FROM TABLE (pTable));