Why does Oracle change the rowid with fetch? - oracle

I have a query like this:
select w.rowid, w.waclogin
from tableA w, tableB wa, tableC a
where wa.alucod = a.alucod
and w.waclogin = wa.waclogin
and a.cpf = '31808013875'
and rownum <= 1;
The results are:
ROWID WACLOGIN
AAA0CEAHSAABE07ABA 31808013875
But when I use fetch (for performance) the rowid returned is different:
select w.rowid, w.waclogin
from tableA w, tableB wa, tableC a
where wa.alucod = a.alucod
and w.waclogin = wa.waclogin
and a.cpf = '31808013875'
fetch first row only;
Results in:
ROWID WACLOGIN
AAA0DMAHaAAA+ZcAAX 31808013875
Why does fetch change the rowid?
This makes no sense to me.
Update
When fetch is used, the rowid returned is from tableB instead of tableA.

There are two rows in tableA with the same waclogin value (but obviously different rowid values). Neither of your queries specifies an order by, so which of those rows is returned is arbitrary. Presumably a slightly different query plan is being used for each query, so each one returns a different arbitrary row. Of course, tomorrow either or both queries could start returning a different arbitrary row if the query plan or the physical organization of the table changes. If you want the same row to be returned in both cases, you need to make both queries deterministic with an order by clause that uniquely orders the results.
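For example, ordering on something unique before fetching makes the result stable (a minimal sketch; w.rowid here stands in for whatever key uniquely orders your rows):
select w.rowid, w.waclogin
from tableA w, tableB wa, tableC a
where wa.alucod = a.alucod
and w.waclogin = wa.waclogin
and a.cpf = '31808013875'
order by w.rowid
fetch first 1 row only;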


Consecutive JOIN and aliases: order of execution

I am trying to use FULLTEXT search as a preliminary filter before fetching data from another table. Consecutive JOINs follow to further refine the query and to mix-and-match rows (in reality there are up to 6 JOINs of the main table).
The first "filter" returns the IDs of the rows that are useful, so after joining I have a subset to continue with. My issue is performance, however, and my lack of understanding of how the SQL query is executed in SQLite.
SELECT *
FROM mytbl AS t1
JOIN (SELECT someid
      FROM myftstbl
      WHERE myftstbl MATCH 'MATCHME') AS prior
  ON t1.someid = prior.someid
 AND t1.othercol = 'somevalue'
JOIN mytbl AS t2
  ON t2.someid = prior.someid
/* Or is this faster? t2.someid = t1.someid */
My thought process for the query above is that first, we retrieve the matched IDs from the myftstbl table and use those to JOIN on the main table t1 to get a sub-selection. Then we again JOIN a duplicate of the main table as t2. The part that I am unsure of is which approach would be faster: using the IDs from the matches, or from t2?
In other words: when I refer to t1.someid inside the second JOIN, does it contain only the someids left after the first JOIN (so only those at the intersection of prior and the rows for which t1.othercol = 'somevalue'), OR does it contain all the original someids of the whole original table?
You can assume that all columns are indexed. In fact, when I use one or the other approach, I find with EXPLAIN QUERY PLAN that different indices are being used for each query. So there must be a difference between the two.
The query should be simplified to
SELECT *
FROM mytbl AS t1
JOIN myftstbl USING (someid) -- or ON t1.someid = myftstbl.someid
JOIN mytbl AS t2 USING (someid) -- or ON t1.someid = t2.someid
WHERE myftstbl.{???} MATCH 'MATCHME' -- replace {???} with correct column name
AND t1.othercol = 'somevalue'
PS. The query logic is not clear to me, so it is kept as-is.
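To see which join condition the planner actually prefers, you can run EXPLAIN QUERY PLAN against each variant of the query and compare (a sketch; {???} is still the placeholder column from above):
EXPLAIN QUERY PLAN
SELECT *
FROM mytbl AS t1
JOIN myftstbl USING (someid)
JOIN mytbl AS t2 USING (someid)
WHERE myftstbl.{???} MATCH 'MATCHME'
AND t1.othercol = 'somevalue';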

Query to get Unique Indexes having NOT NULL columns - Oracle

Currently I am trying to find all the unique indexes defined on a table whose columns are all NOT NULL, for an Oracle database. What I mean by that is, Oracle allows creating unique indexes on columns that are defined as nullable.
So if my table has two unique indexes, I want to retrieve only the unique index in which every column has a NOT NULL constraint.
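For example, a unique index on a nullable column tolerates multiple rows whose key is entirely NULL (a minimal sketch with hypothetical names):
create table demo_t (col1 number);
create unique index demo_t_ux on demo_t (col1);
-- Both inserts succeed: entirely-NULL keys are not stored in a B-tree index,
-- so they never collide with each other.
insert into demo_t values (null);
insert into demo_t values (null);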
I did come up with this query:
select ind.index_name, ind_col.column_name, ind.index_type, ind.uniqueness
from sys.dba_indexes ind
inner join sys.dba_ind_columns ind_col on ind.owner = ind_col.index_owner and ind.index_name = ind_col.index_name
where ind.owner in ('ISADRM') and ind.table_name in ('TH_RHELOR') and ind.uniqueness IN ('UNIQUE')
The above query gives me all the unique indexes with their associated columns, but I am not sure how to join it with ALL_TAB_COLS, which holds the nullability data for all the columns of a table.
I tried joining that view with the indexes and tried a subquery as well, but I am not getting the right results.
Could you please suggest how to approach this?
Analytic functions and inline views can help.
The analytic functions let you return detailed data but also create a summary on that data, based on separate windows. The detailed results include index owner, index name, and column name, but the counts are only per index owner and index name.
The first inline view joins the three tables, returns the detailed information, and has analytic functions to generate the count of all columns and the count of all nullable columns. The second inline view only selects rows where those two counts are equal.
--Unique indexes and columns where every column is NOT NULL.
select owner, index_name, column_name
from
(
--All relevant columns and counts of columns and not null columns.
select
dba_indexes.owner,
dba_indexes.index_name,
dba_tab_columns.column_name,
dba_tab_columns.nullable,
count(*) over (partition by dba_indexes.owner, dba_indexes.index_name) total_columns,
sum(case when nullable = 'N' then 1 else 0 end)
over (partition by dba_indexes.owner, dba_indexes.index_name) total_not_null_columns
from dba_indexes
join dba_ind_columns
on dba_indexes.owner = dba_ind_columns.index_owner
and dba_indexes.index_name = dba_ind_columns.index_name
join dba_tab_columns
on dba_ind_columns.table_owner = dba_tab_columns.owner
and dba_ind_columns.table_name = dba_tab_columns.table_name
and dba_ind_columns.column_name = dba_tab_columns.column_name
where dba_indexes.owner = user
and dba_indexes.uniqueness = 'UNIQUE'
)
where total_columns = total_not_null_columns
order by 1,2,3;
Analytic functions and inline views are tricky but they're very powerful once you learn how to use them.

Insert Statement Returns ORA-01427 Error While Trying To Insert From Multiple Tables

I have this table F_Flight which I am trying to insert into from 3 different tables. The first, fourth and fifth columns come from the same table, and the second and third columns from two other tables. When I execute the code, I get a "single-row subquery returns more than one row" error.
insert when 1 = 1 then
into F_Flight (planeid, groupid, dateid, flightduration, kmsflown)
values (planeid,
        (select b.groupid from BridgeTable b
         where exists (select p.p1id from PilotKeyLookup p where b.pilotid = p.p1id)),
        (select dd.id from D_Date dd
         where exists (select p.launchtime from PilotKeyLookup p where dd."Date" = p.launchtime)),
        flightduration, kmsflown)
select * from PilotKeyLookup p;
Your subqueries get multiple rows back, which is exactly what the error message says. Nothing correlates the subqueries with the row currently being inserted, so each subquery returns every matching row instead of a single value.
This can be done as a much simpler insert...select with joins, something like:
insert into f_flight (planeid, groupid, dateid, flightduration, kmsflown)
select pkl.planeid, bt.groupid, dd.id, pkl.flightduration, pkl.kmsflown
from pilotkeylookup pkl
join bridgetable bt on bt.pilotid = pkl.p1id
join d_date dd on dd."Date" = pkl.launchtime;
This joins the main PilotKeyLookup table to the other two on the keys you used in your subqueries.
Storing an ID value instead of an actual date is unusual, and if launchtime has a time component - which seems likely from the name - and your d_date entries are just dates (i.e. all with time at midnight) then you won't find matches; you might need to do:
join d_date dd on dd."Date" = trunc(pkl.launchtime);
It also seems like this could be a view, as you're storing duplicate data - everything in f_flight could, obviously, be found from the other tables.
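A minimal sketch of that alternative (f_flight_v is a hypothetical name; the joins are the same as above):
create or replace view f_flight_v (planeid, groupid, dateid, flightduration, kmsflown) as
select pkl.planeid, bt.groupid, dd.id, pkl.flightduration, pkl.kmsflown
from pilotkeylookup pkl
join bridgetable bt on bt.pilotid = pkl.p1id
join d_date dd on dd."Date" = pkl.launchtime;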

Worse query plan with a JOIN after ANALYZE

I see that running ANALYZE results in significantly worse performance on a particular JOIN I'm making between two tables.
Suppose the following schema:
CREATE TABLE a ( id INTEGER PRIMARY KEY, name TEXT );
CREATE TABLE b ( a NOT NULL REFERENCES a, text TEXT, value INTEGER, PRIMARY KEY(a, text) );
CREATE VIEW ab AS
SELECT a.name, b.text, MAX(b.value)
FROM a
JOIN b ON b.a = a.id
GROUP BY a.id
ORDER BY a.name;
Table a is approximately 10K rows, table b is approximately 48K rows (~5 rows per row in table a).
Before ANALYZE
Now when I run the following query:
SELECT * FROM ab;
The query plan looks as follows:
1|0|0|SCAN TABLE b
1|1|1|SEARCH TABLE a USING INTEGER PRIMARY KEY (rowid=?)
This is a good plan, b is larger and I want it to be in the outer loop, making use of the index in table a. It finishes well within a second.
After ANALYZE
When I execute the same query again, the query plan results in two table scans:
1|0|1|SCAN TABLE a
1|1|0|SCAN TABLE b
This is far from optimal. For some reason the query planner thinks that an outer loop of 10K rows and an inner loop of 48K rows is a better fit. This takes about 1.5 minutes to complete.
Should I adapt the index on table b to make it work after ANALYZE? Anything else to change in the indexing/schema?
I'm just trying to understand the problem here. I worked around it using a CROSS JOIN, but that feels dirty and I don't really understand why the planner would go with a plan that is orders of magnitude slower than the un-analyzed plan. It seems to be related to the GROUP BY, since the query planner puts table b in the outer loop without it (but that renders the query useless for what I want).
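For reference, the CROSS JOIN workaround looks like this (a sketch; SQLite treats CROSS JOIN as a directive to keep the left table in the outer loop):
CREATE VIEW ab AS
SELECT a.name, b.text, MAX(b.value)
FROM b CROSS JOIN a ON a.id = b.a
GROUP BY a.id
ORDER BY a.name;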
I accidentally found the answer by adjusting the GROUP BY clause in the view definition. Instead of grouping on a.id, I group on b.a, although they have the same values.
CREATE VIEW ab AS
SELECT a.name, b.text, MAX(b.value)
FROM a
JOIN b ON b.a = a.id
GROUP BY b.a -- <== changed this from a.id to b.a
ORDER BY a.name;
I'm still not entirely sure what the difference is, since it groups the same data.
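To see what actually changed, comparing the plans of the two view definitions (run once with each) should show the difference:
EXPLAIN QUERY PLAN SELECT * FROM ab;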

Hint Oracle to use indexes on the subquery -- Oracle SQL

I have a query as follows:
select *
from ( select id, sum(amt) amt
       from table_t
       group by id ) t
inner join table_v v on v.id = t.id
order by t.amt desc;
table_t has no index and has 738,000 rows and table_v has an index on id and has 158,000 rows.
The query currently fetches the results in 10 seconds.
The explain plan shows a full table scan. How can I improve the performance here?
If I add an index on id to table_t, will it help, given that I am using it in a subquery?
If you have an index on (id, amt) you would minimise the work in the group by/summation step, as the query could read just the index. If both columns are nullable then you may need to add a "where id is not null" so it will use the index. [That's implied by the later join on id, but may not be inferred by the optimizer.]
Next step would be to use a materialized view for the summation, maybe with an index on (amt,id) (which it could use to avoid the sort). But that is refreshed either at a commit or on request or at scheduled intervals. It doesn't help if you need to do this query as part of a transaction.
Both the index and the materialized view would add work to inserts/updates/deletes on the table but save work in this query.
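A minimal sketch of both suggestions (the index and materialized view names are hypothetical):
-- Covering index, so the group by/summation can be satisfied from the index alone:
create index table_t_id_amt_ix on table_t (id, amt);

-- Precomputed summation; with no refresh clause this defaults to refresh on demand:
create materialized view table_t_sum_mv as
select id, sum(amt) amt
from table_t
where id is not null
group by id;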
