Functionality of the TABLE clause in Oracle

I'm struggling to understand what the TABLE clause does. Per the Oracle docs:
it transforms a collection, such as a nested table, into a table that can be used in an SQL statement.
That seems clear enough, but I don't know how it works in practice.
These are the relevant types and tables:
-- movie_type and person_type are defined elsewhere
create type movies_type as table of ref movie_type;
create type actor_type under person_type
(
  starring movies_type
) final;
create table actor2 of actor_type
nested table starring store as starring_nt;
I want to list actors and the movies they starred in. This works:
select firstname, lastname, value(b).title
from actor2 a, table(a.starring) b;
but I don't understand why. Why isn't
actor2 a, table(a.starring) b
a Cartesian product?
Also, why does value(b) work here? Since it's a table of refs, I'd expect to need deref, but that doesn't work.
My question is:
why does this query work as intended? I would expect it to list every actor with every movie (a Cartesian product), since there are no join conditions. And why does value(b) work here, given that it's a table of refs and deref doesn't?
I don't have a mental model for Oracle SQL yet; help on how to learn it properly is very much appreciated.
Thank you very much.

It's not a Cartesian product because table(a.starring) is correlated with a: for each row of a, it expands that row's starring nested table, so each actor is joined only to the movies in their own collection.
This is not a very common way of modelling data in Oracle. Usually you would use a junction table to get a properly normalised model, which is generally much easier to query and allows for better performance.
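As a sketch of the junction-table alternative the answer mentions (table and column names here are illustrative, not taken from the question):

```sql
-- Hypothetical normalised model: plain tables plus a junction table
create table actor (
  actor_id   number primary key,
  firstname  varchar2(50),
  lastname   varchar2(50)
);

create table movie (
  movie_id  number primary key,
  title     varchar2(100)
);

-- Junction table: one row per (actor, movie) pairing
create table actor_movie (
  actor_id  number references actor(actor_id),
  movie_id  number references movie(movie_id),
  primary key (actor_id, movie_id)
);

-- The same "actors and the movies they starred in" query,
-- with an explicit join condition instead of a correlated collection
select a.firstname, a.lastname, m.title
from actor a
join actor_movie am on am.actor_id = a.actor_id
join movie m on m.movie_id = am.movie_id;
```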


PL/SQL: Looping through a list string

Please forgive me if I open a new thread about looping in PL/SQL, but after reading dozens of existing ones I'm still not able to do what I'd like to.
I need to run a complex query on a view of a table, and the only way to shorten the running time is to filter through a WHERE clause based on a variable on which the table is indexed (otherwise the system ends up doing a full scan of the table, which runs endlessly).
The variable the table is indexed on is store_id (a string).
I can retrieve all the store_id values I want to query from a separate table:
e.g. select distinct store_id from store_anagraphy
Then I'd like to make a loop that iterates queries with the store_id values identified above,
e.g. select *complex query from view_of_sales where store_id = 'xxxxxx'
and append (union) all the results returned by each of these queries.
Thank you very much in advance.
Gianluca
In theory, you could write a pipelined table function that runs multiple queries in a loop and makes a series of PIPE ROW calls to return the results. That would be pretty unusual, but it could be done.
It would be far, far more common, however, to simply combine the two queries and run a single query that returns all the rows you want:
select something
from your_view
where store_id in (select distinct store_id
from store_anagraphy)
If you are saying that you have tried this query and Oracle is choosing to do a table scan rather than using the index then what you really have is a tuning problem. Most likely, statistics on one or more objects are inaccurate which leads Oracle to expect that this query would return more rows than it really will thus favoring the table scan. You should be able to fix that by fixing the statistics on the objects. In a pinch, you could also use hints to force an index to be used.
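For completeness, a minimal sketch of the pipelined-table-function approach mentioned above (the row type, function name, and columns are assumptions, since the actual complex query isn't shown):

```sql
-- Hypothetical row/collection types matching what the complex query returns
create type sales_row as object (store_id varchar2(20), amount number);
/
create type sales_tab as table of sales_row;
/
create or replace function sales_by_stores
  return sales_tab pipelined
as
begin
  -- One query per store_id, results piped back row by row
  for s in (select distinct store_id from store_anagraphy) loop
    for r in (select store_id, amount
                from view_of_sales
               where store_id = s.store_id) loop
      pipe row (sales_row(r.store_id, r.amount));
    end loop;
  end loop;
  return;
end;
/
-- Usage:
-- select * from table(sales_by_stores);
```

Again, the single combined query above is almost always the better option; this shape is only worth it when each per-store query genuinely must run separately.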

Why Access and Filter Predicates are the same here?

When I get the autotrace output of the query using Oracle SQL Developer, I see that the join condition is used for both the access and the filter predicates. My question is: does it read all the department_ids from DEPT_ID_PK and then use these IDs to access and filter the employees table? If so, why does the employees table get a full table scan? Why does it read the employees table again using the department_ids of the departments table? Could anyone please walk through this execution plan step by step and explain why the access and filter predicates are used here?
Best Regards
It is a merge join (a bit like a hash join). A merge join is used when the projections of the joined tables are sorted on the join columns; merge joins can be faster and use less memory than hash joins.
So Oracle does a full table scan of the outer table (EMPLOYEES) and then reads the inner table in sorted order.
The filter predicate is the column on which the projection is done.
More details: https://datacadamia.com/db/oracle/merge_join
It uses the primary key to avoid sorting; otherwise the plan would include explicit sort steps.
The distinction between "Access predicates" and "Filter predicates" is not particularly consistent, so take them with a healthy amount of skepticism. For example, if you remove the USE_MERGE hint, then there are no Filter Predicates in the plan any more, and the Access Predicates node is relocated under the HASH JOIN node (where it makes more sense for MERGE JOIN as well).
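A sketch of the kind of query under discussion (the original query isn't shown; standard HR-schema table names are assumed), with the USE_MERGE hint the answer refers to:

```sql
-- Hypothetical HR-schema join forcing a merge join via a hint;
-- removing the hint typically lets the optimizer pick a hash join
select /*+ use_merge(e d) */
       e.last_name, d.department_name
  from employees e
  join departments d
    on d.department_id = e.department_id;
```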

Data consistency between Oracle tables

I have one big table, A, which has PK (C1, C2, C3) and many other columns. To make selects faster, a smaller table B was created with PK (C1, C2), so we can do a select joining the two tables to find a row in A.
But the problem is that if the data in B is corrupted, a joined select can return nothing even though we still have a row in A.
Am I doing something wrong with this design, and how can I ensure the data in those two tables stays consistent?
Thanks a lot.
The standard way, if those tables are in a master-detail relationship, is to create a foreign key constraint, which will prevent deleting the master while details exist.
If you can fix the data now, do it, then create the constraint.
If you can't, then create the foreign key constraint with ENABLE NOVALIDATE (optionally INITIALLY DEFERRED DEFERRABLE) so that current values aren't checked, but future DML will be.
Finally, to fetch data even though certain rows don't exist any more, use an outer join.
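A sketch of that constraint, assuming B is the master (its PK is (C1, C2)) and A is the detail (constraint name is made up):

```sql
-- Existing rows are not validated (NOVALIDATE),
-- but any future insert/update on A must match a row in B
alter table A add constraint a_b_fk
  foreign key (C1, C2) references B (C1, C2)
  enable novalidate;
```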
"Am I doing something wrong with this design"
Well, it's hard to be sure without more details about your scenario, but probably you just needed a non-unique index on A(C1, C2).
Although I would like to see some benchmarking proving that an index range scan on your primary key index was not up to the job, especially as it seems likely the join on table B is using that access path.
Performance tuning an Oracle database is a matter of understanding and juggling many variables. It's not just a case of "bung on another index". We need to understand what the database is actually doing and why the optimiser made that choice. So, please read this post on asking Oracle tuning questions which will give you some insight into how to approach query optimisation.
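For reference, the non-unique index suggested above would look like this (index name is made up); note the answer's own caveat that the existing PK index on (C1, C2, C3) may already serve range scans on this prefix:

```sql
-- Non-unique index on the leading PK columns of A
create index a_c1_c2_ix on A (C1, C2);
```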

In Oracle I want to create a "routing interface" which insert into separate tables based on parameter

I need to find a solution to the following problem: there should be a common and single "interface" that I can use in an INSERT INTO statement, something like this: insert into INTERFACE (fields) select ...
But there are many tables with the same structure behind the interface, and it should decide, based on a list of values (coming in a field), where to put the data. The tables are partitioned by range interval (daily) right now.
I was thinking about having a composite-partitioned table which cannot be SELECT-ed directly, to avoid mixing different types of data in a single select query, with views created on top of it. In this case the table should be partitioned like this: partition by list FIELD, subpartition by range interval. But Oracle 12 does not support this.
Any idea how to solve this? (There is a reason why I need a single interface and why I have to store the data separately.)
Thank you in advance!
The INSERT ALL syntax can easily route data to specific tables based on conditions:
create table interface1(a number, b number);
create table interface2(a number, b number);

-- Rows where a <= 1 go to interface1, everything else to interface2
insert all
  when a <= 1 then
    into interface1
  else
    into interface2
select 1 a, 2 b from dual;

indexed view vs temp table to improve performance of a seldom executed query

I have a slow query whose structure is:
select
  fields
from
  table
  join manytables
  join (select fields from tables) as V1 on V1.field = ....
  join (select fields1 from othertables) as V2
  join (select fields2 from moretables) as V3
The SELECT subqueries in the last 3 joins are relatively simple, but joins against them take time. If they were physical tables it would be much better.
So I found out that I could turn the subqueries into indexed views or into temp tables.
By temp table I do not mean a table that is rewritten hourly, as explained here,
but a temp table that is created before the query execution.
Now my doubt comes from the fact that indexed views are OK in data warehouses, since maintaining them impacts performance; this db is not a data warehouse but the production db of a non-data-intensive application.
But in my case the above query is not executed often, even though the underlying tables (the tables whose data would become part of the indexed view) are used more often.
In this case, is it OK to use indexed views? Or should I favor a temp table?
Also, a table variable with the PRIMARY KEY keyword is an alternative.
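A minimal sketch of the temp-table variant (SQL Server syntax, matching the indexed-view/table-variable terminology used here; all table and column names are made up, since the question elides them):

```sql
-- Materialise one subquery into a local temp table before the big query
select fields1
into #v2
from othertables;

-- Index it on the join column so the subsequent join can seek
create clustered index ix_v2 on #v2 (field1);

-- Then join against the temp table instead of the inline subquery
select t.fields
from maintable t
join #v2 v2 on v2.field1 = t.field1;
```

The temp table only lives for the session, so it avoids the permanent maintenance cost that an indexed view would impose on every write to the underlying tables.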