How to use Oracle Materialized Views in a Dimensional Model

I have a dimensional model with a large fact table (millions of rows) that is range partitioned by date, and smaller dimension tables that are not partitioned. I came across materialized views, which are often used in these scenarios to improve query performance.
Now I want to know which of the following two approaches is the better way to utilize materialized views for aggregated reports.
A. Create one by joining the whole fact table with each of the required dimension tables:
create materialized view my_mview
build immediate
enable query rewrite
as
select
    fact.col1, dim1.col2, dim2.col3, sum(fact.col4)
from
    my_fact fact
    inner join my_dim1 dim1 on fact.dim1_key = dim1.dim1_key
    inner join my_dim2 dim2 on fact.dim2_key = dim2.dim2_key
group by
    fact.col1, dim1.col2, dim2.col3
This seems like the most basic way of using them, but it feels rather limiting: I would need a new materialized view for each variation of the query I want to run.
B. Create it over an aggregation of the fact table alone and rely on query rewrite when joining back to the dimensions:
create materialized view my_mview
build immediate
enable query rewrite
as
select
    col1, dim1.dim2_key, dim2.dim_key, sum(fact.col4)
from
    my_fact fact
Then do the join as in case A, which should use this aggregated materialized view for the join instead of the whole fact table.
Can anyone tell me when I would use one approach rather than the other?

Your first example works exactly as you described.
For the second example the query should be:
create materialized view my_mview
build immediate
enable query rewrite
as
select
    fact.col1, fact.dim2_key, fact.dim_key, sum(fact.col4)
from
    my_fact fact
group by
    fact.col1, fact.dim2_key, fact.dim_key
This will automatically speed up aggregates such as:
select sum(fact.col4)
from my_fact fact;

select fact.dim_key, sum(fact.col4)
from my_fact fact
group by fact.dim_key;

select fact.dim2_key, sum(fact.col4)
from my_fact fact
group by fact.dim2_key;
I don't think Oracle will rewrite your first type of query to this MV automatically, because in the MV the join columns are already grouped (they should also appear in the GROUP BY of your second example); it never happened for us. This may, however, also depend on whether relationships are defined between the dimension and fact tables and on the value of the QUERY_REWRITE_INTEGRITY parameter, so there is still some room for testing here.
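One way to test this is DBMS_MVIEW.EXPLAIN_REWRITE, which reports whether (and why not) a given statement can be rewritten to use the MV. A minimal sketch, assuming REWRITE_TABLE has already been created in your schema by running the utlxrw.sql script:
-- Ask Oracle whether this statement can be rewritten to use MY_MVIEW;
-- the verdict lands in REWRITE_TABLE
begin
  dbms_mview.explain_rewrite(
    query        => 'select fact.col1, sum(fact.col4) from my_fact fact group by fact.col1',
    mv           => 'MY_MVIEW',
    statement_id => 'mv_test_1');
end;
/

select message
from rewrite_table
where statement_id = 'mv_test_1';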
You may still get a performance gain by writing the query in a specific way:
WITH preaggr as (
    select
        fact.col1, fact.dim2_key, fact.dim_key, sum(fact.col4) as col4
    from
        my_fact fact
    group by
        fact.col1, fact.dim2_key, fact.dim_key
)
select
    dim2.col1,
    sum(preaggr.col4)
from
    preaggr
    join my_dim2 dim2 on preaggr.dim2_key = dim2.dim2_key
group by
    dim2.col1

Related

Chained CTEs in Redshift - How do I know which DIST KEY the CTE will inherit?

I have a view in Redshift which consists of lots of CTEs that are joined (chained) to each other. Inside these CTEs there are joins between multiple tables. If I then join to a CTE that itself contains a join of multiple tables, where do the SORT KEY and DIST KEY for that join come from? How does Redshift decide which table in the CTE's join the CTE should inherit its DIST KEY and SORT KEY from, if at all?
For example, tbl1 has a DIST KEY on tbl_key, tbl2 has a DIST KEY on tbl_id, tbl3 has DIST KEY on tbl_key.
First, I create a CTE which is the join of tbl1 and tbl2.
With cte1 as (
    Select tbl1.col1, tbl2.col2, tbl1.tbl_key
    From tbl1
    Join tbl2 on tbl1.job_no = tbl2.job_id )
Second, I create a CTE that joins to the first CTE
With cte2 as (
Select cte1.*, tbl3.col3
From cte1
Join tbl3 using (tbl_key))
Now my question is: does cte1 have a DIST KEY of tbl_key (inherited from tbl1), or tbl_id (from tbl2)? Or both? Or neither?
In Redshift, CTEs are just used to simplify the reading of SQL; they are processed exactly like subqueries, i.e. they are not materialized and therefore do not have their own dist/sort keys.
You could rewrite your code as
Select cte1.*, tbl3.col3
From (Select tbl1.col1, tbl2.col2, tbl1.tbl_key
      From tbl1
      Join tbl2 on tbl1.job_no = tbl2.job_id
     ) as cte1
Join tbl3 using (tbl_key)
which can be simplified further as
Select tbl1.col1, tbl2.col2, tbl3.col3
from tbl1
join tbl2 on tbl1.job_no = tbl2.job_id
join tbl3 using (tbl_key)
If you are able to choose your dist/sort keys, then you should consider which tables are the biggest and prioritise those accordingly.
For example, if tbl1 and tbl2 are large, it may make sense to have them distributed as you described.
However, if tbl2 and tbl3 are both large, it may make sense to distribute both on tbl_key.
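For illustration, distributing both big tables on the join column would look something like this (a sketch with made-up column types, not your actual DDL):
create table tbl2 (
    tbl_key bigint,       -- join/distribution column (assumed type)
    col2    varchar(100)
)
diststyle key
distkey (tbl_key)
sortkey (tbl_key);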
When you issue a query Redshift will compile and optimize that query as it sees fit to achieve the best performance and be logically equivalent. Your CTEs look like subqueries to the compile / optimization process and the order in which the joins are performed may have no relation to how you wrote the query.
Redshift makes these optimization choices based on the table metadata that is created / updated by ANALYZE. If you want Redshift to make smart choices on how to join your tables together you will want your table metadata to be up to date. The query plan (including join order and data distribution) is set at query compile, it is not dynamically determined during execution.
One of the choices Redshift makes is how the intermediate data of the query is distributed (your question) but remember that these intermediate results can be for a modified join order. To see what order that Redshift plans to join your tables look at the EXPLAIN plan for the query. The more tables you are joining and the more complex your query, the more choices Redshift has and the less likely it is that the EXPLAIN plan will join in the order you specified. I've worked on clients' queries with dozens of joins and many nested levels of subquery and the EXPLAIN plan is often very different than the original query as written.
So Redshift is trying to make smart choices about the join order and intermediate result distribution. For example, it will usually join small tables to large tables first and keep the distribution of the large table. But "large" and "small" here are based on post-WHERE-clause filtering and the guesses Redshift can make from metadata. The further a join is from the source table metadata (deep in the join tree), the more uncertain Redshift is about what the join's incoming and outgoing data will look like.
Here the EXPLAIN plan can give you hints about what Redshift is "thinking": if you see a DS_DIST_INNER join, Redshift is moving the data of one table (or intermediate result set) to match the other. If you see DS_DIST_BOTH, Redshift is redistributing both sets of data to some new distribution (usually one of the join-on columns). It does this to avoid having only one slice with data and all others with nothing to do, as this would be very inefficient.
To sum up: to see what Redshift is planning to do with your joins, look at the EXPLAIN plan. You can also infer some info about intermediate result distribution from the explain plan, but it doesn't provide a complete map of what Redshift plans to do.
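As a concrete starting point (reusing the tables from the question), refresh the statistics and then read the plan; the DS_DIST_* attribute on each join step shows the distribution decision:
analyze tbl1;
analyze tbl2;
analyze tbl3;

explain
with cte1 as (
    select tbl1.col1, tbl2.col2, tbl1.tbl_key
    from tbl1
    join tbl2 on tbl1.job_no = tbl2.job_id
)
select cte1.*, tbl3.col3
from cte1
join tbl3 using (tbl_key);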

Oracle `partition_by` in select clauses: does it create these partitions permanently?

I only have a superficial understanding of partitions in Oracle, but I know you can create persistent partitions, for example within a CREATE TABLE statement. What about a PARTITION BY clause within a SELECT statement? Will Oracle create a persistent partition, for caching reasons or whatever, or will the partition be "temporary" in some sense (e.g., removed at the end of the session or the query, or after some time)?
For example, for a query like
SELECT col1, first_value(col2)
over (partition by col3 order by col2 nulls last) as colx
FROM tbl
If I execute that query, will Oracle create a partition to speed up the execution if I run it again, tomorrow or three months later? I'm worried about this because I don't know whether abusing that feature could cause memory exhaustion.
PARTITION BY here is part of a window (analytic) function: it computes an aggregated result for each row over the group of rows sharing the same values of the columns listed in the PARTITION BY clause. It behaves like GROUP BY, but it is able to attach the grouped result to each row without actually collapsing the final output.
It has nothing to do with table/index partitioning.
The scope of this PARTITION BY is just this query; it has no impact on the table structure, and nothing persistent is created.
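A quick way to see the difference, against a hypothetical tbl like the one in the question: the GROUP BY version collapses rows, while the PARTITION BY version returns every row and merely annotates it; neither creates anything persistent.
-- GROUP BY: one output row per distinct col3
SELECT col3, MIN(col2) AS first_col2
FROM tbl
GROUP BY col3;

-- PARTITION BY (window function): all rows of tbl come back,
-- each carrying the first col2 value of its col3 group
SELECT col1,
       first_value(col2) OVER (PARTITION BY col3 ORDER BY col2 NULLS LAST) AS colx
FROM tbl;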

Indexed view vs temp table to improve performance of a seldom-executed query

I have a slow query whose structure is:
select
    fields
from
    table
    join manytables
    join (select fields from tables) as V1 on V1.field = ....
    join (select fields1 from othertables) as V2
    join (select fields2 from moretables) as V3
The SELECT subqueries in the last three joins are relatively simple, but the joins against them take time. If they were physical tables it would be much better.
So I found out that I could turn the subqueries into indexed views or temp tables.
By temp table I do not mean a table that is rewritten hourly as explained here, but a temp table that is created just before the query executes.
Now, my doubt comes from the fact that indexed views are OK in data warehouses, since maintaining them impacts write performance; this DB is not a data warehouse but the production DB of a non-data-intensive application.
In my case, however, the above query is executed infrequently, even though the underlying tables (the tables whose data would become part of the indexed view) are used much more often.
In this case is it OK to use indexed views, or should I favor a temp table? A table variable with a PRIMARY KEY is also an alternative.
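For reference, the temp-table variant would look roughly like this (a sketch reusing the placeholder names from the query above, since the real tables and join predicates are elided):
-- Materialize the subquery once, index the join column, then join to it
select fields
into #V1
from tables;

create clustered index ix_v1_field on #V1 (field);

select fields
from table
join manytables
join #V1 on #V1.field = ....
-- remaining joins unchanged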

How to select row data as columns in Oracle

I have two tables, as shown in the figures below. I need to select records as in the figure shown: AH_ID needs to join to the second table, ATT_ID will become the column header, and ATT_DTL_STR_VALUE needs to become that column's value.
Required output
Sounds like you have an Entity-Attribute-Value data model which relational DBs aren't the best at modeling. You may want to look into a key-value store.
However, as Justin suggested, if you're using 11g you can use the PIVOT clause as follows:
SELECT *
FROM (
    SELECT T1.AH_ID, T1.AH_DESCRIPTION, T2.ATT_ID, T2.ATT_DTL_STR_VALUE
    FROM T1
    LEFT OUTER JOIN T2 ON T1.AH_ID = T2.AH_ID
)
PIVOT (MAX(ATT_DTL_STR_VALUE) FOR ATT_ID IN (1));
This statement requires you to hard-code the ATT_ID values; however, there are ways to do it dynamically. More info can be found here.
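For example, to pivot several attributes at once with named columns (assuming ATT_ID values 1, 2 and 3 exist in T2):
SELECT *
FROM (
    SELECT T1.AH_ID, T1.AH_DESCRIPTION, T2.ATT_ID, T2.ATT_DTL_STR_VALUE
    FROM T1
    LEFT OUTER JOIN T2 ON T1.AH_ID = T2.AH_ID
)
PIVOT (MAX(ATT_DTL_STR_VALUE) FOR ATT_ID IN (1 AS att_1, 2 AS att_2, 3 AS att_3));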

Oracle - Materialized View: can this MV be made fast refreshable?

I have a materialized view that looks somewhat like the following, and I'm wondering if there is any way to make it 'fast' refreshable. Basically, I'm asking:
Can a materialized view contain Oracle functions such as COALESCE, NVL, NVL2, etc. and still be fast refreshable?
Can a materialized view contain functions that I have written myself and still be fast refreshable?
Can a materialized view contain joins to derived tables and still be fast refreshable?
I checked the Oracle documentation about this and it did not list these restrictions; however, after testing the case below on my own system, I don't believe it is possible.
Oracle version: 10g
SELECT COALESCE (col1, col2),
myOracleFunction(col3, col4)
FROM tableA a
LEFT OUTER JOIN
(SELECT id, MAX (sample_key) prim_sam_key
FROM table_sample
GROUP BY id
HAVING COUNT (1) = 1) b ON a.id = b.id;
Requirements from the link you provided that you're missing:
COUNT(*) must be specified.
The SELECT list must contain all GROUP BY columns.
Also, the following requirement indicates that, for your query, a fast refresh will only be possible if table_sample has been updated but tableA has not:
Materialized aggregate views with outer joins are fast refreshable after conventional DML and direct loads, provided only the outer table has been modified. Also, unique constraints must exist on the join columns of the inner join table. If there are outer joins, all the joins must be connected by ANDs and must use the equality (=) operator.
Finally, when asking about materialized views, it is always a good idea to state exactly what materialized view logs you have created.
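For reference, a fast-refreshable MV also needs materialized view logs on every base table. A minimal sketch for the two tables in the query (the logged column lists are assumptions based on the columns it references):
-- Log ROWID plus the referenced columns; INCLUDING NEW VALUES
-- is required for aggregate MVs
create materialized view log on table_sample
    with rowid (id, sample_key)
    including new values;

create materialized view log on tableA
    with rowid (id, col1, col2, col3, col4)
    including new values;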
