I've got this really nasty view that I'm trying to make faster by performing some joins ahead of time via materialized views. My problem is that the most expensive joins, and therefore the ones most worthwhile to pre-execute, don't play nicely with materialized views.
The goal of the application is to provide the freshest data possible, so if I build materialized views, they need to fast refresh on commit (though maybe there are other approaches I'm unaware of). Fast refresh has limitations; specifically, you must include the rowids of the joined tables. See this thread here, but my problem is a little different, as the nature of my join requires me to aggregate in order to get the right record.
Here's what I want to "pre-execute" (or optimize in some other genius way):
CREATE MATERIALIZED VIEW testing
NOLOGGING
CACHE
BUILD IMMEDIATE
REFRESH FAST ON COMMIT
AS
SELECT br.id, br.rowid AS br_rowid, MAX(mr.id) AS modifier_id --somehow fit mr.rowid in here
FROM tableA br --base record
LEFT OUTER JOIN tableA mr --modifier record
  ON br.external_key = mr.external_key
  AND mr.record_type_code IN ('SOME','TYPE')
  AND mr.status_code IN ('SOME','STATUS')
GROUP BY br.id, br.rowid;
Basically, it's a self-join, because 0 to *n* modifications get made to the entity, all of which appear as subsequent rows in the same table. I'm selecting the most recent one of a given type (and I do this additional times for other types). To get the above working, I'd have to include the rowid of both br and mr, and I can't wrap my brain around a way to do that. I've considered rank() and ROWNUM instead of aggregating with MAX(), but can't get the logic right.
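For reference, the kind of rank()-style query I've been fumbling with looks roughly like this (just a sketch using the same table and column names as above; it picks the newest qualifying modifier row per base row, but analytic functions rule out fast refresh anyway):

SELECT br.id,
       br.rowid AS br_rowid,
       mr.id    AS modifier_id,
       mr.mr_rowid
FROM   tableA br
LEFT OUTER JOIN (
         SELECT t.id,
                t.external_key,
                t.rowid AS mr_rowid,
                ROW_NUMBER() OVER (PARTITION BY t.external_key ORDER BY t.id DESC) AS rn
         FROM   tableA t
         WHERE  t.record_type_code IN ('SOME','TYPE')
         AND    t.status_code IN ('SOME','STATUS')
       ) mr
  ON  br.external_key = mr.external_key
  AND mr.rn = 1;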
EDIT:
Not sure a fast refresh MV is in the cards for me: even if I make the refresh on demand and remove the aggregation entirely (assuming there is exactly one row), Oracle tells me the query is too complex for a fast refresh. So now I'm in need of other ideas...
It might not be applicable in your situation, but possibly you could denormalize your table.
For example, if you have multiple language dependent names, you could just have named columns for each language.
Or, if your access is index-based, consider varrays or nested tables.
Another idea is to use triggers: on insert/update/delete, update another table (or tables), and use that table for the query. Possibly you can pre-calculate aggregates this way as well.
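A rough sketch of that idea for the question above, assuming a hand-maintained summary table LATEST_MODIFIER(external_key, modifier_id) (the table and trigger names are made up, and deletes or updates of modifier rows would need extra handling):

CREATE OR REPLACE TRIGGER trg_tablea_latest_mod
AFTER INSERT ON tableA
FOR EACH ROW
WHEN (NEW.record_type_code IN ('SOME','TYPE') AND NEW.status_code IN ('SOME','STATUS'))
BEGIN
  -- keep only the highest qualifying modifier id seen so far per external_key
  MERGE INTO latest_modifier lm
  USING (SELECT :NEW.external_key AS external_key, :NEW.id AS modifier_id FROM dual) src
  ON (lm.external_key = src.external_key)
  WHEN MATCHED THEN
    UPDATE SET lm.modifier_id = src.modifier_id
    WHERE src.modifier_id > lm.modifier_id
  WHEN NOT MATCHED THEN
    INSERT (external_key, modifier_id)
    VALUES (src.external_key, src.modifier_id);
END;
/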
I would look into using a materialised view to do the aggregation only, so you're just storing EXTERNAL_KEY and MAX(ID).
If you have deletes occurring on the master table then include count(*) as well.
That should give you fast refresh capability.
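A sketch of what that could look like, assuming a materialized view log on tableA (names are made up; I'd run DBMS_MVIEW.EXPLAIN_MVIEW against it to confirm which refresh capabilities you actually get, since the WHERE filter and the MAX can both restrict fast refresh, particularly for deletes):

CREATE MATERIALIZED VIEW LOG ON tableA
  WITH ROWID, SEQUENCE (external_key, id, record_type_code, status_code)
  INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW latest_modifier_mv
BUILD IMMEDIATE
REFRESH FAST ON COMMIT
AS
SELECT external_key,
       MAX(id)   AS max_id,
       COUNT(id) AS cnt_id,   -- helps the refresh mechanism
       COUNT(*)  AS cnt_all   -- as suggested above, needed if deletes occur
FROM   tableA
WHERE  record_type_code IN ('SOME','TYPE')
AND    status_code IN ('SOME','STATUS')
GROUP BY external_key;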
My case is that a third party prepares a table in our schema domain, on which we run different Spring Batch jobs that look for mutations (the diff between the given third-party table and our own tables). This table will contain about 200k records on average.
My question is simply: does generating a materialized view up front provide any benefit vs running the query at runtime?
Since the third-party table is populated on our command (basically it's a db boolean field that is set to 1, after which a scheduler picks it up to populate the table; don't ask me why it's done this way), the query needs to run anyway.
Obviously from an application point of view, it seems more performant to query a flat materialized view. However, I'm not sure if there is any real performance benefit, since the materialized view needs to be built at db level.
Thanks.
The benefit of a materialized view here is if you are running the query multiple times (more so if the query is expensive and/or there is a big drop in cardinality).
If you are only hitting the query once then there isn't going to be a huge amount in it. You are running the same query either way, and you have the overhead of inserting into the materialized view. You also have the benefit that you can tune this a lot more easily than a query issued via JPA, and you could apply things like compression so less data is transferred back to the application, but for 200k rows any difference is likely to be small.
All in all, unless you are running the same query multiple times, I wouldn't bother.
Update
One other thing to consider is coupling. Referencing a materialized view directly in JPA would allow you to update the logic without updating the application, but the flip side is that the logic is hidden outside the application, which can make debugging a pain.
Also, if you are just referencing the materialized view directly and not using any query rewrite or rollup features, then a simple table created via CTAS would actually be better: you still have the precomputed data without the (small) overhead of maintaining the materialized view.
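For illustration, the CTAS alternative could be as simple as something like this (table and column names are placeholders for whatever your mutation query actually joins):

CREATE TABLE mutation_snapshot AS
SELECT t.id,
       t.some_column,
       o.other_column
FROM   third_party_table t
JOIN   our_table o
  ON   o.id = t.id;

Each batch run could then drop and recreate (or truncate and repopulate) that table before querying it.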
I am trying to prepare a DB design for an APEX application. The requirement is as follows.
On the Departments IR page, users are asking for the columns below:
Number of employees in each department (Department may or may not have employees)
Primary Location for Department (a department can have multiple addresses; addresses are stored in another table, along with a primary flag)
Alternative Manager's Email Address for Department (the alt_manager_id column; this is an optional column and refers to the employees table)
I can implement these requirements using either inline subqueries or OUTER JOINs. But these approaches will have a performance impact as the data grows (to hundreds of thousands of rows). So my question is: is it OK to store these data directly in the "Departments" table and update the "Departments" table when the child tables get updated? Basically, I am trying to store summary data in the master table instead of deriving it as and when needed from the child tables. Is this considered bad practice? Is it OK to implement such a DB design?
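For context, the kind of query I would otherwise write looks roughly like this (table and column names are simplified):

SELECT d.department_id,
       d.department_name,
       (SELECT COUNT(*)
        FROM   employees e
        WHERE  e.department_id = d.department_id) AS employee_count,
       a.address_line                             AS primary_location,
       m.email                                    AS alt_manager_email
FROM   departments d
LEFT OUTER JOIN department_addresses a
  ON  a.department_id = d.department_id
  AND a.primary_flag = 'Y'
LEFT OUTER JOIN employees m
  ON  m.employee_id = d.alt_manager_id;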
Thank you
"Is this considered bad practice?"
Usually yes. There are several problems with maintaining summary detail information in a master record.
Your inserts into child tables (and deletes if you have them) now also have to take a lock on the master record, to increment the count. This adds complexity to what should be simple transactions.
It also has two performance hits: the additional overhead of maintaining the counts and the potential for sessions to hang in multi-user environments.
Note that you are adding a definite performance hit to your insert activity for a possible saving in the performance of aggregating queries.
The good practice is to just run the counts when you need the summaries. Tune the queries if you need to.
If you think you really are going to be querying the summary data often enough for the workload to be a problem you should consider building materialized views for the summary queries. Then, when you enable query rewrites, Oracle will transparently query the materialized view if it can satisfy the query rather than re-running the aggregations. This is a technique which is used a lot in data warehouses, but there's no reason not to use it in OLTP environments if you really have the data volumes to justify it. Find out more.
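As a sketch of that approach, using the employee-count requirement from the question (the names are illustrative, and you would need a materialized view log for ON COMMIT fast refresh):

CREATE MATERIALIZED VIEW LOG ON employees
  WITH ROWID, SEQUENCE (department_id)
  INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW dept_employee_count_mv
  BUILD IMMEDIATE
  REFRESH FAST ON COMMIT
  ENABLE QUERY REWRITE
AS
SELECT department_id,
       COUNT(*) AS employee_count
FROM   employees
GROUP BY department_id;

With query rewrite enabled, a report query that counts employees per department can be answered from the materialized view without touching the detail rows.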
Generally, try the simplest thing which could work first. Only look to do something different (like building a materialized view for aggregations) when you know you have a demonstrable problem with performance.
I would like to write a MERGE statement in a Vertica database. I know it can't be used directly, and insert/update has to be combined to get the desired effect.
The merge statement looks like this:
MERGE INTO table c
USING (SELECT b.field1, b.field2 AS aeg  -- field2 would normally need an aggregate here because of the GROUP BY
       FROM table a, table b
       WHERE a.field3 = 'Y'
       AND a.field4 = b.field4
       GROUP BY b.field1) t
ON (c.field1 = t.field1)
WHEN MATCHED THEN
  UPDATE SET c.UUS_NAIT = t.aeg;
I would just like to see an example of MERGE being rewritten as an insert/update.
You really don't want to do an update in Vertica. Inserting is fine. Selects are fine. But I would highly recommend staying away from anything that updates or deletes.
The system is optimized for reading large amounts of data and for inserting large amounts of data. Since you want to do an operation that falls outside those two, I would advise against it.
As you stated, you can break apart the statement into an insert and an update.
What I would recommend (not knowing the details of what you want to do, so this is subject to change):
1) Insert data from an outside source into a staging table.
2) Perform an INSERT-SELECT from that table into the table you desire, using the criteria you are thinking about - either with a join, or in two statements with subqueries against the table you want to test (see the sketch after this list).
3) Truncate the staging table.
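A minimal sketch of step 2, assuming a staging table stg_updates and a target table target_table (both names are made up):

-- step 2: copy across only the rows that don't already exist in the target
INSERT INTO target_table (field1, uus_nait)
SELECT s.field1, s.field2
FROM   stg_updates s
WHERE  NOT EXISTS (SELECT 1
                   FROM   target_table t
                   WHERE  t.field1 = s.field1);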
It seems convoluted, I guess, but you really don't want to do UPDATEs. And if you think that is a hassle, please remember that what causes the hassle is what gives you your gains on SELECT statements.
If you want an example of a MERGE statement, follow the link; that is the link to the Vertica documentation. Remember to follow the instructions closely: you cannot write a MERGE with WHEN NOT MATCHED followed by WHEN MATCHED. It has to follow the sequence given in the usage description in the documentation (which is the other way round), although you can choose to omit one of the clauses completely.
I'm not sure if you are aware of the fact that in Vertica, data which is updated or deleted is not really removed from the table, but just marked as 'deleted'. This sort of data can be manually removed by running: SELECT PURGE_TABLE('schemaName.tableName');
You might need super user permissions to do that on that schema.
More about this can be read here: Vertica Documentation; Purge Data.
An example of this from Vertica's Website: Update and Insert Simultaneously using MERGE
I agree that MERGE is supported in Vertica version 6.0. But if Vertica's AHM or epoch management settings are set to save a lot of history (deleted) data, it will slow down your updates. The update speeds might go from bad, to worse, to horrible.
What I generally do to get rid of deleted (old) data is run the purge on the table after updating the table. This has helped maintain the speed of the updates.
Merge is useful where you definitely need to run updates. Especially incremental daily updates which might update millions of rows.
Getting to your answer: I don't think Vertica supports subqueries in MERGE. You would get the following:
ERROR 0: Subquery in MERGE is not supported
When I had a similar use-case, I created a view using the sub-query and merged into the destination table using the newly created view as my source table. That should let you keep using MERGE operations in Vertica and regular PURGEs should let you keep your updates fast.
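Roughly what that looks like, reusing the shape of the query from the question (names are made up, and I've used MAX() because the GROUP BY requires some aggregate on field2):

CREATE VIEW merge_source AS
SELECT b.field1, MAX(b.field2) AS field2
FROM   table_a a
JOIN   table_b b ON a.field4 = b.field4
WHERE  a.field3 = 'Y'
GROUP BY b.field1;

MERGE INTO table_c c
USING merge_source t
ON (c.field1 = t.field1)
WHEN MATCHED THEN
  UPDATE SET uus_nait = t.field2;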
In fact, MERGE also helps avoid duplicate entries during inserts or updates if you use the correct combination of fields in the ON clause, which should ideally be a join on the primary keys.
I like geoff's answer in general. It seems counterintuitive, but you'll have better results creating a new table with the rows you want in it versus modifying an existing one.
That said, doing so would only be worth it once the table gets past a certain size, or past a certain number of UPDATEs. If you're talking about a table <1mil rows, I might chance it and do the updates in place, and then purge to get rid of tombstoned rows.
To be clear, Vertica is not well suited for single row updates but large bulk updates are much less of an issue. I would not recommend re-creating the entire table, I would look into strategies around recreating partitions or bulk updates from staging tables.
If I have a VIEW with a bunch of INNER JOINs but I query against that VIEW SELECTing only columns that come from the main table, will SQL Server ignore the unnecessary joins in the VIEW while executing or do those joins still need to happen for some reason?
If it makes a difference, this is on SQL Server 2008 R2. I know that in either case this is already not a great solution, but I'm attempting to find the lesser of two evils.
It might ignore the joins if they don't actually change the semantics. One example of this might be if you have a trusted foreign key constraint between the tables and you are only selecting columns from the referencing table (See example 9 in this article).
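For illustration, a minimal shape where the optimizer can eliminate the join (hypothetical tables; the foreign key must be trusted, i.e. not created or re-enabled WITH NOCHECK):

CREATE TABLE dbo.Parent (
    ParentId INT NOT NULL PRIMARY KEY
);

CREATE TABLE dbo.Child (
    ChildId  INT NOT NULL PRIMARY KEY,
    ParentId INT NOT NULL
        CONSTRAINT FK_Child_Parent REFERENCES dbo.Parent (ParentId)
);
GO

CREATE VIEW dbo.ChildWithParent AS
SELECT c.ChildId, c.ParentId, p.ParentId AS ParentPk
FROM dbo.Child c
INNER JOIN dbo.Parent p ON p.ParentId = c.ParentId;
GO

-- Only Child columns are referenced, so the trusted FK lets the
-- optimizer drop dbo.Parent from the plan entirely.
SELECT ChildId, ParentId
FROM dbo.ChildWithParent;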
You would need to check the execution plan to be sure for your specific case.
If you don't pull fields from those tables, it may be faster to use an EXISTS clause - this will also prevent rows from the JOINed tables causing dupes in your results.
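For example, a join that only exists to filter rows could be expressed as (hypothetical names again):

SELECT t.Col1, t.Col2
FROM dbo.MainTable t
WHERE EXISTS (SELECT 1
              FROM dbo.OtherTable o
              WHERE o.MainId = t.MainId);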
Even if the optimizer ignores unnecessary joins you should just create another view to handle your particular case. Use and abuse of views (such as this case) can get out of hand and lead to obfuscation, confusion and very significant performance issues.
You might even consider refactoring the view that you're planning on using by having it join a set of "smaller" views to deliver the same data set that it does now... if it makes sense to do that of course.
Hi,
I am going to rewrite a stored procedure in LINQ.
What this SP does is join 12 tables, get the data, and insert it into another table.
It has 7 left outer joins and 4 inner joins, and returns one row of data.
Now the questions:
1) What is the best way to achieve these joins in LINQ?
2) Do you think this affects performance? (It's only retrieving one row of data at a given point in time.)
Please advise.
Thanks
SNA.
You might want to check this question for the multiple joins. I usually prefer lambda syntax, but YMMV.
As for performance: I doubt the query performance itself will be affected, but there may be some overhead in figuring out the execution plan, since it's such a complicated query. The biggest performance hit will likely be the extra database round trip you will need compared to the stored procedure. If I understand you correctly, your current SP does the SELECT and INSERT all at once. Using LINQ to SQL or LINQ to Entities, you will need to fetch the data first before you can actually write it to the other table.
So, it depends on your usage if rewriting is warranted. Alternatively, you can add stored procedures to your data model. It will be exposed as a method on your data context.