Good afternoon friends,
I have some questions regarding Oracle and what would be better for building star models (BI models).
Which would be better for building dimensions and fact tables: materialized views or tables?
With the best performance in mind, these are the differences I can think of between materialized views and tables.
ADVANTAGES of materialized views:
MVs are easy to develop.
Incremental load management is easy; it does not need much development (see the sketch after this list).
DISADVANTAGE of materialized views:
For very large data volumes, it seems to me that they do not support partitioning.
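To make the incremental-load point concrete, this is the kind of MV I have in mind (a rough sketch only; the table and column names are made up): an aggregate MV that Oracle refreshes incrementally from a materialized view log.

```sql
-- MV log on the detail table, required for incremental (FAST) refresh
CREATE MATERIALIZED VIEW LOG ON sales_detail
  WITH ROWID, SEQUENCE (sale_date, product_id, amount)
  INCLUDING NEW VALUES;

-- Aggregate MV that Oracle can refresh incrementally after each load
CREATE MATERIALIZED VIEW sales_by_day_mv
  BUILD IMMEDIATE
  REFRESH FAST ON DEMAND
AS
SELECT sale_date,
       product_id,
       COUNT(*)      AS row_count,     -- COUNT(*) and COUNT(col) are needed
       COUNT(amount) AS amount_count,  -- for SUM() to stay fast-refreshable
       SUM(amount)   AS total_amount
FROM   sales_detail
GROUP  BY sale_date, product_id;

-- Incremental refresh, e.g. called by the load job:
-- EXECUTE DBMS_MVIEW.REFRESH('SALES_BY_DAY_MV', 'F');
```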
I would appreciate any help or points of view you could recommend, or, if someone has built their entire star model with materialized views, any suggestions they could provide.
What is not clear to me is this: although I can build the model with either tables or materialized views, at what point, or based on what factor, should I choose one over the other?
Thanks a lot,
I am trying to study object-relational databases, and I am having a really hard time finding information on them and understanding the concept. I have found only a few examples, probably because English is not my first language.
I want to be able to make an object-relational database design model in the form of a UML class diagram and then create the object-relational (SQL-99) tables in Oracle. And I don't know how to do that.
This is a navigational model example (I think):
Navigational models are a legacy of obsolete database technologies that have been supplanted by the relational model:
The prehistoric hierarchical model grouped records in a tree, so every record had a pointer to its related records. Navigation went top-down and was, as far as I know, unidirectional.
The less prehistoric network model was an alternative in which navigation between records was more powerful and not necessarily limited to tree structures. It gained some popularity because it could relate records very efficiently, at a time when relational databases were not very performant on smaller computers (remember that Oracle 5 had a table-locking mechanism!).
Both were based on fixed, structured records, and the navigation had to be fixed up front. This is the past; don't go there if you want to learn modern ORM or NoSQL. The translation from a UML class diagram to a database model is straightforward. If you need to subdivide large classes into smaller ones (like Employee and Address), it means that your classes are too large: fix it in the original UML model by decomposing the classes into smaller classes.
UML allows you to make associations navigable (though that's not useful if you target a relational database in the end). There are then a couple of techniques that let you easily build the tables from that model, using for example identity fields, foreign key mapping (for one-to-one or one-to-many), or association table mapping (for many-to-many). There are full articles and books on these techniques, so it would take too long to develop them here, but a rough sketch follows.
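For instance, a minimal sketch of those three mappings in plain SQL (the Employee/Address/Project classes and all table names are purely illustrative, not from the question):

```sql
-- Identity field: a surrogate primary key per class
CREATE TABLE employee (
  employee_id  INTEGER PRIMARY KEY,
  name         VARCHAR(100) NOT NULL
);

-- Foreign key mapping: one-to-many (one employee, many addresses)
CREATE TABLE address (
  address_id   INTEGER PRIMARY KEY,
  employee_id  INTEGER NOT NULL REFERENCES employee (employee_id),
  city         VARCHAR(100)
);

-- Association table mapping: many-to-many (employees work on projects)
CREATE TABLE project (
  project_id   INTEGER PRIMARY KEY,
  title        VARCHAR(100)
);

CREATE TABLE employee_project (
  employee_id  INTEGER REFERENCES employee (employee_id),
  project_id   INTEGER REFERENCES project (project_id),
  PRIMARY KEY (employee_id, project_id)
);
```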
P.S.: you need to improve your English; the French literature on the topic is too poor. I can tell from my own experience. ;-)
I did a bit of R&D on fact tables, on whether they are normalized or de-normalized.
I came across some findings that left me confused.
According to Kimball:
Dimensional models combine normalized and denormalized table structures. The dimension tables of descriptive information are highly denormalized with detailed and hierarchical roll-up attributes in the same table. Meanwhile, the fact tables with performance metrics are typically normalized. While we advise against a fully normalized approach with snowflaked dimension attributes in separate tables (creating blizzard-like conditions for the business user), a single denormalized big wide table containing both metrics and descriptions in the same table is also ill-advised.
The other finding, which I also think is reasonable, is by fazalhp at GeekInterview:
The main idea of a DW is de-normalizing the data for faster access by the reporting tool... so if you're building a DW, 90% of it has to be de-normalized, and of course the fact table has to be de-normalized...
So my question is: are fact tables normalized or de-normalized? If either, then how and why?
From the point of view of relational database design theory, dimension tables are usually in 2NF and fact tables anywhere between 2NF and 6NF.
However, dimensional modelling is a methodology unto itself, tailored to:
one use case, namely reporting
mostly one basic type (pattern) of query
one main user category -- business analyst, or similar
row-store RDBMSs like Oracle, SQL Server, Postgres ...
one independently controlled load/update process (ETL); all other clients are read-only
There are other DW design methodologies out there, like
Inmon's -- data structure driven
Data Vault -- data structure driven
Anchor modelling -- schema evolution driven
The main thing is not to mix up database design theory with a specific design methodology. You may look at a certain methodology through the database design theory perspective, but you have to study each methodology separately.
Most people working with a data warehouse are familiar with transactional RDBMSs and apply various levels of normalization, so those concepts are used to describe working with a star schema. What they're really doing is trying to get you to unlearn all those normalization habits. This can get confusing because there is a tendency to focus on what "not" to do.
The fact table(s) will probably be the most normalized, since they usually contain just numerical values along with various IDs for linking to the dimensions. The key question with fact tables is how granular you need to get with your data: for purchases, for example, that could be specific line items by product in an order, or data aggregated at a daily, weekly, or monthly level.
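As a rough illustration of that grain decision (all names hypothetical), a line-item-grain fact table holding only keys and measures, next to a couple of denormalized dimensions, might look like this:

```sql
CREATE TABLE dim_date (
  date_key      INTEGER PRIMARY KEY,
  calendar_date DATE,
  month_name    VARCHAR(20),
  year_num      INTEGER
);

CREATE TABLE dim_product (
  product_key  INTEGER PRIMARY KEY,
  product_name VARCHAR(100),
  category     VARCHAR(50),   -- hierarchy rolled into one table (denormalized)
  department   VARCHAR(50)
);

-- One row per order line: the grain decision discussed above
CREATE TABLE fact_purchase_line (
  date_key    INTEGER NOT NULL REFERENCES dim_date (date_key),
  product_key INTEGER NOT NULL REFERENCES dim_product (product_key),
  order_id    INTEGER NOT NULL,   -- degenerate dimension (just an ID, no table)
  quantity    INTEGER,
  amount      NUMERIC(12,2)
);
```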
My suggestion is to keep searching and studying how to design a warehouse based on your needs. Don't aim for high normal forms; think more about the reports you want to generate and the analysis capabilities you want to give your users.
Currently, I'm involved in a warehouse-based intelligent transaction analysis banking system featuring customer churn behavior, fraud detection, and CRM analysis. We are using Oracle as the database, and it is entirely a data warehousing project, with data mining algorithms used for the analysis.
We have records for about 1,000 customers of a bank. For modeling, is it better to use the star schema, the snowflake schema, or a constellation schema? I know the basic difference between the star and snowflake schemas: in the snowflake schema the dimension tables are normalized (a.k.a. snowflaking), which may be problematic for joins in the case of a large database.
So, which schema would be better for my case? Answers from experienced programmers involved in data warehousing are highly welcome!
Thanks in advance!
In brief, my assumption going into a project like this would be that a star schema is appropriate. I might modify that if a dimension appeared to be getting too large to full-scan efficiently and snowflaking it would meaningfully improve query efficiency, unless that dimension joins to the fact table on a partitioning key (because it is difficult to apply partition pruning when the predicate is placed on a snowflaked dimension).
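To illustrate the partition-pruning point (all table and column names here are hypothetical):

```sql
-- Fact table range-partitioned on its date key
CREATE TABLE sales_fact (
  date_key   NUMBER(8)    NOT NULL,
  store_key  NUMBER       NOT NULL,
  amount     NUMBER(12,2)
)
PARTITION BY RANGE (date_key)
( PARTITION p_2010 VALUES LESS THAN (20110101),
  PARTITION p_2011 VALUES LESS THAN (20120101) );

-- Snowflaked calendar: fiscal periods sit one join further away from the fact
CREATE TABLE fiscal_period (
  period_key  NUMBER PRIMARY KEY,
  period_name VARCHAR2(20)
);

CREATE TABLE date_dim (
  date_key   NUMBER(8) PRIMARY KEY,
  period_key NUMBER REFERENCES fiscal_period (period_key)
);

-- Predicate directly on the partitioning key: the optimizer can prune partitions.
SELECT SUM(amount)
FROM   sales_fact
WHERE  date_key BETWEEN 20110101 AND 20110131;

-- Predicate reachable only through the snowflaked dimension: the filter on
-- fiscal_period does not translate into a range on date_key, so every
-- partition may have to be scanned.
SELECT SUM(f.amount)
FROM   sales_fact f
JOIN   date_dim d        ON d.date_key   = f.date_key
JOIN   fiscal_period fp  ON fp.period_key = d.period_key
WHERE  fp.period_name = 'FY2011-P01';
```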
I am currently trying to improve the performance of a web application. The goal of the application is to provide (real-time) analytics. We have a database model that is similar to a star schema: a few fact tables and many dimension tables. The database runs on MySQL with the MyISAM engine.
The fact table size can easily go into the upper millions of rows, and some dimension tables can also reach the millions.
Now the point is, select queries can get awfully slow when the dimension tables are joined onto the fact tables and aggregations are done on top. The first thing that comes to mind on hearing this is: why not precalculate the data? That is not possible, because users are allowed to apply several freely customizable filters.
So what I need is an all-in-one system suitable for every purpose ;) Sadly, it hasn't been invented yet. So I came up with the idea of combining two existing systems: mixing a row-oriented and a column-oriented database (e.g. InfiniDB or Infobright). I would keep the MySQL MyISAM solution (for fast inserts and row-based queries), add a column-oriented database (for fast aggregation operations over a few columns), and fill it periodically (nightly) via a cron job. The problem would be when current data (it must be real time) is queried; then I might need to get data from both databases, which can complicate things.
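To make the idea a bit more concrete, here is roughly what I picture, assuming InfiniDB is installed as a storage engine in the same MySQL server (the engine name and every table/column name below are just my own placeholders):

```sql
-- Row store: takes the fast inserts and serves row-based queries
CREATE TABLE fact_events_row (
  event_id   BIGINT,
  event_time DATETIME,
  dim1_id    INT,
  dim2_id    INT,
  metric     DECIMAL(12,2)
) ENGINE=MyISAM;

-- Column store copy for aggregations (assumes the InfiniDB engine is available)
CREATE TABLE fact_events_col (
  event_id   BIGINT,
  event_time DATETIME,
  dim1_id    INT,
  dim2_id    INT,
  metric     DECIMAL(12,2)
) ENGINE=InfiniDB;

-- Nightly cron job (assumed to run just after midnight): copy yesterday's rows across
INSERT INTO fact_events_col
SELECT *
FROM   fact_events_row
WHERE  event_time >= CURDATE() - INTERVAL 1 DAY
  AND  event_time <  CURDATE();

-- Queries hit a view: historical data from the column store,
-- today's not-yet-loaded rows from the row store (so nothing is double-counted)
CREATE OR REPLACE VIEW fact_events_all AS
SELECT * FROM fact_events_col
UNION ALL
SELECT * FROM fact_events_row WHERE event_time >= CURDATE();
```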
First tests with InfiniDB showed really good performance on aggregations over a few columns, so I really think this could help me speed up the application.
So the question is: is this a good idea? Has somebody perhaps already done this? Maybe there are better ways to do it.
I have no experience with column-oriented databases yet, and I'm also not sure what their schema should look like. First tests showed good performance both on the same star-schema-like structure and on a big flat-table-like structure.
I hope this question fits on SO.
Greenplum, which is a proprietary (but mostly free-as-in-beer) extension of PostgreSQL, supports both column-oriented and row-oriented tables with highly customizable compression. Further, you can mix settings within the same table if you expect that some parts will experience a heavy transactional load while others won't. For example, you could have the most recent year row-oriented and uncompressed, the prior year column-oriented and quicklz-compressed, and all historical years column-oriented and bz2-compressed.
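A sketch of what that mixed layout can look like (table and partition names are illustrative, and the exact compression types available depend on your Greenplum version):

```sql
CREATE TABLE sales_history (
  sale_date   date,
  customer_id bigint,
  amount      numeric(12,2)
)
DISTRIBUTED BY (customer_id)
PARTITION BY RANGE (sale_date)
(
  -- older history: column-oriented, heavier compression
  PARTITION history START (date '2000-01-01') END (date '2010-01-01')
    WITH (appendonly=true, orientation=column, compresstype=zlib, compresslevel=7),
  -- prior year: column-oriented, lightly compressed
  PARTITION yr_2010 START (date '2010-01-01') END (date '2011-01-01')
    WITH (appendonly=true, orientation=column, compresstype=quicklz),
  -- most recent year: no storage options, so it stays a plain row-oriented,
  -- uncompressed heap and takes the transactional load
  PARTITION yr_2011 START (date '2011-01-01') END (date '2012-01-01')
);
```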
Greenplum is free for use on individual servers, but if you need to scale out with its MPP features (which are its primary selling point) it does cost significant amounts of money, as they're targeting large enterprise customers.
(Disclaimer: I've dealt with Greenplum professionally, but only in the context of evaluating their software for purchase.)
As for the issue of how to set up the schema, it's hard to say much without knowing the particulars of your data, but in general having compressed column-oriented tables should make all of your intuitions about schema design go out the window.
In particular, normalization is almost never worth the effort, and you can sometimes get big gains in performance by denormalizing to borderline-comical levels of redundancy. If the data never hits disk in an uncompressed state, you might just not care that you're repeating each customer's name 40,000 times. Infobright's compression algorithms are designed specifically for this sort of application, and it's not uncommon at all to end up with 40-to-1 ratios between the logical and physical sizes of your tables.
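To caricature that point, a single flat table at transaction grain that happily repeats the customer attributes on every row (hypothetical names, Greenplum-style storage options); with column-oriented compression the repeated values cost almost nothing:

```sql
CREATE TABLE txn_flat (
  txn_id            bigint,
  txn_date          date,
  amount            numeric(12,2),
  customer_id       bigint,
  customer_name     text,   -- repeated on every one of the customer's rows
  customer_city     text,   -- ditto; long runs of identical values compress very well
  product_name      text,
  product_category  text
)
WITH (appendonly=true, orientation=column, compresstype=quicklz)
DISTRIBUTED BY (txn_id);
```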
Does anyone have resources that give a list of things to consider when designing a ROLAP cube, as opposed to a MOLAP one? (I'm doing it in Pentaho, but I guess the principles are not dissimilar for other implementations.) For example, I'm thinking of things like:
should extra transformational work be done at the ETL stage to reduce computational work when querying the cube?
should all my dimension tables be in the same database as my cube?
I'm a Pentaho implementor in Indonesia. First, of course, you should try to aggregate all your measures, grouped by the surrogate keys involved.
And in Mondrian you can "cache" some computations using additional aggregate tables. You can create them with the Pentaho Aggregate Designer, but after that you will need additional work in your data warehouse / ETL stage.
Regards,
Feris
http://pentaho-en.phi-integration.com
First off, the designs are similar, but they are driven by different performance and scalability strategies.
Secondly, the ETL process is pretty much the same, except that you'll typically see a lot more data in a ROLAP cube than in a MOLAP cube because of the scalability features of relational databases. And you'll often see a ROLAP cube within a non-ROLAP database (a warehouse, or even a transactional database) that does more than just support ROLAP.
Lastly, you'll typically generate aggregate tables if you've got much data volume. That aggregation can be done in a lot of different ways, but I'd say it is not typically driven by your ETL process unless you lack the ability to manage a separate asynchronous process, or have data volumes that make it impractical to run periodic summary jobs.
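For example, a typical aggregate table just collapses the fact to a coarser grain, assuming a sales_fact / dim_date pair like in an ordinary star schema (the table and column names here are hypothetical; Mondrian-style aggregate tables also carry a fact count column):

```sql
-- Monthly / per-product rollup of a line-level sales fact
CREATE TABLE agg_sales_month_product AS
SELECT d.year_month,
       f.product_key,
       SUM(f.amount)   AS amount,
       SUM(f.quantity) AS quantity,
       COUNT(*)        AS fact_count   -- number of underlying fact rows
FROM   sales_fact f
JOIN   dim_date d ON d.date_key = f.date_key
GROUP  BY d.year_month, f.product_key;
```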
Thanks to Feris for the link and input, but in the end I went for this book:
http://www.amazon.com/Pentaho-Solutions-Business-Intelligence-Warehousing/dp/0470484322/ref=sr_1_1?ie=UTF8&s=books&qid=1258408259&sr=8-1
I had a good long look at the Mondrian site + docs, but the book seems more comprehensive.