OBIEE RPD: Snowflaked table not showing up as a fact table in the BMM layer

I am using the RPD tool to create a data model for a database which has 6 tables.
Dimensions: ProductFamily, ProductionLine, Company, CurrencyExchange
Facts: WorkOrderStats, WorkOrder
But WorkOrder is not being picked up as a fact table in the BMM layer.
How do I make WorkOrder a fact table in the BMM layer?
Any input is sincerely appreciated.
Thank you

That's not how facts work in OBIEE: fact tables never join directly to other fact tables.

If you want to join your fact table to another fact table, you have to create another alias of your fact; the new alias will act as a dimension. In your case, create an alias of F_workorder and name it, for example, Dim_workorder, then join F_workorderStat to this dimension. Keep F_workorder and its existing relationships, but remove the direct join between F_workorderStat and F_workorder; it is not going to work that way.

Related

functionality of the TABLE clause in oracle

I'm struggling to understand what the TABLE clause does. Per the Oracle docs:
it transforms a collection, like a nested table, into a table which can be used in an SQL statement.
That seems clear enough, but I don't know how it works in practice.
These are the relevant types and tables:
create type movies_type as Table of ref movie_type;
create type actor_type under person_type
(
starring movies_type
) Final;
create table actor of actor_type
NESTED TABLE starring STORE AS starring_nt;
I want to list actors and the movies they starred in. This works:
select firstname, lastname, value(b).title
from actor2 a, table(a.starring) b;
but I don't understand why. Why isn't
actor2 a, table(a.starring) b
a Cartesian product?
Also, why does value(b) work here? Since it's a table of refs, I would expect to need deref, but that doesn't work.
My question is: why does this query work as intended? I would expect it to list every actor with every movie (a Cartesian product), as there are no join conditions specified, and I don't understand why value(b) works instead of deref.
I don't have a mental model for Oracle SQL; help on how to learn it properly is very much appreciated.
Thank you very much.
It's not a Cartesian product because table(a.starring) is correlated by a: for each row in a, it runs the collection expression against that row's starring nested table.
This is not a very common way of modelling data in Oracle; usually you would use a junction table to allow for a properly normalised model (which is usually much easier to query and performs better).
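A junction-table version of the same model might look like the sketch below; the table and column names are illustrative, not taken from the question:

```sql
-- Plain relational model: no nested tables, no REFs
CREATE TABLE movie (
    movie_id NUMBER PRIMARY KEY,
    title    VARCHAR2(200) NOT NULL
);

CREATE TABLE actor (
    actor_id  NUMBER PRIMARY KEY,
    firstname VARCHAR2(100),
    lastname  VARCHAR2(100)
);

-- Junction table: one row per (actor, movie) pairing
CREATE TABLE actor_movie (
    actor_id NUMBER REFERENCES actor,
    movie_id NUMBER REFERENCES movie,
    PRIMARY KEY (actor_id, movie_id)
);

-- "Actors and the movies they starred in" becomes ordinary joins
SELECT a.firstname, a.lastname, m.title
FROM   actor a
JOIN   actor_movie am ON am.actor_id = a.actor_id
JOIN   movie m        ON m.movie_id  = am.movie_id;
```

With this shape there is no correlation magic to understand: the join conditions are explicit, and the optimizer can index and rewrite them freely.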

Change 1:M relationship to M:M in an Oracle database

I have a question about how to change relationships between tables in an Oracle database while preserving existing data.
Let's say I want to represent People and Employers such that each person works for a single employer. I do this with a PERSON table and an EMPLOYER table with a 1:M relationship of EMPLOYER to PERSON. The PERSON table has columns ID, NAME, and EMPLOYER_ID, and the EMPLOYER table has columns ID, NAME, and LOCATION.
If I wanted to update this schema so a PERSON can work for more than one EMPLOYER, I could add a PERSON_EMPLOYER table with columns for each ID.
Could anyone give some pointers on the most sensible way to do this and move my existing data in? I think I can add a join table, but I'm not sure how to populate it with the existing person:employer data. After that, I guess I remove the EMPLOYER_ID column from PERSON.
Should I just be backing up the database and doing this operation in a script?
Thank you so much.
Having a backup is always a good idea.
But in my opinion, transferring data from one table to another is quite a reliable operation. You don't necessarily need a script; just do it step by step and check the changes.
Create a new PERSON_EMPLOYER table
Copy existing data to PERSON_EMPLOYER table.
COMMIT data changes.
Check data in PERSON_EMPLOYER table
Drop the EMPLOYER_ID column from the PERSON table. (There is no need to remove the column immediately; it can be done later, once you are sure everything is fine with your data.)
For transferring the data from the PERSON table to the PERSON_EMPLOYER table you can use a simple INSERT:
INSERT INTO person_employer (person_id, employer_id)
SELECT id, employer_id FROM person;
Do not forget to COMMIT this operation!
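The join table itself needs keys to stay consistent. A minimal sketch of step 1, with illustrative constraint names:

```sql
-- Composite primary key prevents duplicate person/employer pairings;
-- the foreign keys keep the join table consistent with both parents
CREATE TABLE person_employer (
    person_id   NUMBER NOT NULL REFERENCES person (id),
    employer_id NUMBER NOT NULL REFERENCES employer (id),
    CONSTRAINT person_employer_pk PRIMARY KEY (person_id, employer_id)
);
```

The composite primary key is what makes the relationship M:M rather than a second copy of the 1:M link.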

SQL Loader, Trigger saturation? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I have a situation I can't find an explanation for; here it is. (I'll use hypothetical info, since the originals are really big.)
I have a table, let's say:
table_a
-------------
name
last name
dept
status
notes
And this table has an insert trigger, which does a lot of validation on the data to set the status field of the new record according to the results of that validation. Some of the validations are:
- check for the name existing in a dictionary
- check for the last name existing in a dictionary
- check that the fields (name, last name, dept) aren't already inserted in table_b
- ... and so on
The thing is, if I do an insert into the table via a query, like
insert into table_a
(name,last_name,dept,status,notes)
values
('john','smith',1,0,'new');
it takes only 173 ms to do the whole validation process, update the status field and insert the record into the table (the validation process does all its searches via indexes).
But if I try this via SQL*Loader, reading a file with 5000 records, it takes around 40 minutes to validate and insert 149 records (of course, I killed it...).
So I tried loading the data with the trigger disabled (to check the speed), and it loaded all the records in less than 10 seconds.
So my question is: what can I do to improve this process? My only theory is that I could be saturating the database, because the load is so fast that it launches many instances of the trigger, but I really don't know.
My objective is to load around 60 files of data and validate them through the process in the trigger (though I'm willing to try other options).
I would really appreciate any help you can provide!
COMPLEMENT---------------------------------------------------------------------------------
Thanks for the answer; I'll read all about this. Now I hope you can help me with this part. Let me explain some of the functionality I need (I used a trigger because I couldn't think of anything else).
So the table has these (important) fields:
pid name lastname birthdate dept cicle notes
and the incoming data looks like this:
name lastname birthdate dept
Now, the trigger does this to the data:
Calls a function to calculate the pid (it is calculated from the name, lastname and birthdate with an algorithm).
Calls a function to check the names against the dictionary. (That's because my dictionary holds single names: if a person is named John Aaron Smith Jones, the function splits "John Aaron" in two and searches for "John" and "Aaron" in the dictionary in separate queries. That's why I didn't use a foreign key, to avoid having lots of combinations: John Aaron, John Alan, John Pierce, etc.) I'm somewhat stuck on how to implement this one with keys without changing the dictionary... maybe with a CHECK? The lastname foreign key would be a good idea.
Gets the cicle from another table according to the dept and the current date (because a person can appear twice in the table in the same dept but in a different cicle). How could I get this cicle value more efficiently, so the correct search is done?
And finally, after all this validation is done, I need to know exactly which validations weren't met (hence the notes field), so the trigger concatenates the messages of all failed validations, like this:
lastname not in dictionary, cannot calculate pid (invalid date), name not in dictionary
I know that if a constraint check isn't met, all I could do is insert the record into another table with the constraint-failed error message, but that only leaves me with one validation, am I right? I need to validate all of them and send the report to another department so they can review the data and make all the necessary adjustments.
Anyway, this is my situation right now. I'll explore the possibilities, and I hope you can shed some light on the overall process. Thank you very much for your time.
You're halfway to the solution already:
"So I tried loading the data disabling the trigger (to check speed) ... it loads like all the records in less than 10 seconds."
This is not a surprise. Your current implementation executes a lot of single-row SELECT statements for each row you insert into table A. That will inevitably give you a poor performance profile. SQL is a set-based language and performs better with multi-row operations.
So, what you need to do is find a way to replace all those SELECT statements with more efficient alternatives. Then you'll be able to drop the trigger permanently. For instance, replace the look-ups on the dictionary with foreign keys between the table A columns and the reference table. Relational integrity constraints, being internal Oracle code, perform much better than any code we can write (and work in multi-user environments too).
The rule about not inserting into table A if a combination of columns already exists in table B is more problematic. Not because it's hard to do, but because it sounds like poor relational design. If you don't want to load records into table A when they already exist in table B, why aren't you loading into table B directly? Or perhaps you have a subset of columns which should be extracted from table A and table B and formed into table C (which would have foreign key relationships with A and B)?
Anyway, leaving that to one side, you can do this with set-based SQL by replacing SQL*Loader with an external table. An external table allows us to present a CSV file to the database as if it were a regular table, which means we can use it in normal SQL statements.
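A sketch of such an external table; the directory path, file name, and column list are assumptions for illustration, not taken from the question:

```sql
-- The directory object must point at the folder holding the load files
CREATE OR REPLACE DIRECTORY data_dir AS '/path/to/load/files';

CREATE TABLE table_a_ext (
    name      VARCHAR2(100),
    last_name VARCHAR2(100),
    dept      NUMBER
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY data_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
    )
    LOCATION ('table_a.csv')
)
REJECT LIMIT UNLIMITED;
```

Once created, `SELECT * FROM table_a_ext` reads the file directly, so the whole load can be a single INSERT ... SELECT.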
So, with foreign key constraints on the dictionary and an external table, you can replace the SQL*Loader code with this statement (subject to whatever other rules are subsumed into "...and so on"):
insert into table_a
select ext.*
from external_table ext
left outer join table_b b
on (ext.name = b.name and ext.last_name = b.last_name and ext.dept=b.dept)
where b.name is null
log errors into err_table_a ('load_fail')
reject limit unlimited;
This employs the DML error logging syntax to capture constraint errors for all rows in a set-based fashion. It won't raise exceptions for rows which already exist in table B. You could either use a multi-table INSERT ALL to route rows into an overflow table, or use a MINUS set operation after the event to find rows in the external table which aren't in table A. It depends on your end goal and how you need to report things.
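The error-log table itself does not have to be hand-written; the supplied DBMS_ERRLOG package generates it from the target table:

```sql
-- Creates ERR_TABLE_A with the columns of TABLE_A
-- plus the ORA_ERR_* metadata columns used by LOG ERRORS
BEGIN
    DBMS_ERRLOG.CREATE_ERROR_LOG(
        dml_table_name     => 'TABLE_A',
        err_log_table_name => 'ERR_TABLE_A');
END;
/
```

After a load, querying ERR_TABLE_A gives the rejected rows together with the error message for each, which maps naturally onto the "report to the other department" requirement.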
Perhaps this is a more complex answer than you were expecting. Oracle SQL is a very extensive SQL implementation, with a lot of functionality for improving the efficiency of bulk operations. It really pays to read the Concepts Guide and the SQL Reference to find out just how much we can do with Oracle.

LINQ: I have 3 tables and I want to make 2 join tables

I have three tables, Question, SubjectType and CoreValue.
Question table has many to many association to SubjectType table and CoreValue table.
I want to make 2 join tables: one between Question and SubjectType and one between Question and CoreValue.
How do I make sure that the association table with a CoreValue FK and a Question FK gets filled without inserting any values into CoreValue? The CoreValue table already has the values that are needed. I just need to have FKs to Question and CoreValue in the same association table without inserting any data; the same goes for Question and SubjectType.
Thanks for advice!
Best Regards!
Just create the tables as pure join tables in the database.
EF will generate a model with navigation properties (Question - SubjectTypes, and so on). You probably want to remove the associations SubjectType.Questions and CoreValue.Questions.
See also this tutorial (the Class-Student part).
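On the database side, a "pure" join table is just the two foreign keys forming the primary key and nothing else. A sketch, assuming integer Id keys on the existing tables (names inferred from the question):

```sql
CREATE TABLE QuestionSubjectType (
    QuestionId    INT NOT NULL REFERENCES Question (Id),
    SubjectTypeId INT NOT NULL REFERENCES SubjectType (Id),
    PRIMARY KEY (QuestionId, SubjectTypeId)
);

CREATE TABLE QuestionCoreValue (
    QuestionId  INT NOT NULL REFERENCES Question (Id),
    CoreValueId INT NOT NULL REFERENCES CoreValue (Id),
    PRIMARY KEY (QuestionId, CoreValueId)
);
```

Because each table contains nothing but the two foreign keys, EF maps it as a direct many-to-many association, and linking a Question to an existing CoreValue only inserts a row into the join table, never into CoreValue itself.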

Can I create an Oracle view that automatically checks for new monthly tables?

I'm wondering if it's possible to create a view that automatically checks whether a new monthly table has been created and, if there is one, includes it.
We have a new table created each month, and each one ends with the number of the month, like:
table for January: table_1
table for February: table_2
etc...
Is it possible to create a view that takes data from all those tables and also picks up new ones when they are created?
No, a view's definition is static. You would have to replace the view each month with a new copy that included the new table; you could write a dynamic PL/SQL program to do this. Or you could create all the empty tables now and include them all in the view definition; if necessary you could postpone granting any INSERT access to the future tables until they become "live".
But really, this model is flawed - see Michael Pakhantsov's answer for a better alternative - or just have one simple table with a MONTH column.
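The dynamic PL/SQL rebuild mentioned above could be sketched like this, assuming the TABLE_n naming from the question and at least one existing table:

```sql
-- Rebuilds the view as a UNION ALL over every TABLE_n currently present
DECLARE
    v_sql VARCHAR2(32767);
BEGIN
    FOR t IN (SELECT table_name
              FROM   user_tables
              WHERE  REGEXP_LIKE(table_name, '^TABLE_[0-9]+$')
              ORDER  BY table_name) LOOP
        IF v_sql IS NOT NULL THEN
            v_sql := v_sql || ' UNION ALL ';
        END IF;
        v_sql := v_sql || 'SELECT * FROM ' || t.table_name;
    END LOOP;
    EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW all_months AS ' || v_sql;
END;
/
```

Run from a DBMS_SCHEDULER job each month, this keeps the view current without manual edits, though it does nothing to fix the underlying table-per-month design.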
It will be possible if, instead of creating a new table each month, you create a new partition of an existing table.
UPDATE:
If you have Oracle SE without the partitioning option, you can create two tables: LiveTable and ArchiveTable. Then each month you move the rows from LiveTable to ArchiveTable and clean out the live table. In this case you only need to create the view over those two tables.
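With the Partitioning option, interval partitioning even creates the monthly partitions automatically, so neither the table nor any view needs monthly maintenance. A sketch, with assumed column names and seed date:

```sql
CREATE TABLE monthly_data (
    created_date DATE NOT NULL,
    payload      VARCHAR2(4000)
)
PARTITION BY RANGE (created_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH')) (
    -- one seed partition; Oracle adds a new partition per month as rows arrive
    PARTITION p_initial VALUES LESS THAN (DATE '2013-01-01')
);
```

Queries then just filter on created_date, and partition pruning gives the same effect as querying only "this month's table".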
Another option is to create the tables in another schema with grants to the relevant user and create public synonyms to them.
As the monthly tables get created in the local schema, they'll take precedence over the public synonyms and the view will pick them up. It will still get invalidated and need recompiling, but the actual view text shouldn't need changing, which may be simpler from a code-control point of view.
You can write a procedure or function that looks at USER_TABLES or ALL_TABLES to determine whether a table exists, generates dynamic SQL, and returns a ref cursor with the data. The same can be done with a pipelined function.
