OBIEE Admin Tool - parent-child hierarchy not working properly - Oracle

I followed the official documentation while creating a Parent/Child hierarchy in the OBIEE Admin tool ...
I already have a working EMP_MANAGER mapping table with the structure (ancestor_id, member_id, distance, is_leaf).
I have employee (sales rep), parent_child (sales rep parent child), and position (sales rep position) tables ... together with a Sales_data (revenue) table.
I connected them as shown in the documentation (Physical layer):
Sales_data (inner join) -> Parent_child (inner join) -> Employee (inner join) -> Position
Then I dragged Sales_data and Employee into the Business Model layer with the connection
Sales_data (left outer join) -> Employee
Same as in the docs (I just changed the inner join to a left outer join).
If I drag the hierarchy into the compound layout in OBI Answers, I can drill down to the leaves without any problem ... it works perfectly.
But when I try to add a measure, for example the number of active contracts ... the view changes to a PIVOT TABLE with no results (all I can do is undo the last action).
If I don't drill down and leave only the TOP employees, I get the table, but with NULL in "Active contracts" instead of the SUM over all subordinates for each employee.
Do you know where the problem could be?
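For reference, this is roughly the rollup I expect the hierarchy to produce over the EMP_MANAGER closure table; it is only a sketch, and the fact column (active_contracts) and join key (employee_id) are illustrative names, not the real ones:

-- Sum a measure for every ancestor over all of its descendants, assuming
-- EMP_MANAGER holds one (ancestor_id, member_id) row per ancestor/descendant
-- pair of the parent-child hierarchy.
SELECT em.ancestor_id,
       SUM(s.active_contracts) AS active_contracts
FROM   emp_manager em
JOIN   sales_data  s ON s.employee_id = em.member_id
GROUP BY em.ancestor_id;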

Related

Indexed view vs temp table to improve performance of a seldom-executed query

I have a slow query whose structure is:
select
  fields
from
  table
  join manytables
  join (select fields from tables) as V1 on V1.field = ....
  join (select fields1 from othertables) as V2 on ....
  join (select fields2 from moretables) as V3 on ....
The select subqueries in the last 3 joins are relatively simple, but joining against them takes time. If they were physical tables it would be much better.
So I found out that I could turn the subqueries into indexed views or temp tables.
By temp table I do not mean a table that is rewritten hourly as explained here,
but a temp table that is created just before the query execution.
Now my doubt comes from the fact that indexed views are fine in data warehouses, since maintaining them impacts write performance. This DB is not a data warehouse but a production DB of a non-data-intensive application.
But in my case the above query is not executed often, even though the underlying tables (the tables whose data would become part of the indexed view) are used more often.
In this case is it OK to use indexed views? Or should I favor a temp table?
A table variable with a PRIMARY KEY is also an alternative.
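If it helps, here is a minimal sketch of the "materialize just before the query" options (the indexed-view alternative would instead be a schema-bound view with a unique clustered index that the engine maintains on every write); the table and column names (OtherTables, MoreTables, Field1, Field2) are placeholders rather than the real ones:

-- Temp table created right before the slow query, with an index on the join key.
SELECT Field1, Field2
INTO   #V2
FROM   OtherTables;
CREATE CLUSTERED INDEX IX_V2_Field1 ON #V2 (Field1);

-- Table variable with a PRIMARY KEY, the alternative mentioned above.
DECLARE @V3 TABLE (Field1 int PRIMARY KEY, Field2 int);
INSERT INTO @V3 (Field1, Field2)
SELECT Field1, Field2 FROM MoreTables;

-- The slow query then joins to #V2 / @V3 instead of the subqueries.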

How to improve deletion times in Oracle for a self-referencing table

In our Oracle 11g database we have a table that has a primary key I_Node (int) and also a column called I_Parent_Node (int) that references back to another record in the same table. The root node has I_Parent_Node = null. In this way we form a tree structure of nodes, leaves, branches, whatever you want to call them.
Frequently we need to delete an entire branch of nodes at once, meaning a node and all of its children. At times this is many, many records, say 50,000 or more. Since a cascade delete is not allowed on a self-referencing table, we are forced to delete one by one starting with the leaves and working our way back up the tree. We have experienced hours-long delete times.
We are considering doing a "marking for deletion" technique, where a separate program would clean out the nodes marked for deletion during off-peak hours, but I am interested in whether a database design change or some other Oracle construct could help out here. I am not trained in Oracle aside from what I've learned on the job, and the people who created the database did not have such large quantities in mind. I am open to database design changes since it is not yet a fixed design.
You may want to consider separating the hierarchy structure from the main table. So your main table would just have primary ids (let's call it "ID"), and your hierarchy table would have "ID, ParentID, TreeID". ParentID is that ID's parent node, and TreeID is the highest parent in the tree (level 1).
So, a level 1 node would look like:
ID, ParentID, TreeID
1, [null], 1
A level 2 node would look like:
ID, ParentID, TreeID
2, 1, 1
A level 3 node would look like:
ID, ParentID, TreeID
3, 2, 1
And so on.
You would use Oracle hierarchical queries (CONNECT BY queries) to query or traverse the trees. This table will be very thin (not many columns, these 3 plus maybe some modified dates), so updating these relationships should be much faster and scale better than messing with the main table.
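As a rough sketch, a branch delete against such a thin hierarchy table could then be a single hierarchical statement; the table name is made up, and the columns follow the example above:

-- Remove node 2 and everything underneath it from the hierarchy table.
DELETE FROM hierarchy_table
WHERE  ID IN (SELECT ID
              FROM   hierarchy_table
              START WITH ID = 2
              CONNECT BY PRIOR ID = ParentID);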
You should be able to do this with deferrable constraints and a hierarchical query.
If your foreign key constraint (on I_Parent_Node) is not already deferrable, drop it and recreate it with the keyword "DEFERRABLE".
Here's an example using the EMPLOYEES table from Oracle's examples (I modified the DEPARTMENTS table too so that this would execute; that's not really needed for the example, though):
Drop & Recreate your foreign key if it's not currently deferrable:
alter table employees drop constraint emp_manager_fk;
alter table employees add constraint emp_manager_fk foreign key (manager_id) references employees(employee_id) deferrable;
In your transaction, defer your constraints, and delete using a hierarchical query:
set constraints all deferred;
delete
from employees e
where employee_id in (select employee_id
from employees
start with employee_id = 108
connect by prior employee_id = manager_id);
The "108" is the ID of my "parent" record.
I assume you've already done standard tuning - i.e. are the node and parent node ID columns suitably indexed?
(1) One approach to the problem is to use PL/SQL. Bulk collect the IDs to be deleted, using a hierarchical query that returns the leaf rows first, into an array; then do a bulk delete (FORALL) using the array (a sketch follows after these two options).
(2) Another approach is a soft-delete - mark the rows as "deleted", but never actually delete them. You would need to modify your application (or use Oracle VPD to automatically omit the "deleted" rows from queries). This might work reasonably well if deleting a node is relatively rare; but if you're routinely deleting lots of nodes then this would clutter the table with a lot of old data.
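A minimal PL/SQL sketch of approach (1), assuming the table is called NODES and using the I_Node / I_Parent_Node columns from the question; :start_node is the root of the branch to remove:

DECLARE
  TYPE t_ids IS TABLE OF nodes.i_node%TYPE;
  l_ids t_ids;
BEGIN
  -- Collect the whole branch, deepest rows first, so child rows are always
  -- deleted before their parents.
  SELECT i_node
  BULK COLLECT INTO l_ids
  FROM   nodes
  START WITH i_node = :start_node
  CONNECT BY PRIOR i_node = i_parent_node
  ORDER BY LEVEL DESC;

  -- Bulk delete using the collected IDs.
  FORALL i IN 1 .. l_ids.COUNT
    DELETE FROM nodes WHERE i_node = l_ids(i);
END;
/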

LINQ to Entities Query .Expand

I have the following tables:
TableA, TableB, TableC, TableD, TableE, and they have foreign key relations like
FK_AB (one to many), FK_BC (one to one), FK_CD (one to many), FK_DE (one to one), with navigation properties based on these foreign keys.
Now I want to query TableA and get the records from TableA, TableD and TableE whose Loadedby column equals "System". My query is like the one below:
var query = from A in Context.TableA.Expand("TableB/TableC/TableD").Expand("TableB/TableC/TableD/TableE")
            where A.Loadedby == "System"
            select A;
The above query works fine, except that I want only the records from TableD and TableE whose Loadedby value equals "System"; the above query returns all the records from TableD and TableE that are related to a TableA record satisfying A.Loadedby == "System", and the condition is not checked in the child tables.
Can anyone tell me how to filter the child tables as well?
Currently OData only supports filters on the top level. So in the above example it can only filter rows from TableA. Inside expansions all related rows are always included; there's no way to filter those right now.
You might be able to ask for the expanded entities separately with additional queries (with the right filter) and possibly use batch to group all the queries into one request. But that depends on the actual query you need to send.

DB project - improving performance with relationships

I have two tables, let's call them TableA and TableB. One record in TableA is related to one or more records in TableB. But among them there is also one special record in TableB for each record from TableA (for example the one with the lowest ID), and I want quick access to that special one. Data from both tables isn't deleted - it's a kind of history that is rarely cleared. What's the best way to do this in terms of performance?
I thought of:
1) a two-way relationship, but it will affect insert performance
2) add another table with FK_TableA as the primary key (for each TableA record exactly one TableB record is "special") and a second column FK_TableB, and then create a view
3) add another table with primary key (FK_TableA, FK_TableB), make FK_TableA unique, and then create a view
I'm open to all other ideas :)
4) I'd consider an indexed view to hide the JOIN and the row restriction.
This is similar to your options 2 and 3, but the DB engine will maintain it for you. With a new table you'll either compromise data integrity or have to manage the data via triggers.
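A rough sketch of what that indexed view might look like, assuming the "special" row can be identified by a simple flag column (SQL Server cannot index a view containing MIN() or a subquery, so a "lowest ID" rule would first have to be materialized as such a flag); all names here are made up:

CREATE VIEW dbo.SpecialB
WITH SCHEMABINDING
AS
SELECT a.Id      AS TableA_Id,
       b.Id      AS TableB_Id,
       b.Payload AS Payload
FROM   dbo.TableA a
JOIN   dbo.TableB b ON b.TableA_Id = a.Id
WHERE  b.IsSpecial = 1;
GO

-- Exactly one special TableB row per TableA record, so TableA_Id is unique here;
-- the engine maintains this index on every insert/update to the base tables.
CREATE UNIQUE CLUSTERED INDEX IX_SpecialB ON dbo.SpecialB (TableA_Id);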

Do views only perform the joins that they need to, or all joins always?

I am on an Oracle DB. Let's say I have one view that joins three tables. The view has two fields; each field only needs data from two of the three tables.
If I query the view and return only one field, does the view still join all three tables, or just the two tables that it needs to calculate that field?
Generally it will have to hit the three tables.
Consider
SELECT A.VAL, B.VAL, C.VAL FROM A JOIN B ON A.ID = B.ID JOIN C ON A.ID = C.ID
It is possible for a single ID in "A" to have zero, one, or multiple matches in either B or C. If table "C" were empty, the view would never return a row, so even when querying just A.VAL or B.VAL, it would still need to see whether there was a corresponding row in "C".
The exception is when, because of an enforced referential integrity constraint, the optimizer knows that a row in 'B' will always have a parent row in 'A'. In that case, a select of B.VAL would not need to actually check the existence of the parent row in 'A'. This is demonstrated by this article
That likely depends on the type of join being used. If they are all inner joins, it will definitely need to examine all three tables.
In general, the database engine would join all three tables to ensure it got the right answer.
Oracle will sometimes eliminate one of the tables where this does not change the result.
This can be done if:
- there is a foreign key constraint to the table to be eliminated (i.e. a matching row in that table is guaranteed to be found), and
- the table is otherwise unused, i.e. not selected from, not referenced in the WHERE clause, etc.
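A small sketch of that elimination case (names are made up; the key points are the enabled, validated foreign key and the NOT NULL join column):

-- child.parent_id is a NOT NULL foreign key to parent.id.
CREATE TABLE parent (id NUMBER PRIMARY KEY, val VARCHAR2(30));
CREATE TABLE child  (id NUMBER PRIMARY KEY,
                     parent_id NUMBER NOT NULL REFERENCES parent(id),
                     val VARCHAR2(30));

CREATE VIEW child_parent_v AS
SELECT c.val AS child_val, p.val AS parent_val
FROM   child c
JOIN   parent p ON p.id = c.parent_id;

-- Selecting only the child column lets the optimizer drop the join to PARENT:
-- the foreign key guarantees exactly one matching parent row exists.
SELECT child_val FROM child_parent_v;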
