ODI 12c Integration Knowledge Module Merge when using a reusable mapping

Is there a way to use the Integration Knowledge Module Merge when using a reusable mapping?
I am able to use the Merge IKM when just using datastores, but when I added a reusable mapping the option no longer appears in the IKM drop-down.

On the logical tab of your mapping, click on the target datastore. Under the Target tab in the properties pane, change the Integration Type and set it to Incremental Update or None. Then go back to the Physical design and the Merge IKMs should be available in the drop-down.
The Integration Type is a pre-filter for the loading strategies. I personally prefer to set it to None so it doesn't filter out any KM, and I can have two different Physical designs for the same mapping:
the initial load, with an Insert Append IKM and the truncate option set to true;
the incremental load, with a Merge IKM (a sketch of the generated statement is shown below).
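For reference, here is a minimal sketch of the kind of MERGE statement an Incremental Update/Merge IKM generates; the schema, table and column names are purely hypothetical:
-- hypothetical incremental load: update matching rows, insert new ones
MERGE INTO target_schema.customers tgt
USING staging_schema.customers_stg src
  ON (tgt.customer_id = src.customer_id)
WHEN MATCHED THEN
  UPDATE SET tgt.customer_name = src.customer_name,
             tgt.city = src.city
WHEN NOT MATCHED THEN
  INSERT (customer_id, customer_name, city)
  VALUES (src.customer_id, src.customer_name, src.city);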

Related

Invoking a select script through ODI (Oracle Data Integrator)

May I have your opinion on the queries below, please:
Option 1:
I have a select script handy which fetches data by joining many source tables and performs some transformations like aggregations (GROUP BY), data conversions, substrings, etc.
Can I invoke this script through an ODI mapping so that the results (the transformed data output) can be inserted into the target of the ODI mapping?
Option 2:
Convert the select script into an equivalent ODI mapping by using equivalent ODI transformations, functions, lookups, etc., and use the various tables (the tables in the join clause) as sources of the mapping.
Basically, develop an ODI mapping which is equivalent to the provided select script, plus a target table to insert records into.
I need to know the pros and cons of both options above (if option 1 is possible).
Is it still possible with option 1 to track transformation errors, errors related to joining source tables, WHERE clause conditions, etc. through ODI?
Will the log file for a mapping failure have details as granular as those offered by option 2?
Can I still enable Flow Control in the Knowledge Module and redirect select script errors into the E$_ error tables provided by ODI?
Thanks,
Rajneesh
Option 1: ODI 12c includes that concept out of the box. On the physical tab of a mapping, click on the source node (datastore). Then in the properties pane, under the "Extract Options" menu, there is the CUSTOM_TEMPLATE option. This allows you to enter a custom SQL statement that will be used instead of the code generated by ODI.
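As an illustration, a statement entered in CUSTOM_TEMPLATE could look like the following; the tables and columns are made up and only show the kind of joined, aggregated select that can be plugged in:
-- hypothetical custom extract: join, filter, aggregate and substring
SELECT o.customer_id,
       SUBSTR(c.customer_name, 1, 20) AS short_name,
       SUM(o.amount) AS total_amount
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id
WHERE o.order_date >= DATE '2015-01-01'
GROUP BY o.customer_id, SUBSTR(c.customer_name, 1, 20)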
However, it is probably less maintainable over time than option 2. SQL is less visual than mapping components. Also, if you need to bulk change it, it will be trickier: changing a component in several mappings can be done with the SDK, while changing SQL code would require parsing it. You might indeed have less information in your Operator logs, as the SQL would be seen as just one block of code. It also wouldn't provide any lineage.
I believe using Flow Control would work but I haven't tested it.
Option 2 would take more time to complete, but with it you would benefit from all the functionalities of ODI.
My own preference would be to occasionally use option 1 for really complex SQL queries but to use option 2 for most of the normal use cases.

How to convert an ODI interface into an SQL query?

I am working on an ODI 10 project which has 153 interfaces divided into a few packages. What I want to do is create a PL/SQL procedure with INSERT statements instead of having 153 interfaces. These interfaces are more or less similar, i.e. they have the same source table and the same target (in my case the target is an Essbase Hyperion cube); the transformations and filters are different. So anytime I have to update something like a column value, I have to open all 153 interfaces and update each and every one of them. In a procedure I could do this much more easily; I could just replace all the values.
So I feel that it's best that I create a PL/SQL procedure, as I can maintain the code better that way.
Is there a way to convert an interface into a SQL query? I want a direct data dump; I don't want to do a complex incremental load. I am just looking to truncate the table and load the data.
Thanks in advance.
It is possible to get the SQL code generated by ODI from the Operator in the log tables. It can also be retrieved in the repository.
Here is an example of a query for ODI 12c (10g has been out of support for a long time now):
-- one row per task; DEF_TXT holds the code generated for the task
SELECT s.sess_no, s.nno, s.step_name, t.scen_task_no, t.def_txt
FROM SNP_SESS_STEP s
JOIN SNP_SESS_TASK_LOG t ON t.sess_no = s.sess_no AND t.nno = s.nno;
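To narrow it down to the run of one specific mapping, you can additionally join SNP_SESSION and filter on the session name. This is a sketch based on a typical 12c work repository; check the column names against your own version:
SELECT t.def_txt
FROM SNP_SESSION ss
JOIN SNP_SESS_STEP s ON s.sess_no = ss.sess_no
JOIN SNP_SESS_TASK_LOG t ON t.sess_no = s.sess_no AND t.nno = s.nno
WHERE ss.sess_name = 'MY_MAPPING';  -- hypothetical session name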
Starting with ODI 11g, it is also possible to simulate an execution instead of doing an actual execution. This functionality just displays the generated code in ODI Studio instead of running it.
Finally, upgrading to a more recent version of ODI would allow you to use the ODI SDK. With it you could programmatically apply changes to all the mappings in one go. Reusable mappings could also help, as it sounds like some logic is implemented multiple times. That would make these kinds of changes easier while keeping the benefits of an ELT tool (scheduling, monitoring, visual representation of flows, cross-technology support, ...).
Disclaimer: I'm an Oracle employee

ODI 12c LKM in mapping without IKM

In ODI 12c an LKM is used in a mapping in order to load the data from the source to the staging area, but I do not need an IKM to insert the data from the staging area into the target. Can an ODI mapping do the first phase only, that is, the LKM phase? Running the two KMs in my case doubles the time.
That's possible, but you'd need to use an LKM that is written that way.
I don't think there is one out of the box, but you should be able to easily write your own.
The main thing is that in the Java BeanShell code (see A. Substitution API Reference) you would need to change the call from the Collection Table:
…INTO TABLE <%=snpRef.getTable("L", "COLL_NAME", "W")%>
to the Target Table:
…INTO TABLE <%=snpRef.getTable("L", "TARG_NAME", "A")%>
That's the main thing. You would also need to adjust the fields, etc. The post ODI - Load data directly from the Source to the Target without creating any Temporary table describes the steps in more detail, but once you get an idea of how powerful the API is, you can pretty much do anything.
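To give an idea of the shape of such a step, here is a rough sketch of a "load straight into the target" task written with the substitution API; the exact patterns and options depend on your technologies and the KM you start from, so treat it as a starting point rather than a finished KM:
-- insert directly into the target table resolved by ODI at run time
insert into <%=snpRef.getTable("L", "TARG_NAME", "A")%>
(
  <%=snpRef.getColList("", "[COL_NAME]", ",\n  ", "", "")%>
)
select
  <%=snpRef.getColList("", "[EXPRESSION]", ",\n  ", "", "")%>
from <%=snpRef.getFrom()%>
where (1=1)
<%=snpRef.getJoin()%>
<%=snpRef.getFilter()%>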

Flexible hierarchies in Saiku Analytics

I have just started working on Mondrian and I am having a hard time understanding how to make hierarchies work.
Suppose that I have a Hospital dimension and I want to sum the number of hospitals that are public or private in a certain state. I also have my hospital fact table with the appropriate measure, hospital_amount.
The hierarchy I have built in the Schema Workbench is shown below:
1- State
2- Flag (Private or Public)
3- City
4- Hospital
Done this way, I can analyse things in the Saiku Analytics plugin without major concerns, provided that I maintain the presentation order of the attributes (State, Flag, City, ...). But things turn a little complicated if I want to change the order in which the fields are presented in the report; in other words, what if I want to build another report in Saiku without using the flag attribute?
Even if I hide the flag, Saiku will continue using it to categorize the rest of the attributes from the hierarchy (City and Hospital).
Some people said that I need to create another hierarchy in the Schema Workbench only for the flag, but this won't let me use the flag in the drill menu of Hospital.
Is there any way to build reports in Saiku without being stuck with the hierarchy order, I mean choosing fields from the hierarchy in a flexible way?
Thanks in advance!
You don't mention if you are using Saiku as a BI server plugin or standalone.
If you are using the standalone version, which uses Mondrian 4, you can use the "has hierarchy" attribute in your schema instead of defining a strict hierarchy. This effectively creates a hierarchy for each level, and they can all act independently of one another.
In Mondrian 3 you could do the same manually, by defining a separate hierarchy for each attribute.

BIRT Scripted Data Source using existing JDBC DataSource

I know that my overall problem is generally approached using two of the more common solutions, such as a join data set or a sub-table/sub-report. I have looked at those and I am not sure they will work effectively here.
Background:
The JDBC data source has local data which includes a series of ids that reference records in a master data repository interfaced via a web service. This is where the need for a scripted data source arises. The data can be filtered on attributes within the local JDBC data and/or the extended data from the web service. The complication is that my only interface to the web service is the id argument.
Ideal Solution:
Setting aside creating a reporting table or other truly desirable scenarios, I am looking to create a unified data source through a single scripted data source that will handle all the complexities. This leaves the report generation and parameter creation a bit cleaner, hopefully. The idea is to leverage the JDBC query as well as the web service queries in the scripted data source, do the filtering and joins there, and create that single unified view.
I tried using the following code as a reference to use the existing JDBC connection in the BIRT report definition to execute the query. However, I think my breakdown of what should go in open vs. fetch may be giving me errors, given that this code came from beforeFactory for a completely different purpose... truth is, I see no errors; it just returns 0 records.
a link
I have also found a code snippet to dynamically load a JDBC connection, but that seems a bit obtuse and a ton of overhead for what I need to do. a link
In short: how in all-that-is-holy do you simply run a query against a database within a scripted data source, if you wanted to? The merit of doing that is another issue, but technically how?
Thanks in Advance!