Make a target act as a source in ODI 12c flow mapping - Oracle

In ODI 12c a mapping can load data from a source to a target, but sometimes the target needs to serve as a source for another target within the same mapping,
i.e.
Source -> target (acting as a source) -> target, and so on...
What is the best methodology to achieve that? I have read about the reusable mapping and lookup components, but what would be the most feasible and systematic way?

You can do this in a single mapping, but you should have multiple data models for your sources and targets.
Here is an example with two different sources and two different targets, as you can see in the diagram below:
we have two sources, one file technology (DM_FILE_AS_SOURCE data model) and one Oracle technology (DM_ORACLE_AS_SOURCE_TARGET data model), and two targets, one Oracle technology (DM_ORACLE_AS_SOURCE_TARGET data model, which is reused as both target and source) and another Oracle technology (DM_ORACLE_AS_TARGET data model).
The mapping is very simple, uses the "Control Append" integration type, and works well.
Hope this sample helps you.
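Conceptually, the flow this produces behaves like two chained INSERT ... SELECT statements, where the first target is read back as the source for the second. A rough illustration in plain JDBC follows; the connection details and table names are placeholders for this sketch, not the actual code ODI generates:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ChainedTargets {
        public static void main(String[] args) throws Exception {
            // Placeholder connection; ODI would resolve this through its topology
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "odi_user", "secret");
                 Statement stmt = conn.createStatement()) {

                // Step 1: load the first target from the original source
                stmt.executeUpdate(
                    "INSERT INTO oracle_as_source_target (id, name) " +
                    "SELECT id, name FROM file_as_source_ext");

                // Step 2: the first target now acts as the source for the second
                stmt.executeUpdate(
                    "INSERT INTO oracle_as_target (id, name) " +
                    "SELECT id, name FROM oracle_as_source_target");
            }
        }
    }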

Related

How to access 12c report metadata?

I am looking for a method (or, even better, a DW table) that contains report properties such as Name, Description, Type, Location, etc.
I have looked through many tables but cannot find this information. I am building a web portal that includes hyperlinks to all reports on the server.
Here is an example of the report properties I am looking for:
Unfortunately, the definitions you're looking for are not stored at the database level, which is super lame, but that's the way it is. They're stored in the RPD file and the web catalog at the OS level.
The web catalog is located:
on 10g: OracleBIData/web/catalog/
on 11g: $ORACLE_INSTANCE/bifoundation/OracleBIPresentationServicesComponent/catalog/
on 12c: $ORACLE_HOME/user_projects/domains/bi/bidata/service_instances/ssi/metadata/content/catalog, where ssi is a service instance.
If you descend into one of those directory structures you'll see files that are named with a bunch of punctuation symbols plus the name of the report they represent.
Just to clarify the "lame" storage: what the OP is asking for is in the presentation catalog; the RPD has nothing to do with it.
And to clarify even further: every object stored in the presentation catalog is physically represented by two files on disk: one file without a file extension, which holds the object's XML definition, and one file with an .atr extension, which contains the object's properties - what the OP is looking for - as well as the object's access permissions.
Ranting's fine, but please be precise ;-)
For what it's worth, in E-Business Suite, tables start with XDO_
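Given that two-file layout, a quick way to inventory reports for a portal like the one described is to walk the catalog directory and list every object that has a paired .atr file. A minimal sketch in Java, assuming a 12c-style catalog root (the path and everything else here are placeholders to adapt):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class CatalogInventory {
        public static void main(String[] args) throws IOException {
            // Placeholder: point this at your service instance's catalog root
            Path catalogRoot = Paths.get(
                "/u01/oracle/user_projects/domains/bi/bidata/service_instances/ssi/metadata/content/catalog");

            try (Stream<Path> paths = Files.walk(catalogRoot)) {
                paths.filter(p -> p.toString().endsWith(".atr"))
                     // The sibling file without the .atr extension holds the
                     // object's XML definition; its name encodes the object name.
                     .map(p -> catalogRoot.relativize(p))
                     .forEach(rel -> System.out.println(
                         rel.toString().replaceAll("\\.atr$", "")));
            }
        }
    }

The printed names still carry the catalog's escaping (the punctuation symbols mentioned above), so you may want to decode them before displaying them in the portal.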

How to copy or split an SSAS project or data source view

In one of the SSAS solutions that I have inherited there are two cubes using the same data source and data source view. The data source points to database A, which contains the tables for both cubes. The issue at hand is that the two cubes represent data from two different ERP systems. Because of a lack of upfront planning, one database ended up containing the tables for both systems. The server hosting the database is going to be decommissioned, which is a good moment to reorganize things for the better.
On the new server I want to create separate databases for the ERP systems, which results in two different databases containing the data for the cubes.
I now want to split the SSAS project in Visual Studio so each cube has its own solution. In order to keep the relational model in the data source view intact, I hoped it would be possible to split or copy the view or the project.
So, how can I copy or split an SSAS project or data source view?
If there are better alternatives to deal with this situation, I am open to those as well.
Well, that was easy. I just had to create a new branch in Team Foundation Server and remove all the projects and attributes that I did not need.

ODI 12c LKM in mapping without IKM

In ODI 12c an LKM is used in a mapping to load data from the source to the staging area, but I do not need an IKM to insert the data from staging into the target. Can an ODI mapping run only the first phase, i.e. the LKM phase? Running both KMs in my case doubles the time.
That's possible, but you'd need to use an LKM that is written that way.
I don't think there is one out of the box, but you should be able to write your own easily.
The main thing is that in the Java BeanShell code (see Appendix A, Substitution API Reference) you would need to change the call from the collection table:
…INTO TABLE <%=snpRef.getTable("L", "COLL_NAME", "W")%>
to the target table:
…INTO TABLE <%=snpRef.getTable("L", "TARG_NAME", "A")%>
That's the main change (the third argument of getTable selects the schema: "W" resolves to the work schema, while "A" lets ODI choose the appropriate one automatically). You would also need to adjust the fields, etc. The post ODI - Load data directly from the Source to the Target without creating any Temporary table describes the steps in more detail, but once you get an idea of how powerful the substitution API is, you can do pretty much anything.

BIRT Scripted Data Source using existing JDBC DataSource

I know that my overall problem is generally approached using one of the more common solutions such as a joint data set or a sub-table/sub-report. I have looked at those, and I am not sure they will work effectively.
Background:
The JDBC data source has local data which includes a series of ids that reference records in a master data repository accessed via a web service; this is where the need for a scripted data source arises. The data can be filtered on attributes within the local JDBC data and/or the extended data from the web service. The complication is that my only interface to the web service is its id argument.
Ideal Solution:
Setting aside a reporting table or other truly desirable scenarios, I am looking to create a unified data source through a single scripted data source that handles all the complexities. This keeps the report design and parameter creation a bit cleaner, hopefully. The idea is to leverage the JDBC query as well as the web service queries inside the scripted data source, do the filtering and joins there, and expose that single unified view.
I tried using the following code as a reference for using the existing JDBC connection in the BIRT report definition to execute the query. However, I think my breakdown of what should go in open vs. fetch may be giving me errors, given that the code came from beforeFactory and served a completely different purpose... truth is, I see no errors; it just returns 0 records.
a link
I have also found a code snippet to dynamically load a JDBC connection, but that seems a bit obtuse and a ton of overhead for what I need to do. a link
In short: how in all that is holy do you simply run a query against a database from within a scripted data source, if you wanted to? The merit of doing so is another issue, but technically, how?
Thanks in Advance!
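For reference, the open/fetch/close lifecycle the question is wrestling with maps onto plain JDBC like this. A minimal sketch follows, in Java for clarity; in a BIRT scripted data set the same calls would live in the JavaScript open, fetch, and close events, and the connection details and column names here are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ScriptedDataSetLifecycle {
        static Connection conn;
        static Statement stmt;
        static ResultSet rs;

        // open: acquire the connection and execute the query exactly once
        static void open() throws Exception {
            // Placeholder URL/credentials -- in practice read them from
            // report parameters so they stay in one place
            conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "report_user", "secret");
            stmt = conn.createStatement();
            rs = stmt.executeQuery("SELECT id, name FROM local_table");
        }

        // fetch: called once per row; copy one row, report whether one existed
        static boolean fetch(java.util.Map<String, Object> row) throws Exception {
            if (rs.next()) {
                row.put("id", rs.getInt("id"));
                row.put("name", rs.getString("name"));
                return true;   // a row was produced
            }
            return false;      // no more rows -> stop calling fetch
        }

        // close: release JDBC resources
        static void close() throws Exception {
            rs.close();
            stmt.close();
            conn.close();
        }

        public static void main(String[] args) throws Exception {
            open();
            java.util.Map<String, Object> row = new java.util.HashMap<>();
            while (fetch(row)) {
                System.out.println(row);
            }
            close();
        }
    }

A common cause of the "no errors but 0 records" symptom is executing the query inside fetch (recreating the result set on every call) or forgetting to return true after populating a row.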

How should business rules be implemented in ETL?

I work on a product that imports data from a mainframe using SSIS via flat files. The SSIS packages use a staging database to transform the flat file data and then call stored procedures in the ODS to load the transformed data. There is a potential plan to route all ETL data through a .NET service layer (instead of directly into the ODS via stored procedures) to centralize business rules and activity. I'm looking for input on this approach, including dissenting opinions.
Sounds fine; you're turning basic ETL into ETVL, adding a "validate" step. Normally this is considered part of the "transform" stage, but I prefer to keep that stage purer when I conceptualize an architecture like this: transform turns the raw fields that were pulled out and chopped up in the extract stage into objects of my domain model; verifying that those objects are in a valid state for the system is validation.
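A minimal sketch of that transform/validate separation, with all names hypothetical (Java used for illustration; the stack in the question is SSIS/.NET):

    import java.util.ArrayList;
    import java.util.List;

    public class EtvlExample {
        // Domain object produced by the transform stage
        record Customer(String id, String email, int ageYears) {}

        // Transform: raw extracted fields -> domain object (no business rules yet)
        static Customer transform(String[] rawFields) {
            return new Customer(rawFields[0].trim(),
                                rawFields[1].trim().toLowerCase(),
                                Integer.parseInt(rawFields[2].trim()));
        }

        // Validate: is this object in a state the system accepts?
        // Centralizing checks like these is the point of the proposed service layer.
        static List<String> validate(Customer c) {
            List<String> errors = new ArrayList<>();
            if (c.id().isEmpty())         errors.add("missing id");
            if (!c.email().contains("@")) errors.add("malformed email");
            if (c.ageYears() < 0)         errors.add("negative age");
            return errors;
        }

        public static void main(String[] args) {
            Customer c = transform(new String[] {" 42 ", "USER@Example.com", "35"});
            List<String> errors = validate(c);
            System.out.println(errors.isEmpty() ? "load " + c : "reject: " + errors);
        }
    }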
