I am working on an ODI 10 project which has 153 interfaces divided into a few packages. What I want to do is create a PL/SQL procedure with INSERT statements instead of having 153 interfaces. These interfaces are more or less similar, i.e. they have the same source table and the same target (in my case the target is a Hyperion Essbase cube); only the transformations and filters differ. So any time I have to update something like a column value, I have to open all 153 interfaces and update each and every one of them. In a procedure I could do this easily; I could just replace all the values.
So I feel that it's best that I create a PL/SQL procedure, as I can maintain the code better that way.
Is there a way to convert an interface into a SQL query? I want a direct data dump; I don't want to do a complex incremental load. I am just looking to truncate the table and load the data.
Thanks in advance.
It is possible to get the SQL code generated by ODI from the Operator, in the repository log tables.
Here is an example of a query for ODI 12c (10g has been out of support for a long time now):
SELECT s.sess_no, s.nno, step_name, scen_task_no, def_txt
FROM SNP_SESS_STEP s, SNP_SESS_TASK_LOG t
WHERE s.sess_no = t.sess_no
AND s.nno = t.nno;
Starting with ODI 11g, it is also possible to simulate the execution instead of doing an actual execution. This functionality just displays the generated code in ODI Studio instead of running it.
Finally, upgrading to a more recent version of ODI would allow you to use the ODI SDK. With it you could programmatically change all the mappings in one go. Reusable mappings could also help, as it sounds like some logic is implemented multiple times. That would make these kinds of changes easier while keeping the benefits of an ELT tool (scheduling, monitoring, visual representation of flows, cross-technology support, ...).
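If you do go the PL/SQL route from the question, the SQL retrieved from the logs could be wrapped in a truncate-and-load procedure along these lines (a minimal sketch; all table, column and procedure names here are hypothetical):

```sql
-- Hypothetical truncate-and-load procedure; paste the SQL retrieved
-- from the ODI logs in place of the SELECT below.
CREATE OR REPLACE PROCEDURE load_essbase_stage IS
BEGIN
  EXECUTE IMMEDIATE 'TRUNCATE TABLE essbase_stage';
  INSERT INTO essbase_stage (dim_account, dim_period, measure_value)
  SELECT src.account_code,
         src.period_code,
         SUM(src.amount)          -- interface-specific transformation
  FROM   source_table src
  WHERE  src.load_flag = 'Y'      -- interface-specific filter
  GROUP BY src.account_code, src.period_code;
  COMMIT;
END load_essbase_stage;
/
```

Note that a relational staging table is assumed here; loading the Essbase cube itself would still need a dedicated load mechanism.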
Disclaimer: I'm an Oracle employee.
Related
May I have your opinion on the queries below, please:
Option 1:
I have a select script handy which fetches data by joining many source tables and performs some transformations such as aggregation (GROUP BY), data conversion, substrings, etc.
Can I invoke this script through an ODI mapping and have the results (the transformed data output) inserted into the target of the ODI mapping?
Option 2:
Convert the select script into an equivalent ODI mapping by using equivalent ODI transformations, functions, lookups, etc., and use the various tables (the tables in the join clause) as sources of the mapping.
Basically, develop an ODI mapping that is equivalent to the provided select script, plus a target table to insert the records into.
I need to know the pros and cons of both options (if option 1 is possible).
Is it still possible with option 1 to track transformation errors, errors in joining the source tables, errors in the WHERE clause conditions, etc. through ODI?
Will the log file for a mapping failure have details as granular as those offered by option 2?
Can I still enable Flow Control in the Knowledge Module and redirect select-script errors into the E$_ error tables provided by ODI?
Thanks,
Rajneesh
Option 1: ODI 12c includes that concept out of the box. On the physical tab of a mapping, click on the source node (datastore). Then, in the properties pane, there is the CUSTOM_TEMPLATE option under the "Extract Options" menu. It allows you to enter a custom SQL statement that will be used instead of the code generated by ODI.
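For example, the select script from option 1 could be pasted into CUSTOM_TEMPLATE more or less as-is (the tables and columns below are purely illustrative):

```sql
-- Illustrative custom extract statement that replaces the ODI-generated
-- source SQL: joins, aggregation and substring done in plain SQL.
SELECT o.customer_id,
       SUBSTR(c.customer_name, 1, 30) AS customer_name,
       SUM(o.amount)                  AS total_amount
FROM   orders o
JOIN   customers c
  ON   c.customer_id = o.customer_id
GROUP BY o.customer_id,
         SUBSTR(c.customer_name, 1, 30)
```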
However, it is probably less maintainable over time than option 2. SQL is less visual than mapping components. Also, if you need to bulk-change it, it will be trickier: changing a component in several mappings can be done with the SDK, whereas changing SQL code would require parsing it. You might indeed have less information in your Operator logs, as the SQL would be seen as just one block of code. It also wouldn't provide any lineage.
I believe using Flow Control would work but I haven't tested it.
Option 2 would take more time to complete but with that you would benefit from all the functionalities of ODI.
My own preference would be to use option 1 occasionally for really complex SQL queries, but to use option 2 for most normal use cases.
Is there any way we can update the values in a BIRT report which in turn will update the database? We need to present a report generated from Microsoft SQL Server to the client. We tried providing the report in Excel; however, our client changes the format and it is difficult to consume it again in our proprietary tool (which is Microsoft SQL based). Is there any way we can achieve this? The client should update the values in the report and they should get reflected in the DB.
While it's possible to write back to the DB from BIRT using a servlet (see Eclipse Community Forum), I don't know of a way for BIRT to track the changed values.
While it's difficult to compare Excel files, it should be simpler to create CSV files from these Excel files and compare the CSV files independently of Excel formatting changes.
I see the gathering of value changes and writing them back to the DB as an independent, separate workflow not related to the reporting.
Reporting tools are made for generating output only.
A general automated mechanism is impossible, if you think about it from a more abstract point of view:
There's data D in the database (usually spread across several tables T1, ..., Tn, and records R1, ..., Rm).
The report output data O = (o1, o2, ...) is the result of a more or less complex (i.e. non-trivial) function f(R1, ..., Rm).
An automatic back-propagation mechanism of any kind, like the one you dream of, would have to know what changing the value of o1 from "spam" to "eggs" means for R1, ..., Rm.
... Or even for records which were not selected by f, for example if the user changed the value of a primary key column.
This is only possible if the function f is bijective, but usually f isn't. Even if it is, the task of inverting a non-trivial function is very hard. For example, if o1 is an aggregate such as a sum over many records, there are arbitrarily many ways to distribute a changed total back over those records.
Thus, if you want to let the user change values and persist the changes inside the DB, you need some kind of database UI or some kind of import interface.
Depending on your database, it might be as trivial as letting the user work with Oracle SQL Developer or similar tools which support importing data from Excel sheets.
However, these tools are intended for SQL developers, as the name implies.
OTOH, if all you want is to perform DML statements from BIRT, this is possible indirectly: you can write stored procedures in the database that do the DML work, and call these procedures from BIRT (use a JDBC Stored Procedure Query instead of a JDBC SQL Select Query).
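As a minimal sketch of that indirect approach (procedure, table and column names are assumptions, not from the original setup), a write-back procedure on SQL Server could look like:

```sql
-- Hypothetical SQL Server write-back procedure called from BIRT.
CREATE PROCEDURE dbo.update_report_value
    @record_id INT,
    @new_value NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.report_data
    SET    value_col = @new_value
    WHERE  id = @record_id;
END
```

A BIRT data set of type JDBC Stored Procedure Query could then invoke it as {call dbo.update_report_value(?, ?)} with the two arguments bound to report parameters.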
In ODI 12c, an LKM is used in a mapping to load the data from the source to the staging area, but I do not need an IKM to insert the data from the staging area into the target. Can an ODI mapping do the first phase only, i.e. the LKM phase? Using the two KMs in my case doubles the time.
That's possible, but you'd need to use an LKM that is written that way.
I don't think there is one out of the box, but you should be able to write your own easily.
The main thing is that in the Java BeanShell code (see A. Substitution API Reference) you would need to change the call from the collection table:
…INTO TABLE <%=snpRef.getTable("L", "COLL_NAME", "W")%>
to the target table:
…INTO TABLE <%=snpRef.getTable("L", "TARG_NAME", "A")%>
That's the main thing. You would also need to adjust the fields, etc. The post ODI - Load data directly from the Source to the Target without creating any Temporary table describes the steps in more detail, but once you get the idea of how powerful the API is, you can pretty much do anything.
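Putting it together, the insert task of such an LKM could combine the substitution API calls roughly like this (a sketch only; check the exact methods and column patterns for your technology against the Substitution API Reference):

```sql
/* Hypothetical LKM task writing straight to the target table;
   the <%= ... %> expressions are resolved by ODI at code generation time. */
insert into <%=snpRef.getTable("L", "TARG_NAME", "A")%>
(
  <%=snpRef.getColList("", "[COL_NAME]", ",\n  ", "", "")%>
)
select
  <%=snpRef.getColList("", "[EXPRESSION]", ",\n  ", "", "")%>
from  <%=snpRef.getFrom()%>
where (1=1)
<%=snpRef.getFilter()%>
<%=snpRef.getJoin()%>
<%=snpRef.getGrpBy()%>
<%=snpRef.getHaving()%>
```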
As an organization we are moving towards the purchase of ODI as an ELT tool.
We have plenty of PL/SQL resources, but I have heard ODI is powerful enough at data manipulation to replace much of what was previously done in PL/SQL.
What are its strengths? And weaknesses?
And can it completely do away with coding the data transformation in PLSQL?
No, it can't do away with it completely; however, you might be 99% of the way there.
It's actually a tricky question, as PL/SQL might be submitted by ODI too.
I would reserve PL/SQL for defining functions/procedures (if you REALLY need to) to be called later by ODI.
This should NEVER be something immediately related to ETL like INSERT INTO … SELECT … FROM … - that's where ODI fits the bill perfectly.
The only justified cases I came across during my ODI experience (9 years) were:
- creating PL/SQL function to authenticate (and later authorize through OBIEE) an LDAP/AD user
- creating helper functions to be later called by ODI DQ(CKM) modules like is_number, is_date
- creating XML files directly in the DB (even with the newer ODI XML driver you might still find it's best to use the native DB XML API/functionality to produce XML) - for performance reasons. Other direct file operations (load/unload) could be done in this way.
- creating my own (optimized) hierarchy traversal query for performance reasons (it beat the standard Oracle SQL 'Recursive Subquery Factoring' feature by about 1000:1)
It's up to you whether you want to make a reusable piece of logic in PL/SQL and call it from ODI, or code it in ODI directly (in PL/SQL form).
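As an illustration of the helper-function case above, an is_number check for a CKM could be sketched like this (the exact name and signature are assumptions):

```sql
-- Hypothetical DQ helper: returns 1 if the string converts to a number,
-- 0 otherwise, so a CKM condition can flag non-numeric rows.
CREATE OR REPLACE FUNCTION is_number(p_value IN VARCHAR2) RETURN NUMBER IS
  l_num NUMBER;
BEGIN
  l_num := TO_NUMBER(p_value);
  RETURN 1;
EXCEPTION
  WHEN VALUE_ERROR OR INVALID_NUMBER THEN
    RETURN 0;
END is_number;
/
```

A CKM condition such as is_number(src_col) = 1 would then route offending rows to the E$_ table.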
I'm learning how to implement change data capture in Oracle. However, not being a DB specialist but rather a dev, I find the process tedious compared to the other things that I have to do. I end up doing it because my DBA/DevOps don't want to take care of it.
Hence I was wondering if there is any tool that can help set up Oracle change data capture; in other words, a simple graphical user interface that would write all the code for me: creation of change tables, PL/SQL scripts, etc.
Many thanks
topic duplicated in: dba.stackexchange
What problem are you trying to solve?
How (when) will the CDC data be consumed?
Are you planning to use something akin to: Oracle 11.1 CDC doc
Be sure to heed: Oracle 11.2 CDC Warning
"Oracle Change Data Capture will be de-supported in a future release of Oracle Database and will be replaced with Oracle GoldenGate. Therefore, Oracle strongly recommends that you use Oracle GoldenGate for new applications."
The company I work for, Attunity, has a pretty slick GUI CDC tool called "Replicate".
It can directly apply changes to a selected target DB, or store the changes to be applied later.
It supports many sources (Oracle, SQL Server, DB2, ...) and many targets (Oracle, SQL Server, Netezza, Vertica, ...).
Define your source and target DB, search for and select the source tables, and one click to go.
Optional transformations are available, such as renaming tables and columns, dropping and adding columns, and calculating values.
Regards,
Hein.