Oracle ODI: Creating Tables with Dynamic Names using a Knowledge Module

I need to create auditing tables in Oracle using a Loading Knowledge Module (LKM).
Knowledge Modules typically create various tables, triggers and views which are named dynamically, e.g. C$_tablename, J$_tablename, T$_tablename, JV$_tablename, etc.
I would like to do something similar for my auditing tables, i.e. all audit tables would be called "tablename_audit", but I do not know how to set this up in the LKM code.
As an example, the following LKM code is used to create a C$ work table:
create table <%=odiRef.getTable("L", "COLL_NAME", "A")%>
(
<%=odiRef.getColList("", "[CX_COL_NAME]\t[DEST_WRI_DT] NULL", ",\n\t", "","")%>
)
And the following IKM code creates an I$ flow table:
create table <%=odiRef.getTable("L", "INT_NAME", "W")%>
(
<%=odiRef.getColList("", "[COL_NAME]\t[DEST_WRI_DT] NULL", ",\n\t", "", "")%>
,IND_UPDATE char(1)
)
INT_NAME and COLL_NAME appear to be constants defined in the Substitution API, as specified in the odiRef Substitution API documentation.
So, how can I use the knowledge module to create similar tables with dynamic names in an Oracle Database?
Thank you.

I managed to solve this as follows.
<%=odiRef.getTable("L", "TARG_NAME", "A")%> returns the target table name in DATABASE."tablename" format.
I therefore added a step to my LKM which uses Jython to extract the table name from the DATABASE."tablename" string, append "_audit" to it, and save the resulting DATABASE."tablename_audit" string in a Jython variable, as shown below.
# The target table name arrives in DATABASE."tablename" format
targTableName = '<%=odiRef.getTable("L", "TARG_NAME", "A")%>'
# Split on the double quotes to isolate the bare table name
splitStr = targTableName.split('"')
# Reassemble as DATABASE."tablename_audit"
JYTHON_AUDIT_TABLE = splitStr[0] + '"' + splitStr[1] + '_audit"'
However, since Jython variables cannot be used from SQL scripts in ODI, I added another step to my LKM which fetches the Jython variable into a Java variable (which can then be used by ODI objects) using ODI Expert's Jython to Java API.
# Jython step: publish the Jython variable through the getInfo API
import api.getInfo as info;
info.setJythonVariable(JYTHON_AUDIT_TABLE);
<#
// Java BeanShell step: read the published value into a Java variable
import api.getInfo;
String JAVA_AUDIT_TABLE = getInfo.getJythonVariable();
#>
I could then easily use the JAVA_AUDIT_TABLE variable in my SQL scripts to create the required tables in my target database.
create table <#=JAVA_AUDIT_TABLE#>...
insert into <#=JAVA_AUDIT_TABLE#>...
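For illustration, once both steps have run, the substituted statement comes out along these lines (the schema, table and column names here are hypothetical):
create table DATABASE."customer_audit"
(
    CUST_ID    number(10)    NULL,
    CUST_NAME  varchar2(100) NULL
)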

I am in a similar situation, but with columns. I followed your steps and I am running into an error when setting up the Java variable. I might not be using the correct technology, or I am doing something stupid.
I used the Java BeanShell technology in the command on source and used the code snippet you provided.
It would be really helpful if you could provide more information, such as the technology you used in the command on source and the command on target.

Related

FireDAC "table or view does not exist" when insert into ORACLE TABLE Delphi Belin 10.1 upd 2

We are migrating our codebase from Delphi XE3 with FireDAC 8.0.5 to Delphi Berlin 10.1 Upd 2 with FireDAC 15.0.1 (Build 86746). Everything works smoothly using MS SQL Server, but using Oracle it has been another story.
Throughout the application source code we use lots of TAdQuery with sql instructions like
AdQuery1.Sql.Text := 'SELECT FIELD1, FIELD2 FROM TABLE1';
In order to insert a record, we use Append or Insert methods, like this
AdQuery1.Insert;
//or
AdQuery1.Append;
Just after invoking its Post method, the component internally creates an INSERT sql statement, that goes like this
INSERT INTO TABLE1 (FIELD1, FIELD2) VALUES(:FIELD1, :FIELD2)
So the record gets inserted successfully.
Now, using TFdQuery in Delphi Berlin, the component internally creates an INSERT sql statement, like this
INSERT INTO USERNAME.TABLE1 (FIELD1, FIELD2) VALUES(:FIELD1, :FIELD2)
Failing with a [FireDAC][Phys][Ora] ORA-00942: table or view does not exist
This happens because in our Oracle database, TABLE1 is created in a schema called MAIN_SCHEMA, and we access it by using a public synonym.
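In other words, the database-side setup looks something like this (the grant is illustrative; the names follow the description above):
-- TABLE1 physically lives in MAIN_SCHEMA and is reached through a public synonym
CREATE PUBLIC SYNONYM TABLE1 FOR MAIN_SCHEMA.TABLE1;
GRANT SELECT, INSERT ON MAIN_SCHEMA.TABLE1 TO PUBLIC;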
Trying to find a workaround, we compared the FireDAC source code and found that:
in Delphi XE3, the unit uADDAptManager.pas, in its function TADDAptTableAdapter.GetUpdateRowCommand, calls oConn.CreateCommandGenerator(oCmdGen, nil);
in Delphi Berlin, the unit FireDAC.DApt.pas, in its function TFDDAptTableAdapter.GetUpdateRowCommand, calls oConn.CreateCommandGenerator(oCmdGen, GetSelectCommand);
Whenever that second parameter (called ACommand: IFDPhysCommand) is not nil, the table name is returned with the user name prepended (in a function called TFDPhysCommandGenerator.GetFrom).
If we add 'MetaCurSchema=MAIN_SCHEMA' to the TFDConnection params, it works for the applications that do not use a pooled connection. However, we have several processes that use a pooled connection with the same params, including the MetaCurSchema param, and for those it doesn't work.
What can we do?
thanks for your help
What I understand is that you would do better making the connection avoid the use of any schema name rather than specifying one, especially since you already use public synonyms.
So, according to the documentation:
Full object names
FireDAC supports full object names, which include the catalog and/or schema names.
When a short object name is specified to StoredProcName, TableName, etc, they will be expanded into the full object names, using the current catalog and/or schema names. To override or avoid usage of the current catalog and/or schema names, use the MetaCurCatalog and MetaCurSchema connection definition parameters. For example:
[Oracle_Demo]
DriverID=Ora
...
MetaCurCatalog=*
MetaCurSchema=*
~ Source: Object Names (FireDAC) - docWiki
MetaCurSchema
Specifies the current schema for the application. If not specified, then its value will be received from the DBMS. When an application asks for metadata and does not specify a schema name, FireDAC will implicitly use the current schema.
If MetaCurSchema is '*', then schema names will be omitted from the metadata parameters.
~ Source: Common Connection Parameters (FireDAC) - docWiki
That asterisk (*) should do the trick, let us know if that's the case.
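For the pooled processes, the parameter has to be part of the connection definition the pool is built from, not of an individual TFDConnection. A minimal sketch, assuming the definition is registered in code through FDManager (the definition name is illustrative):
uses
  FireDAC.Comp.Client, FireDAC.Stan.Intf;

procedure RegisterPooledOracleDef;
var
  oDef: IFDStanConnectionDef;
begin
  // pooled connections take their params from the definition
  oDef := FDManager.ConnectionDefs.AddConnectionDef;
  oDef.Name := 'Oracle_Pooled';        // illustrative name
  oDef.DriverID := 'Ora';
  oDef.Params.Add('MetaCurSchema=*');  // omit schema names
  oDef.Params.Add('Pooled=True');
  oDef.Apply;
end;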

How to export data from multiple tables in a dynamic fashion in PL/SQL

I have a PL/SQL SP that creates multiple tables on a daily basis. The number of tables is not always the same; however, the generated tables follow a predefined naming pattern.
I'm trying to create a PL/SQL SP that exports those tables to a CSV or Excel file, based on an input list with the names of the generated tables.
Any ideas for achieving this other than PL/SQL, or is there a useful way to achieve it with PL/SQL?
Thanks in advance
I think this question is very abstract and can have many solutions. Here is a solution that you can try if you want to create Excel files from your data.
First you need a tool to create an Excel file. Anton Scheffer has developed a package that you can use to easily create an Excel file from a cursor. Have a look: Create an Excel-file with PL/SQL.
Next, you can determine the created tables and build a query string that you pass as a parameter into the query2sheet procedure of Anton's package. All the tables can be found in the USER_TABLES view.
So your code could look like:
for rec in (select * from user_tables where 1=1 /* add the condition needed to filter the correct tables */)
loop
    -- as_xlsx is the package created by Anton Scheffer
    as_xlsx.query2sheet( 'select * from '||rec.table_name );
    -- MY_DIR is a database directory that you have to create
    as_xlsx.save( 'MY_DIR', rec.table_name||'.xlsx' );
end loop;
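Note that MY_DIR must exist as an Oracle directory object before as_xlsx.save can write to it, e.g. (the path and grantee are illustrative):
CREATE OR REPLACE DIRECTORY MY_DIR AS '/u01/app/exports';
GRANT READ, WRITE ON DIRECTORY MY_DIR TO my_user;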
Edit:
If you want to create CSV files, you can use a package developed by William Robertson: Ref cursor to CSV converter.
The usage is quite similar to the Excel package.

load SQL statements from a file using clojure.java.jdbc

The REST call is sending the branchId and emplId to this exec-sql-file method. I am passing these as parameters. I am not able to execute the SQL statement when I pass branch_id = #branchid and empl_id = #emplid, but when I hardcode branch_id = 'BR101' and empl_id = 123456 it works. Any suggestions on how to get the branch_Id and empl_Id into my some-statements.sql?
(defn exec-sql-file
  [branchid emplid]
  (sql/with-db-connection [conn (db-conn)]
    (sql/db-do-prepared conn
      [(slurp (resource "sql/some-statements.sql")) branchid emplid])))
The file some-statements.sql has this query:
DELETE from customer where branch_id = #branchid and empl_id = #emplid;
I am executing this from the REPL as
(exec-sql-file "BR101" 123456)
I grabbed the code snippet from the post below.
Is it possible to patch load SQL statements from a file using clojure.java.jdbc?
There is no simple way to do this, as your approach requires providing parameters to multiple SQL statements in one run. Another issue is that Java's PreparedStatement (used under the hood by clojure.java.jdbc) doesn't support named parameters, so even if multiple SQL statements could be run as a single prepared statement, a value would have to be supplied for every positional placeholder (?).
I would suggest one of the following solutions:
use multiple prepared statements (so separate clojure.java.jdbc/execute! calls), one per SQL statement, wrapped in a single transaction, with each SQL statement read from a separate file; see the sketch after this list. You could also use a helper library like YeSQL to load your SQL statements from external files and expose them as ordinary Clojure functions. This is simple, but if the number of statements you want to execute changes, you need to change your code
create a stored procedure and call it from Clojure with the parameters; this defines an interface for DB logic that lives on the DB side. Unless you change the interface of your stored procedure, you can modify its implementation without changing your Clojure code or redeploying
implement your own logic for interpolating named parameters into your "multistatement" SQL file. The issue is appropriately escaping the parameters' values so your code is not vulnerable to SQL injection. I would discourage this solution.
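A minimal sketch of the first option (the namespace and file name are illustrative, and (db-conn) is the connection helper from the question), with each .sql file holding exactly one statement that uses positional ? placeholders:
(ns example.db
  (:require [clojure.java.jdbc :as sql]
            [clojure.java.io :as io]))

(defn exec-statements!
  "Runs each statement, with positional parameters, inside one transaction."
  [branchid emplid]
  ;; (db-conn) is the connection spec helper from the question
  (sql/with-db-transaction [tx (db-conn)]
    ;; sql/delete-customer.sql contains:
    ;; DELETE FROM customer WHERE branch_id = ? AND empl_id = ?
    (sql/execute! tx [(slurp (io/resource "sql/delete-customer.sql"))
                      branchid emplid])
    ;; further statements, each read from its own file, go here
    ))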

How to generate and run dynamic SQL in Vertica

I am planning to maintain the logic for a derived field in a lookup table and am thinking of running dynamic SQL statements in real time.
For example, the field company_type is derived based on the following logic:
case when substr(company_code,1,3)='XYZ' then substr(company_code,4,6)
     when substr(company_code,1,3)='ABC' then substr(company_code,7,9)
     else substr(company_code,1,3)
end;
To avoid code changes whenever a new case is provided by the business, I want to maintain the logic in a lookup table like the following:
order  src_field                        src_value
-----  -------------------------------  -----------
1      substr(company_code,1,3)='XYZ'   substr(4,6)
2      substr(company_code,1,3)='ABC'   substr(7,9)
3                                       substr(1,3)
Now, based on the data in the lookup table, I want to be able to generate the CASE statement dynamically and run it. Note that I need to run that dynamic SQL as part of another SQL statement in which I query the source tables that hold the source fields.
This feature doesn't exist yet in Vertica. Hopefully it will come in a future version. The easiest method is to write a script that builds and executes the SQL via vsql or JDBC.
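A minimal sketch of that approach in Python with the vertica-python client (the rules table, source table and connection details are illustrative, and the lookup is assumed to store complete SQL expressions such as substr(company_code,4,6) rather than the shorthand shown above):
import vertica_python

conn_info = {'host': 'localhost', 'port': 5433, 'user': 'dbadmin',
             'password': '...', 'database': 'mydb'}  # placeholders

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # read the rules in order and assemble the CASE expression
    cur.execute('select src_field, src_value from company_type_rules '
                'order by "order"')
    branches = []
    for src_field, src_value in cur.fetchall():
        if src_field:                     # rows with a condition: WHEN ... THEN ...
            branches.append('when %s then %s' % (src_field, src_value))
        else:                             # the final row: the ELSE default
            branches.append('else %s' % src_value)
    case_expr = 'case ' + ' '.join(branches) + ' end'

    # splice the generated expression into the query over the source table
    cur.execute('select company_code, %s as company_type from companies'
                % case_expr)
    for row in cur.fetchall():
        print(row)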

Informix "SERIAL" to Oracle NUMBER/Sequence/Trigger in Pro*C

I'm trying to convert some Informix ESQL to Oracle Pro*C. In the existing Informix code the "SERIAL" data type was used to indicate automatically incrementing columns. According to the Oracle documentation, the Oracle Migration Workbench for Informix should be able to handle this, and it explains that it converts the "SERIAL" data type into a "NUMBER" with an associated Oracle sequence and trigger. However, when trying to run the tool it simply replaces the word "SERIAL" with "ERROR(SERIAL)", so I've been trying to manually add in the trigger/sequence.
Their example here: http://docs.oracle.com/html/B16022_01/ch2.htm#sthref112 shows a way that this can be done. The sequence appears to be fairly straight forward, however when trying to create a trigger like so:
CREATE TRIGGER clerk.TR_SEQ_11_1
BEFORE INSERT ON clerk.JOBS FOR EACH ROW
BEGIN
    SELECT clerk.SEQ_11_1.nextval INTO :new.JOB_ID FROM dual;
END;
The Pro*C preprocessor picks up the "CREATE" keyword here, and decides that I'm not allowed to use the host variable ":new.JOB_ID", because host variables cannot be used in conjunction with "CREATE" statements.
My question is: is there some way to create a trigger that links an Oracle sequence to a particular column without using a host variable to specify the column name? The Oracle documentation seems to indicate that their migration tool can cope, which means there must be some way of doing this. However, all the examples of trigger use that I have found rely on the host-variable syntax, which causes the preprocessor to complain.
Thank you for your time.
(Note: I've used the trigger/sequence/column names from the example in the Oracle documentation in the example above.)
I managed to resolve the issue by using an "EXEC SQL EXECUTE IMMEDIATE" statement.
char sql_buf[4096+1];
/* build the DDL text at runtime; <sql> is the statement shown above */
snprintf(sql_buf, 4096, <sql>);
EXEC SQL EXECUTE IMMEDIATE :sql_buf;
This bypasses the preprocessor and therefore allows the statement through without complaint.
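Put together with the trigger from the question, the workaround might look like the following sketch (the trigger text is the Oracle example quoted above; the surrounding function is illustrative):
EXEC SQL INCLUDE sqlca;
#include <string.h>

/* The DDL lives in an ordinary C string, so the precompiler never
   parses the :new.JOB_ID reference inside it. */
static const char *create_trigger_sql =
    "CREATE TRIGGER clerk.TR_SEQ_11_1 "
    "BEFORE INSERT ON clerk.JOBS FOR EACH ROW "
    "BEGIN "
    "  SELECT clerk.SEQ_11_1.nextval INTO :new.JOB_ID FROM dual; "
    "END;";

void create_serial_trigger(void)
{
    EXEC SQL BEGIN DECLARE SECTION;
    char sql_buf[4096 + 1];
    EXEC SQL END DECLARE SECTION;

    strncpy(sql_buf, create_trigger_sql, sizeof(sql_buf) - 1);
    sql_buf[sizeof(sql_buf) - 1] = '\0';

    /* sent to the server as-is, bypassing the precompiler's checks */
    EXEC SQL EXECUTE IMMEDIATE :sql_buf;
}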
It is impossible to create a trigger that links an Oracle sequence to a particular column without referencing the column this way. By the way, it isn't really a "host variable", just a correlation reference: the same trigger may fire on update and insert, for example, so you have to specify whether you are referencing the :new or :old values. You can do it in MS SQL Server, but not in Oracle.
