Handling Delete followed by Insert in Informatica Designer - informatica-powercenter

I am working on Informatica PowerCenter Designer 8.1.1
I have a source table with three columns, which are
PORT_ID
ISSUE_ID
MKT_VAL
I need to sync the data from my source table to a target table on a different database, which contains the same three columns.
There's a 1:n relationship between PORT_ID and ISSUE_ID
While doing this data sync, I have to do a DELETE followed by an INSERT, the reason being that the number of ISSUE_IDs mapped to a PORT_ID can change. Let's say that initially the data was like this in both Source and Target:
PORT_ID ISSUE_ID
1 A
1 B
1 C
The data in source gets changed to:
PORT_ID ISSUE_ID
1 A
1 B
1 D
Due to this, during my sync, I have to first delete all rows mapped to PORT_ID = 1 and then Insert the incoming records.
I am not able to figure out how I can get this done in mapping designer. Can someone give me some inputs?

The most common way this is done is using a pre-SQL statement (a SQL command run before the session). If (port_id, issue_id) is unique within the table, you could use:
delete from tgt_table
where (port_id, issue_id) not in (select port_id, issue_id
                                  from src_table);
commit;
Second Way:
If these two columns can be added as a key in your mapping, then you can check all three of the Insert, Update, and Delete options under "treat target rows as" to make sure the target data ends up the same as the source data. In most cases, however, business rules are more complex than this, so this feature is rarely used.
Another common implementation is to mark rows for delete based on a lookup on the target table:
Source -> Lookup (target_table) ->
Expression (flag whether the value exists) ->
mark for delete ->
delete using an Update Strategy transformation

Write a simple stored procedure which does the following:
1) Delete statement (given by Rajesh):
delete from tgt_table
where (port_id, issue_id) not in (select port_id, issue_id
                                  from src_table);
commit;
2) Insert statement:
insert into tgt_table
select port_id, issue_id, mkt_val
from src_table
where (port_id, issue_id) not in (select port_id, issue_id
                                  from tgt_table);
commit;
3) Use a dummy source and target in the mapping, and call the stored procedure through a Stored Procedure transformation.
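The delete-then-insert semantics of the two statements above can be sketched end to end. Here is a minimal illustration in Python with sqlite3, purely to show the effect on the data; the table and column names are taken from the question, and in practice the statements would of course live in the database procedure:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table src_table (port_id int, issue_id text, mkt_val real)")
cur.execute("create table tgt_table (port_id int, issue_id text, mkt_val real)")

# Target still holds the old mapping for PORT_ID = 1: A, B, C
cur.executemany("insert into tgt_table values (?,?,?)",
                [(1, "A", 10.0), (1, "B", 20.0), (1, "C", 30.0)])
# Source now maps PORT_ID = 1 to A, B, D
cur.executemany("insert into src_table values (?,?,?)",
                [(1, "A", 10.0), (1, "B", 20.0), (1, "D", 40.0)])

# 1) Delete target rows whose (port_id, issue_id) no longer exist in source
cur.execute("""delete from tgt_table
               where (port_id, issue_id) not in
                     (select port_id, issue_id from src_table)""")
# 2) Insert source rows that are missing from the target
cur.execute("""insert into tgt_table
               select * from src_table
               where (port_id, issue_id) not in
                     (select port_id, issue_id from tgt_table)""")
con.commit()

print(sorted(cur.execute("select port_id, issue_id from tgt_table")))
```

After both steps, the target holds exactly the rows (1, A), (1, B), (1, D), matching the changed source.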

You can create a dynamic lookup on the target table.
To configure the dynamic lookup:
-> go to the lookup properties,
-> check the Dynamic Lookup Cache box,
-> and then check the Insert Else Update box.
As soon as you do that, a new port, NewLookupRow, will appear in the Ports tab.
You can use this port to check whether a record is an insert or an update, with the following values:
0 is no change
1 is insert
2 is update
Now you can update the target accordingly.
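To make the NewLookupRow values concrete, here is a rough Python analogue of an insert-else-update dynamic lookup cache. This is purely illustrative; in PowerCenter the cache is managed by the Integration Service:

```python
# Simulated dynamic lookup cache: key -> row, mimicking "insert else update".
# Return value mimics NewLookupRow: 0 = no change, 1 = insert, 2 = update.
def lookup_row(cache, key, row):
    if key not in cache:
        cache[key] = row
        return 1            # insert: key was not in the cache
    if cache[key] != row:
        cache[key] = row
        return 2            # update: key found but row data changed
    return 0                # no change

cache = {(1, "A"): 10.0}
flags = [lookup_row(cache, k, v) for k, v in [((1, "A"), 10.0),   # unchanged
                                              ((1, "A"), 15.0),   # changed
                                              ((1, "D"), 40.0)]]  # new
print(flags)  # [0, 2, 1]
```

Downstream, an Update Strategy would route rows flagged 1 to DD_INSERT and rows flagged 2 to DD_UPDATE.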
Hope this helps..
Cheers.

I don't think we need a dynamic lookup here, as the source doesn't have duplicates.
Why don't you do a regular lookup and update the records using an Update Strategy instead of delete and insert?

Related

How can I merge two tables using ROWID in oracle?

I know that ROWID is distinct for each row, even across different tables. But I saw somewhere that two tables were being merged using rowid, so I tried it myself, and I am getting empty output.
I have a person table, in which scrowid is the column that holds the rowid:
alter table ot.person
add scrowid VARCHAR2(200) PRIMARY KEY;
I populated this person table as:
insert into ot.person(id, name, age, scrowid)
select id, name, age, a.rowid from ot.per a;
After this I also created another table, ot.temp_person, by the same steps. Both tables have the same structure and datatypes. So I wanted to compare them using an inner join, which I tried as:
select * from ot.person p inner join ot.temp_person tp ON p.scrowid=tp.scrowid
I got an empty result.
Is there any way I can merge two tables using rowid, or have I forgotten some steps? If there is any way to join these two tables using rowid, please suggest it.
Define scrowid with datatype ROWID or UROWID and then it may work.
However, in general a ROWID may change at any time unless you lock the record, so it would be a poor key for joining your tables.
I think perhaps you misunderstood the merging of two tables via rowid, unless what you actually saw was a UNION, cross join, or full outer join. Any attempt to match rowids, regardless of how you define them, is doomed to fail. This is because a rowid is an internal definition: ROWID is not just a data type, it is an internal structure (that is an older version of the description, but Oracle doesn't link documentation versions). Those fields are basically:
- The data object number of the object
- The data block in the datafile in which the row resides
- The position of the row in the data block (first row is 0)
- The datafile in which the row resides (first file is 1); the file number is relative to the tablespace
So while it's possible for rows in different tables to have the same rowid, it is extremely unlikely, which makes an inner join on rowid return no rows.

Oracle SQL / PLSQL : I need to copy data from one database to another

I have two instances of the same database, but data is only committed to the "original" one. I need to copy inserted data from certain tables and commit them to the same tables in the second DB automatically. How can I do it?
I've already created synonyms in the original DB for the tables in the second DB, and within a specially prepared trigger I tried to use an INSERT INTO ... statement with :new values, but the data ends up not being committed anywhere and I receive Oracle errors like:
ORA-02291: integrity constraint (PRDBSHADOW.FK_ED_PHY_ENT) violated.
Here is my trigger code
create or replace TRIGGER INS_COPY_DATA
AFTER INSERT ON ORIGDB.TABLE_A
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
BEGIN
  insert into COPY_TABLE_A (val1, val2, val3, val4)
  values (:new.val1, :new.val2, :new.val3, :new.val4);
END;
I think the entry in the parent table is missing here; at least the FK prefix in the constraint name suggests so.
It means you need to insert the data into the "parent" table first in order to be able to insert records into the "child".
For example, say the table auto_maker has only 3 rows: Audi, Peugeot, and Honda.
Another table named "model" has 2 columns, "maker" and "model"; "maker" is a foreign key referencing the "auto_maker" table.
That means the "model" table only allows records whose "maker" column value exists in the "auto_maker" table.
In other words only these are available:
maker model
Audi A4
Peugeot 308
Honda Accord
Of course you can enter any model you wish, but the "maker" value has to exist in the auto_maker table.
This is probably what is happening: the trigger tries to insert data into a column referencing a "parent" table, and the :new value just doesn't exist there yet.
The following script will let you know what table you need to fill first.
select aic.index_owner, aic.table_name, aic.column_name
from all_constraints uc,
     all_ind_columns aic
where aic.index_name = uc.r_constraint_name
  and uc.table_name = 'TABLE_A'
  and uc.constraint_type = 'R';
If the query returns something, just create similar triggers on those tables with the same logic you already have.
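The "parents before children" rule generalizes: given the parent/child pairs the query above discovers, you can derive a safe load order with a topological sort. A small sketch in Python (the table names here are hypothetical, not from the question):

```python
from graphlib import TopologicalSorter

# parent -> children, as discovered from the FK query (hypothetical names)
fk = {"PHY_ENT": ["TABLE_A"], "TABLE_A": ["COPY_TABLE_A"]}

# TopologicalSorter expects node -> predecessors, so invert: child -> parents
deps = {}
for parent, children in fk.items():
    deps.setdefault(parent, set())
    for child in children:
        deps.setdefault(child, set()).add(parent)

load_order = list(TopologicalSorter(deps).static_order())
print(load_order)  # parents always precede their children
```

Loading (or firing triggers) in this order guarantees each parent row exists before any child row that references it.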

Oracle 12c - refreshing the data in my tables based on the data from warehouse tables

I need to update some tables in my application from warehouse tables which are updated weekly or biweekly, and I should update mine based on those. My tables are referenced by foreign keys from other tables, so I cannot just truncate them and re-insert the whole data every time. I have to take the delta and update accordingly, based on a few primary key columns which don't change. I need some inputs on how to implement this approach.
My approach:
Check the last updated time of those tables/views.
If it is more recent, compare each row between my table and the warehouse table based on the primary key.
Update each column if it is different.
Do nothing if there is no change in the columns.
Insert if there is a new record.
My Question:
How do I implement this? Is writing PL/SQL code a good and efficient way, given that the expected number of records is around 800K?
Please provide any sample code or links.
I would go for PL/SQL with the BULK COLLECT / FORALL approach. You can use MINUS in your cursor in order to reduce the data size when calculating the difference.
You can check this article for more information about BULK COLLECT, FORALL, and the SQL and PL/SQL engines: http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52plsql-1709862.html
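The MINUS idea is just a set difference over full rows: only rows that differ between the warehouse and the application table need to be fetched and applied. A quick sketch in Python (the column layout is illustrative, not from the question):

```python
# Each row is (pk, col1, col2); the pk column never changes.
warehouse = {(1, "x", 10), (2, "y", 25), (3, "z", 30)}
app_table = {(1, "x", 10), (2, "y", 20)}

# Equivalent of: select * from warehouse MINUS select * from app_table
delta = warehouse - app_table

# Split the delta into updates (pk already present) vs inserts (new pk)
existing_pks = {row[0] for row in app_table}
updates = {row for row in delta if row[0] in existing_pks}
inserts = delta - updates
print(sorted(updates), sorted(inserts))  # [(2, 'y', 25)] [(3, 'z', 30)]
```

In PL/SQL you would drive a BULK COLLECT cursor over the MINUS result and apply the updates and inserts with FORALL, so only the 800K-row delta, not the whole table, is touched.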
There are many parts to your question and I will answer as best I can:
While it is possible to disable the referencing foreign keys, truncate the table, repopulate it with the updated data, and then re-enable the foreign keys, given the requirements described above I don't believe truncating the table each time is optimal.
Yes, in principle PL/SQL is a good way to achieve what you want, as this is too complex to deal with in native SQL and PL/SQL is an efficient alternative.
Conceptually, the approach I would take is something like as follows:
Initial set up:
create a sequence called activity_seq
add an "activity_id" column of type NUMBER, with a unique constraint, to your source tables
add a trigger to the source table(s) setting activity_id = activity_seq.nextval for each insert/update of a table row
create some kind of master table to hold the "last processed activity id" value
Then bi/weekly:
retrieve the value of "last processed activity id" from the master table
select all rows in the source table(s) having an activity_id value greater than the "last processed activity id" value
iterate through the selected source rows and update the target if a match is found based on whatever your match criterion is, or insert a new row into the target if no match is found (I assume there is no delete, as you do not mention it)
on completion, update the master table's "last processed activity id" to the greatest activity_id of the source rows processed in the previous step
(Please note that, depending on your environment and the number of rows processed, the above process may need to be split and repeated over a number of transactions.)
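The bi/weekly cycle can be sketched as follows, with in-memory stand-ins for the tables (Python; the activity_id and watermark names follow the steps above, everything else is illustrative):

```python
# Source rows tagged with activity_id by the trigger; target keyed by pk.
source = [
    {"activity_id": 5, "pk": 1, "val": "a"},
    {"activity_id": 8, "pk": 2, "val": "b"},
    {"activity_id": 9, "pk": 1, "val": "a2"},  # later change to pk 1
]
target = {1: "old"}
last_processed = 5  # the watermark read from the master table

# Pick up only rows changed since the last run, oldest first
batch = sorted((r for r in source if r["activity_id"] > last_processed),
               key=lambda r: r["activity_id"])
for row in batch:
    target[row["pk"]] = row["val"]  # update if matched, insert if not

# Advance the watermark in the master table
if batch:
    last_processed = max(r["activity_id"] for r in batch)
print(target, last_processed)  # {1: 'a2', 2: 'b'} 9
```

Because the watermark is only advanced after the batch succeeds, a failed run simply reprocesses the same rows next time.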
I hope this proves helpful

How to update data in a non primary key table

I have one table, TableA, which is both the source and the target. The table doesn't have any primary key. I am fetching data from TableA, doing some calculations on some fields, and updating them in the same TableA. Now, how can I update the data when there is no primary key or composite key? Second question: if joining two columns makes a record unique, how can I use that in Informatica? Please help.
You can define the update statement in the target; there is a property for that.
You still have to make Informatica perform an update, not an insert. To do that you need to use an Update Strategy transformation.
I think in this solution you don't need a PK on that table, because you will use your own update statement, but please verify this.
To set the fields and build the proper WHERE condition for the update, you need to use the :TU alias in the code; TU refers to the Update Strategy before the target.
Example:
update t_table set field1 = :TU.f1 where key_field = :TU.f5
If you don't want (or can't) create a primary key on your table in the database, you can just define one in the Informatica source.
If a record is unique as a combination of two columns, just mark both of them as primary key in the Informatica source.

Bulk Copy from one server to another

I have a situation where I need to copy part of the data from one server to another. The table schemas are exactly the same. I need to move partial data from the source, which may or may not already exist in the destination table. The solution I'm considering: use bcp to export the data to a text (or .dat) file, take that file to the destination, as the two servers are not accessible at the same time (different networks), and then import the data at the destination. There are some conditions I need to satisfy:
I need to export only a subset of the data from the table, not the whole table. My client is going to give me the IDs which need to be moved from source to destination. I have around 3000 records in the master table, and the same in the child tables too; I expect only about 300 records to be moved.
If a record already exists in the destination, the client will instruct, case by case, whether to ignore or overwrite it. 90% of the time we need to ignore the records without overwriting, but log them in a log file.
Please help me with the best approach. I thought of using BCP with the query option to filter the data, but while importing, how do I bypass inserting the existing records? And how do I overwrite when that is needed?
Unfortunately, BCPing into a table is an all-or-nothing deal; you can't select which rows to bring in.
What I'd do is . . .
Create a table on the source database to store the IDs of the rows you need to move. You can now BCP out the rows that you need.
On the destination database, create a new Work In Progress table, and BCP the rows into there.
Once they are in there, you can write a script that decides whether or not a WIP row goes into the destination table.
Hope this helps.
Update
By Work In Progress (WIP) tables I don't mean #temp tables; you can't BCP into a temp table (at least I'd be very surprised if you could).
I mean a table you'd create with the same structure as the destination table: BCP into that, script the WIP rows into the destination table, then drop the WIP table.
You haven't said what RDBMS you're using, assuming SQL Server, something like the following (untried code) . . .
-- following creates a new table with an identical schema to the destination table
select * into WIP_Destination from Destination
where 1 = 0

-- BCP in the rows
BULK INSERT WIP_Destination from 'BcpFileName.dat'

-- Insert new rows into Destination
insert into Destination
select * from WIP_Destination
where id not in (select id from Destination)

-- Update existing rows in destination
update d
set field1 = w.field1,
    field2 = w.field2,
    field3 = w.field3,
    . . .
from Destination d inner join WIP_Destination w on d.id = w.id

drop table WIP_Destination
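Given the asker's requirement (ignore existing rows but log them), the WIP step might look like this in miniature, sketched here in Python with sqlite3 standing in for the real tables (table names follow the answer above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table Destination (id int primary key, val text)")
cur.execute("create table WIP_Destination (id int, val text)")
cur.execute("insert into Destination values (1, 'kept')")
cur.executemany("insert into WIP_Destination values (?, ?)",
                [(1, "incoming-dupe"), (2, "new")])

# Log rows that already exist in Destination instead of overwriting them
skipped = cur.execute("""select w.id from WIP_Destination w
                         where w.id in (select id from Destination)""").fetchall()

# Insert only genuinely new rows
cur.execute("""insert into Destination
               select * from WIP_Destination
               where id not in (select id from Destination)""")
cur.execute("drop table WIP_Destination")
con.commit()

print(skipped)  # ids to write to the log file
print(cur.execute("select * from Destination order by id").fetchall())
```

The existing row keeps its original value, the new row is inserted, and the skipped ids are available for the log file, matching the "ignore but log" rule.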
Update 2
OK, so you can insert into temporary tables; I just tried it (I didn't have time the other day, sorry).
On the problem of the master/detail records (we're now moving off the subject of the original question; if I were you I'd open a new question for this topic, you'll get more answers than just mine):
You can write an SP that steps through the new rows to add.
As you loop through the rows in your temp table (these rows carry the original id from the source database), insert each row into the Destination table and use SCOPE_IDENTITY() to get the id of the newly inserted row. Now that you have the old id and the new id, you can build an insert statement for the detail rows like:
insert into Destination_Detail
select @newId, field1, field2, . . . from #temp_Destination_Detail
where Id = @oldId
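The old-id-to-new-id bookkeeping the SP performs can be shown in miniature (Python; SCOPE_IDENTITY() is mimicked by the growing list index, and all names are illustrative):

```python
# Master rows from the source, keyed by their original ids
old_masters = {10: "master-A", 20: "master-B"}
# Detail rows still pointing at the old master ids
old_details = [(10, "d1"), (10, "d2"), (20, "d3")]

destination = []   # new master table; len() plays the role of SCOPE_IDENTITY()
id_map = {}        # old id -> new id
for old_id, row in old_masters.items():
    destination.append(row)
    id_map[old_id] = len(destination)  # the "newly generated" identity

# Re-point each detail row at the new master id before inserting it
destination_detail = [(id_map[old_id], payload) for old_id, payload in old_details]
print(destination_detail)  # [(1, 'd1'), (1, 'd2'), (2, 'd3')]
```

The key point is the same as in the SP: capture the new identity immediately after each master insert, so the detail rows can be rewritten before they are inserted.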
Hope this helps [if it has helped you are allowed to upvote this answer, even if it's not the answer you're going to select :)]
Thanks
BW
