I want to get the latest inserted record from the target table and update a field in the source table.
Source:
Name company targetID
Jane YYYY Null
Basically, I am looping over files in SSIS and inserting into the target table via a web service (script task). One record is created per file. I want to capture the latest ID created in the target and update the source field.
Target:
ID name company
1 Jane YYYY
After the row is created in the target for a file, I want my source to be:
Source:
Name company targetID
Jane YYYY 1
If the row is not created, then do not update the source.
I looked at scope_identity() and ident_current(), but I am not sure how to use them in my situation, as I am inserting into the target tables via a web service.
Also, my target table can have more than one identity column. How do I use IDENT_CURRENT on this table?
Any help is appreciated.
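SCOPE_IDENTITY() only works in the session and scope that performed the insert, and IDENT_CURRENT() is unreliable under concurrent inserts, so neither helps much when the insert happens behind a web service. If the service does not return the new ID, one option is to look the row up by its natural key afterwards. A minimal T-SQL sketch, assuming the tables are called SourceTable and TargetTable (placeholder names) and that Name + company uniquely identify a row:
-- Copy the generated ID back to the source by joining on the
-- natural key. Rows whose insert failed have no match in
-- TargetTable, so their targetID stays NULL, as required.
UPDATE s
SET s.targetID = t.ID
FROM SourceTable s
JOIN TargetTable t
  ON t.name = s.Name
 AND t.company = s.company
WHERE s.targetID IS NULL;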
I'm using Talend Open Studio for Data Integration.
I have tables that are generated every day and table names are suffixed by date like so
dailystats20220127
dailystats20220126
dailystats20220125
dailystats20220124
I have a two-part question.
I want to look at the table whose name contains yesterday's date (sysdate - 1), and I want to fetch yesterday's data:
select 'dailystats' ||to_char(sysdate - 1,'YYYYMMDD') TableName
from dual;
How do I retrieve the schema for a dynamic table name?
How do I pull data from that table?
I've worked with static table names, and that is a straightforward process.
If the schema is always the same, you just define it once in your input component.
In your input component, set the SQL as:
"select [fields] from dailystats" + TalendDate.formatDate("yyyyMMdd", TalendDate.addDate(TalendDate.getCurrentDate(), -1, "dd"))
I'm using Informatica PowerCenter 9.1.0, and to put it simply, I have two identical tables as source (table A) and target (table B). The columns are ID and EMAIL.
I need to make a workflow where, the very first time it runs, all the records are copied from table A to B.
Then every day I need to update in the target table B the rows modified in A (the email can change). If a record is deleted in the source table, I still want to see it in the target table.
I used these values:
Treat source rows as: "Insert"
Then in the Mapping tab I checked the attributes "Insert" and "Update as Update".
The first time, I get all the records in the target table, but if some emails change after a few days I see no update; I still see the email inserted the first time.
I changed the value of Treat source rows as to "Update", but on the first run (table B is empty) it copies no rows.
Is it possible to have a workflow that inserts all the rows on the first run and then updates the records on subsequent runs, without changing the Treat source rows as value?
Select the "Update else insert" option in the Mapping tab, and keep "Treat source rows as" set to Update.
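Conceptually, with "Update else insert" the Integration Service first tries an UPDATE by key and falls back to an INSERT when no row is affected, roughly like this (a sketch using the question's columns, not the exact SQL PowerCenter issues):
-- Try to update the existing row first.
UPDATE table_b SET email = :email WHERE id = :id;
-- If no row was affected, insert it instead.
INSERT INTO table_b (id, email) VALUES (:id, :email);
So on the first run every row falls through to the INSERT, and on later runs existing rows are updated.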
I have a question about an auto-increment ID for my 'dimension' and IKM: Incremental Update.
I have a source table with only one column: SUPPLIER_NAME. It has 23 rows with supplier names.
I have a target table with two columns: SUPPLIER_ID, SUPPLIER_NAME.
Next, I want SUPPLIER_ID to be an auto-increment ID for every new row, and I want to use IKM: Incremental Update, so that when someone adds a new supplier I only update the table (add the new row) and give that supplier a new ID (the next auto-increment value).
How can I do this?
I created a sequence on the DB like:
Create sequence autoinc start with 1
increment by 1
minvalue 1
maxvalue 1000000;
In ODI I created a sequence:
AutoIncrementDim --> Increment: 1,
Native sequence --> native sequence name: autoinc
Next I created the ODI mapping:
The source table (with one column) is mapped to the target table (with ID and NAME).
supplier_name is mapped to supplier_name.
For ID I use: #NFS_HD.AutoIncrementDim_NEXTVAL
In the logical part I set the integration type to Incremental Update.
In the physical part I set the IKM to IKM Oracle Merge.
On my first run everything is OK: I get auto-increment IDs from 1 to 23, one for every supplier.
But when there are new rows with new supplier names in the source table and I run my mapping, I get something like this:
(results screenshot)
The new row (with the new supplier) has ID 47 ... I think that's because the sequence was run for every row.
What must I change to correct this, or what is a better way to do it?
On the logical mapping, click on your SUPPLIER_ID target attribute. In the property pane in the Target tab, unselect the Update checkbox. It means that this attribute will not be used in the update query.
Also make sure that the SUPPLIER_NAME attribute is set as a key, so the IKM uses it to know whether it should do an insert or an update.
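With SUPPLIER_ID excluded from the update, the merge generated by the IKM should look roughly like this (a simplified sketch with assumed table names, not the exact statement ODI produces):
-- NEXTVAL now appears only in the insert branch, so existing
-- suppliers keep the IDs they were given on the first run.
MERGE INTO target_table t
USING (SELECT supplier_name FROM source_table) s
ON (t.supplier_name = s.supplier_name)
WHEN NOT MATCHED THEN
  INSERT (supplier_id, supplier_name)
  VALUES (autoinc.NEXTVAL, s.supplier_name);
Note that Oracle may still draw sequence values for matched rows, so gaps in SUPPLIER_ID remain possible; gaps are harmless in a surrogate key.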
There is a search page in Oracle ADF with a header and a line section. In the header section I search for the customer ID (I can also search by any column, like customer name, organization ID, etc.), and the line section then shows all the customers related to that customer ID. In the line section I made the customer ID a hyperlink; when the user clicks that link, a popup shows all the details related to that customer (around 700 columns). The column names are like Column1, Column2, Column3, ..., Column700, and the actual names of Column1 through Column700 are stored in another table. How can I replace Column1, Column2, ..., Column700 with their actual names (which are stored in a different table)?
One thing you might try is to query the other table (see: How can I get column names from a table in Oracle?) to get the column names, and then use the VO API to change the hint for the column (see: ADF how to convert column name to attribute name).
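A sketch of that first step, assuming the display names live in a lookup table called COLUMN_LABELS with columns COLUMN_NAME and DISPLAY_NAME (all names here are placeholders):
-- Pair each physical column of the details table with its display
-- name from the lookup table, in column order.
SELECT c.column_name, l.display_name
FROM user_tab_columns c
JOIN column_labels l
  ON l.column_name = c.column_name
WHERE c.table_name = 'CUSTOMER_DETAILS'
ORDER BY c.column_id;
You can then loop over the result and set each column's label hint through the VO API, as in the linked answer.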
Use two VOs in the page:
VO1 fetches the column names, VO2 fetches the column values.
Create a form based on VO2, then go to each column and update its label attribute to point to the right value from VO1.
Oracle PL/SQL question: one table should be archived day by day. The table holds about 50,000 records, but only a few records change during a day. A second table (the destination/history table) has one additional field: import_date. Two days currently mean 100,000 records; it should be 50,000 + a few records with information about the changes made during the day.
I need a simple solution that copies data from the source table to the destination like a "log": only changes are copied/registered, but I should still be able to check the source table's data set as of a given day.
Is there a mechanism like MERGE or something similar?
Normally you'd have a day_table and a master_table. All records are loaded from the day_table into the master, and only the master is manipulated, with the day table used to store the raw data.
You could add a new column to the master, such as date_modified, and have the app update this field when a record changes, or use a flag to indicate that it changed.
Another way to do this is to have an active/latest flag. Instead of changing the record, it is duplicated, with the flag set to indicate which is the newer and which the older record. This might make comparisons easier,
e.g. select * from master_table where record = 'abcd'
would show two rows: the original loaded at 1 pm and the modified, active one changed at 2 pm.
There's no need for another table; you could then base a view on this flag,
e.g. create view changed_records_view as select * from master_table where flag = 'Y';
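As for the MERGE the question asks about: MERGE is really an upsert, so for an append-only history a plain INSERT ... SELECT that skips unchanged rows may be simpler. A sketch, assuming the history table mirrors the source plus import_date, id is the key, and col1/col2 stand in for the tracked columns:
-- Append only the rows that are new or differ from their latest
-- archived version.
INSERT INTO history_table (id, col1, col2, import_date)
SELECT s.id, s.col1, s.col2, TRUNC(SYSDATE)
FROM source_table s
WHERE NOT EXISTS (
  SELECT 1
  FROM history_table h
  WHERE h.id = s.id
    AND h.import_date = (SELECT MAX(h2.import_date)
                         FROM history_table h2
                         WHERE h2.id = s.id)
    AND h.col1 = s.col1  -- repeat for every tracked column;
    AND h.col2 = s.col2  -- columns that allow NULLs need extra care
);
The data set as of a given day is then, for each id, the version with the greatest import_date less than or equal to that day.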
I once faced a similar issue; please find the solution below.
Tables we had:
A master table that always has records in it and keeps growing.
One backup table to store all the master records on a daily basis.
Solution:
From morning to evening, records are inserted into and updated in the master table. New records were identified by a timestamp: whenever a record is inserted or updated, the corresponding timestamp is set and kept.
At night, a scheduled job (CREATE_JOB -> please check the Oracle documentation for further details) runs at exactly 10:00 pm to bulk collect all the records in the master table carrying today's date and insert them into the backup table.
The scenario I have explained should help; please also check out the concept of job scheduling. Thank you.
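A sketch of that nightly job using DBMS_SCHEDULER and a plain INSERT ... SELECT in place of BULK COLLECT; backup_table, master_table, and last_modified are assumed names, and the backup table is assumed to have the master's columns plus import_date last:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_BACKUP',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN
                          INSERT INTO backup_table
                          SELECT m.*, SYSDATE
                          FROM master_table m
                          WHERE m.last_modified >= TRUNC(SYSDATE);
                          COMMIT;
                        END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=22',  -- 10:00 pm daily
    enabled         => TRUE);
END;
/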