Loading the target tables - informatica-powercenter

Suppose I have a source with seven records. The first three records must go to three target instances, and the fourth record has to go into the first target again. How can I achieve this?

Here is one way to achieve this result.
I use a Sequence Generator transformation to generate a series of numbers (starting at 1, incrementing by 1).
I then route the source rows into one of the three targets based on this sequence number, using MOD(NEXTVAL,3), which evaluates to 0, 1, or 2. Here are the three groups for the Router transformation:
Group 1 : MOD(NEXTVAL,3)=0
Group 2 : MOD(NEXTVAL,3)=1
Group 3 : MOD(NEXTVAL,3)=2
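To illustrate the routing logic outside of PowerCenter, here is a minimal Python sketch of the same round-robin idea; the record values and the target placeholders are invented for the example, and NEXTVAL is simulated with a simple counter.

# Sketch of the Router logic described above (plain Python, not PowerCenter code).
records = ["r1", "r2", "r3", "r4", "r5", "r6", "r7"]   # hypothetical source rows
targets = {0: [], 1: [], 2: []}                        # stand-ins for the three target instances

for nextval, record in enumerate(records, start=1):    # NEXTVAL: 1, 2, 3, ...
    group = nextval % 3                                 # same as MOD(NEXTVAL, 3): 1, 2, 0, 1, ...
    targets[group].append(record)

print(targets)   # record 1 and record 4 land in the same group (MOD = 1), as the question asks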
Also, could you please explain why you need the table to be loaded into multiple instances?
I have never really come across such a scenario before.

Related

Using PowerQuery, is there a way to view a sheet of different-sized groups of data as a single table?

I have a sheet of non-table groups and would like to view them as a single table.
Each group consists of 4 or 5 rows and 2+ columns with 1 or more blank rows/columns between.
Overall, the groups are organized into rows and columns on the sheet. There shouldn't be more than 3 groups in a row, but some rows may have blank spaces for future groups.
New groups and group columns are added regularly so existing groups can be relocated on the sheet anytime.
The group names are a unique combination of letters and a number. Unfortunately, they are not prefixed "Group" or numbered consecutively like in my example. Giving a generic example that conveys all criteria is harder than it looks 😅.
This is a shared document so I'd like to avoid making structural changes that would affect other users.
I've experimented with some of the transform options, but I'm new to PQ and didn't make much progress.
This answer to a similar question is a step in the right direction, but it looks like I'll need additional steps since my starting data isn't quite as consistent.
Thank you for your time.
Before-example and after-example screenshots are attached below.

How to get the sum of values of a column in tmap?

I have two columns: Matches (Integer) and Account_type (String). I want to create a third column containing the proportion of matches played by each account type. I am new to Talend and have been stuck on this for the past two days; I did a lot of research, but to no avail. Please help.
You can do it like this:
You need to read your source data twice (I used tFixedFlowInput_1 and tFixedFlowInput_2 with the same data). The idea is to calculate the total of your matches in tAggregateRow_1 (it simply sums all Matches without a group-by column) and then use that total as a lookup.
The tMap then joins your source data with the calculated total. Since the total will always be one record, you don't need any join column. You then simply divide Matches by Total as required.
This assumes you have unique values in Account_type; if you don't, you need to add another tAggregateRow between your source and tMap_1 in order to get the sum of Matches for each Account_type (group by Account_type).
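If it helps to see the arithmetic outside of Talend, here is a small Python sketch of the same idea (aggregate the total first, then divide each row by it); the sample rows and the Proportion column name are invented for the example.

# Plain-Python sketch of the tAggregateRow + tMap logic described above.
rows = [
    {"Account_type": "pro",     "Matches": 30},
    {"Account_type": "amateur", "Matches": 50},
    {"Account_type": "junior",  "Matches": 20},
]

# tAggregateRow_1: sum of all Matches, no group-by column
total = sum(r["Matches"] for r in rows)

# tMap: every row is joined with the single total record and divided by it
for r in rows:
    r["Proportion"] = r["Matches"] / total

print(rows)   # pro: 0.3, amateur: 0.5, junior: 0.2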

Laravel schema column

I'm trying to create a numeric column in my table, but I'm not sure which option would be the best fit for my needs.
Logic
Numbers start from four figures up, e.g. 1000.
There is no upper limit; the value can grow as large as 999999999999999999999999999999999999999999.
Numbers increase incrementally, e.g. if the last number was 1000, the next will be 1001 (not random numbers).
Question
Laravel provides several options that could do the job, but I need help deciding which is best for my purpose:
bigIncrements
bigInteger
unsignedBigInteger
Also, there is the ->autoIncrement() option; should I add that too?

Generate pairs from list that hasn't already historically existed

I'm building a pairing system that is supposed to create a pairing between two users and schedule them for a meeting. The selection is based on a criterion that I am having a hard time figuring out: an earlier match cannot have existed between the pair.
My input is a list of size n that contains email addresses. This list is supposed to be split into pairs. The restriction is that a given pair must not have occurred previously.
So, for example, my list would contain a couple of user IDs:
list = {1,5,6,634,533,515,61,53}
At the same time, I have database tables where the old pairs exist:
previous_pairs
---------------------
id  date                     status
1   2016-10-14 12:52:24.214  1
2   2016-10-15 12:52:24.214  2
3   2016-10-16 12:52:24.214  0
4   2016-10-17 12:52:24.214  2

previous_pair_users
---------------------
id  userid
1   1
1   5
2   634
2   553
3   515
3   61
4   53
4   1
What would be a good approach to solve this problem? My test solution right now is to pop two random users and check them for a previous match. If there is no valid match, I pop a new random user (if possible) and push one of the incorrect users back onto the list. If the two remaining people are the last ones, they get matched anyway. This doesn't sound right to me, since I should be able to predict which matches cannot occur based on the list of already existing pairs.
Do you have any ideas on how to get going with building this procedure? Java 8 streams look interesting and might be a way to solve this, but unfortunately I am very new to them.
The solution here was to create a list of tuples containing the old matches, using the GROUP_CONCAT feature of MySQL:
SELECT group_concat(MatchProfiles.ProfileId) FROM Matches
INNER JOIN MatchProfiles ON Matches.MatchId = MatchProfiles.MatchId
GROUP BY Matches.MatchId
old_matches = ((42,52),(12,52),(19,52),(10,12))
After that, I select the candidates and generate a new list of tuples using my pop_random() function:
new_matches = ((42,12),(19,48),(10,36))
When both lists are done, I look at the intersection to find any duplicates:
duplicates = list(set(new_matches) & set(old_matches))
If there are duplicates, I simply run the randomizer again, up to X attempts, before concluding that a valid pairing is impossible.
I know this is not very efficient for a large set of numbers, but my dataset will never be that large, so I think it will be good enough.
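Putting those steps together, a rough Python sketch of the whole procedure might look like the following; pop_random(), the retry limit, and the sample previous pairs are invented, and pairs are stored as frozensets so that (42, 52) and (52, 42) count as the same match.

import random

def pop_random(candidates):
    # Remove and return a random element from the candidate list.
    return candidates.pop(random.randrange(len(candidates)))

def generate_pairs(user_ids, old_matches, max_attempts=100):
    # old_matches: iterable of previous pairs, e.g. built from the GROUP_CONCAT query above.
    # Assumes an even number of user ids; returns None if no valid pairing is found
    # within max_attempts (the "impossible" case).
    old = {frozenset(pair) for pair in old_matches}
    for _ in range(max_attempts):
        candidates = list(user_ids)
        random.shuffle(candidates)
        new_matches = [frozenset((pop_random(candidates), pop_random(candidates)))
                       for _ in range(len(user_ids) // 2)]
        # duplicates = intersection of new and old pairs
        if not set(new_matches) & old:
            return [tuple(pair) for pair in new_matches]
    return None

# Example with the ids from the question (previous pairs invented for the demo):
pairs = generate_pairs([1, 5, 6, 634, 533, 515, 61, 53],
                       old_matches=[(1, 5), (634, 533), (515, 61), (53, 1)])
print(pairs)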

Talend loop for each record

Hi, I am designing a data generation job.
My job looks something like this:
tRowGenerator --> tMap --> tFileOutputDelimited
Let's say my tRowGenerator produces 5 columns with 2 records. I want to iterate over these records, i.e. for each record I want to iterate a certain number of times:
for record 1, iterate 5 times to produce further data;
for record 2, iterate 3 times to produce further data.
Please suggest how to apply this multiply-by-xi logic, where xi can change for each record.
Thanks!
If you want to loop over the data generated by the tRowGenerator, you can use a tLoop, in which you call your business rule to determine the number of loops or when to stop looping.
An example job might look like:
Logic of flow:
row1 is a main connection that takes the generated values to the tFlowToIterate, which stores them in global variables;
the iterate link activates the tLoop, which can use the values stored in the global variables to invoke your business rule (to get the number of loops or to ask whether to continue or stop);
the tLoop activates the tJavaFlex, which uses the stored global variables to produce the output you want and passes it to the tFileOutputDelimited via a main link (row2).
You have to enable the append flag on the tFileOutputDelimited to keep the data from the different loops. If needed, you can add a tFileDelete at the beginning to empty the output file before a new processing round.
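As a language-neutral illustration of that flow (one generated record at a time, a per-record loop count from a business rule, and output appended across loops), here is a small Python sketch; the records and the rule returning xi are invented.

# Plain-Python sketch of the iterate-per-record idea (not Talend code).
records = [
    {"id": 1, "name": "record one"},
    {"id": 2, "name": "record two"},
]

def loops_for(record):
    # Hypothetical business rule: record 1 -> 5 iterations, record 2 -> 3.
    return {1: 5, 2: 3}.get(record["id"], 1)

output_rows = []
for record in records:                      # tFlowToIterate: one record at a time
    for i in range(loops_for(record)):      # tLoop: xi iterations for this record
        output_rows.append(f'{record["id"]};{record["name"]};copy {i + 1}')   # tJavaFlex builds a row
print("\n".join(output_rows))               # appended output: 5 + 3 = 8 rows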
