How can I execute a testsuite containing multiple flows? - ipaf

How can I execute a testsuite containing multiple flows in PAF?

To execute multiple flows, put the flow ids in the init file under the flow.ids= property. Each flow id should be separated by a comma (,), and each flow will be executed once if loop.flow = 1.
E.g.:
flow.ids=flow1,flow2,flow3
All the flows should be in the same .xml file.
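For instance, combining both properties described above, a minimal init snippet (values are illustrative) would look like:

flow.ids=flow1,flow2,flow3
loop.flow=1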

Related

How to start a Spring Batch step inside an ItemProcessor

Here is my scenario:
a1. read records from table A
a2. process these records one by one and generate a new temp table B for each record
b1. read records from table B, process these records' data and save it in a file
a3. tag the record from table A with finished status
A pseudo code to describe this scenario:
foreach item in items:
1. select large amount data where id=item.id then save the result to temp table_id
2. process all records in table_id then write them to a file
3. update item status
4. send message to client
This is my design:
create a Spring Batch job, set a date as its parameter
create a step1 to read records from table A
create a step2 to read records from temporary table B and start it in the processor of step1
I checked the Spring Batch docs but didn't find anything about how to nest a step inside another step's processor. It seems the Step is the minimum unit in Spring Batch and it cannot be split.
Update
Here is the pseudo code about what I did now to solve the problem:
(I'm using Spring Boot 2.7.8)
def Job:
PagingItemReader(id) :
select data from temp_id
FlatFileItemWriter:
application implements CommandLineRunner:
items = TableARepository.SelectAllBetweenDate
for item : items:
Service.createTempTableBWithId(item.id)
Service.loadDataToTempTable(item.id)
job = createJob(item.id)
launcher.run(job)
update item status
A step is part of a job. It is not possible to start a step from within an item processor.
You need to break your requirement into different steps, without trying to "nest" steps into each other. In your case, you can do something like:
create a Spring Batch job, set a date as its parameter
create a step1 to read records from table A and generate a new temp table B
create a step2 to read records from temporary table B, process these records' data, save it in a file, and tag the record from table A with finished status
The writer of step2 would be a composite writer: it writes data to the file AND updates the status of processed records in table A. This way, if there is an issue writing the file, the status of records in table A would be rolled back (in fact, they are not processed correctly as the write operation failed, in which case they need to be reprocessed).
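A minimal sketch of such a composite writer, assuming the file writer and the table A status writer are already defined as beans (TableARecord and the bean names are illustrative, not from the original answer):

import java.util.Arrays;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.support.CompositeItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class Step2WriterConfig {

    @Bean
    public CompositeItemWriter<TableARecord> step2Writer(ItemWriter<TableARecord> fileWriter,
                                                         ItemWriter<TableARecord> statusWriter) {
        // Delegates are called in order within the chunk transaction:
        // write the chunk to the file first, then update the status of those records in table A
        CompositeItemWriter<TableARecord> writer = new CompositeItemWriter<>();
        writer.setDelegates(Arrays.asList(fileWriter, statusWriter));
        return writer;
    }
}

// Illustrative placeholder for one row read in step 2
class TableARecord {
}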
You should have a single job with two steps, stepA and stepB. Spring Batch provides facilities for controlling the flow of step execution, so you can execute the two steps sequentially: once stepA reaches its writer and writes its data, stepB will start. You can configure stepB to read the data written by stepA.
You can also pass data between steps using the Job Execution context: once stepA ends, put data in the Job Execution context and it can be accessed in stepB once it starts. This helps in your case because you can pass the identifier of the item stepA picked for processing to stepB, so that stepB has this identifier in its writer to update the item's final status.
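A hedged sketch of that handoff using Spring Batch's ExecutionContextPromotionListener (the key name "item.id" and the bean names are illustrative):

import org.springframework.batch.core.listener.ExecutionContextPromotionListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PromotionConfig {

    @Bean
    public ExecutionContextPromotionListener promotionListener() {
        // Promotes the "item.id" key from stepA's step execution context
        // to the job execution context when stepA completes
        ExecutionContextPromotionListener listener = new ExecutionContextPromotionListener();
        listener.setKeys(new String[] {"item.id"});
        return listener;
    }
}

// In stepA (e.g. in its writer or a StepExecutionListener), store the value:
//   stepExecution.getExecutionContext().put("item.id", itemId);
// Register promotionListener() on stepA, then in stepB read it back with late binding:
//   @Value("#{jobExecutionContext['item.id']}") Long itemId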

How to skip the target write in case zero record count in Informatica Cloud (IICS)?

I have a use case to create an incremental data ingestion pipeline from a database to AWS S3. I have created a pipeline and it is working fine except for the scenario where no incremental data is found.
In the case of zero record count, it writes a header-only file (Parquet). I want to skip the target write when there are no incremental records.
How can I implement this in IICS?
I have already tried a Router transformation with the condition that records are written to the target only if the record count > 0, but it is still not working.
First of all, the target file gets created even before any data is read from the source. This is to ensure the process has write access to the target location. So even if there is no data to store, an empty file will get created.
The possible ways out here are to:
Have a command task check the number of lines in the output file and delete it if there is just a header. This requires the file to be created locally, verified, and uploaded to S3 afterwards, e.g. using a Mass Ingestion task - all invoked sequentially via a taskflow
Have a session that will first check if there is any data available, and only then run the data extraction.

Create SPRING BATCH JOBS at RUN TIME and Execute them

I have a requirement where I will take input CSV files from one folder, process them (DB lookup and validation) one after another, and generate a new output file for each input file. I need to choose the input files at run time based on a DB query on the user object, which tells me which files are qualified (e.g. out of 400 files in the folder, 350 may be qualified to process, and I need to generate 350 output files). I want to use Spring Batch and want to create one job per file. Any reference or sample code for creating jobs at run time and executing them?
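As a hedged sketch of one common approach (not from the original thread; UserFileRepository and its method are hypothetical): define a single job and launch it once per qualified file with the file name as a job parameter, rather than creating a separate job definition per file:

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;

public class QualifiedFileJobRunner {

    private final JobLauncher jobLauncher;
    private final Job fileProcessingJob;                  // one reusable job definition
    private final UserFileRepository userFileRepository;  // hypothetical: returns the qualified file names

    public QualifiedFileJobRunner(JobLauncher jobLauncher, Job fileProcessingJob,
                                  UserFileRepository userFileRepository) {
        this.jobLauncher = jobLauncher;
        this.fileProcessingJob = fileProcessingJob;
        this.userFileRepository = userFileRepository;
    }

    public void runQualifiedFiles() throws Exception {
        // The DB query on the user object decides at run time which files are qualified
        for (String fileName : userFileRepository.findQualifiedFileNames()) {
            JobParameters params = new JobParametersBuilder()
                    .addString("input.file", fileName)             // read by the step via #{jobParameters['input.file']}
                    .addLong("run.id", System.currentTimeMillis()) // makes every launch a distinct job instance
                    .toJobParameters();
            jobLauncher.run(fileProcessingJob, params);            // one job execution per input file
        }
    }

    // Hypothetical repository interface, shown only to keep the sketch self-contained
    public interface UserFileRepository {
        java.util.List<String> findQualifiedFileNames();
    }
}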

Spring Batch: How to combine multiple tasks or get 2 tasks in a single step

I am a bit puzzled here; I need to do a task similar to the following scenario with Spring Batch:
Read Person from repository ==> I can use RepositoryItemReader
(a) Generate CSV file (FlatFileItemWriter) and (b) save CSV file in DB with the generated date (I can use RepositoryItemWriter)
But I am struggling to understand how I can pass the CSV file generated in 2a to be saved in the DB in 2b.
Consider that the CSV file has approximately 1000+ Person records which are processed for a single day.
Is it possible to merge 2a & 2b? I thought about CompositeItemWriter, but since we are combining 1000+ persons in the CSV file, it won't work.
Using a CompositeItemWriter won't work, as you would be trying to write an incomplete file to the database for each chunk.
I would not merge 2a and 2b. Make each step do one thing (and do it well):
Step 1 (chunk-oriented tasklet): read persons and generate the file
Step 2 (simple tasklet): save the file in the database
Step 1 can use the job execution context to pass the name of the generated file to step 2. You can find an example in the Passing Data To Future Steps section. Moreover, with this setup, step 2 will not run if step 1 fails (which makes sense to me).
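A sketch of the step 2 tasklet reading the file name that step 1 placed in the job execution context (the key "generated.file.name" and GeneratedFileRepository are assumptions for illustration):

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

public class SaveGeneratedFileTasklet implements Tasklet {

    private final GeneratedFileRepository repository; // hypothetical repository storing file name + generated date

    public SaveGeneratedFileTasklet(GeneratedFileRepository repository) {
        this.repository = repository;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
        // Step 1 promoted the generated file name to the job execution context
        String fileName = (String) chunkContext.getStepContext()
                .getJobExecutionContext().get("generated.file.name");
        repository.save(fileName, java.time.LocalDate.now()); // save the file reference with the generated date
        return RepeatStatus.FINISHED;
    }

    // Hypothetical repository interface, shown only to keep the sketch self-contained
    public interface GeneratedFileRepository {
        void save(String fileName, java.time.LocalDate generatedDate);
    }
}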

Spring Batch & Integration Flat File Generation Pattern

We have a use case where we receive data in flat files which we load into an Oracle DB using Spring Batch. Post data load in Oracle, we have to distribute the data in form of flat files to several consumers. The data selection criteria depends on some pre-decided values in some fields of the data.
We have a design in place which generates a list containing objects that can be passed to a Spring Batch job as job parameters to generate the flat files needed to be sent to the data consumers.
Using a Splitter component, I can put the individual objects into a channel and plug in a JobLaunchingGateway to launch a batch job to generate each flat file.
I need help on how I can launch multiple batch jobs in parallel using JobLaunchingGateway so that I can generate the files in parallel.
A setup is already in place to FTP the files to consumers. We do not need to worry about that.
Use an ExecutorChannel with a task executor before the JobLaunchingGateway.
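A minimal Java DSL sketch of that wiring (bean and channel names are illustrative, and it assumes the splitter output is already one JobLaunchRequest per object); the executor channel hands each request to a worker thread, so the gateway launches the jobs in parallel:

import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.integration.launch.JobLaunchingGateway;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Configuration
public class ParallelJobLaunchConfig {

    @Bean
    public JobLaunchingGateway jobLaunchingGateway(JobLauncher jobLauncher) {
        return new JobLaunchingGateway(jobLauncher);
    }

    @Bean
    public IntegrationFlow parallelJobLaunchFlow(JobLaunchingGateway jobLaunchingGateway) {
        return IntegrationFlows.from("jobRequestsChannel")                         // receives the list of JobLaunchRequest objects
                .split()                                                           // one message per request
                .channel(c -> c.executor(new SimpleAsyncTaskExecutor("job-")))     // ExecutorChannel: hand off to worker threads
                .handle(jobLaunchingGateway)                                       // each thread launches its own batch job
                .get();
    }
}

Depending on your Spring Integration version, IntegrationFlow.from may be preferred over IntegrationFlows.from, and a thread-pool TaskExecutor can replace SimpleAsyncTaskExecutor to cap the number of concurrent jobs.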
