Custom Spring Batch ItemReader that reads from a JPA repository using a custom query - spring

I am new to Spring Batch and I would like to seek the advice of experienced programmers on how to implement the following:
I am writing an API that triggers a batch task to generate a report from a list of transactions in a JPA repo.
In Spring Batch, there is an ItemReader, ItemProcessor and ItemWriter for each step.
How should I implement my ItemReader such that it can execute the following:

1. Generate a custom query from the parameters set in the jobParameters (obtained from the API query).
2. Use the custom query from step 1 to obtain the list of transactions from the Transactions JPA repo, which will be processed in the ItemProcessor and subsequently turned into reports by the ItemWriter.

Question: How should I write the ItemReader? I am looking at JpaCursorItemReader (not sure if it is the right one) but could not find implementation examples online to refer to. Any help is appreciated. Thank you.
I am at the stage of trying to understand how Spring Batch works, and I hope to get proper guidance from experts in this field on the direction to take to accomplish the task above.

Generate a custom query from the parameters set in the jobParameters (obtained from the API query)
You can define a step-scoped bean and inject job parameters to use them in your query:
@Bean
@StepScope
public JpaCursorItemReader<Transaction> jpaCursorItemReader(
        @Value("#{jobParameters['parameter']}") String parameter) {
    return new JpaCursorItemReaderBuilder<Transaction>()
            .name("transactionItemReader")
            .queryString("..") // use the job parameter in your query here
            // set other properties on the item reader
            .build();
}
You can find more details on step-scoped components in the documentation here: Late Binding of Job and Step Attributes.
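The job parameters themselves come from the API call that launches the job. A minimal sketch of that side, assuming a reportJob bean and a single "parameter" request param (both hypothetical names):

@RestController
public class ReportController {

    private final JobLauncher jobLauncher;
    private final Job reportJob;

    public ReportController(JobLauncher jobLauncher, Job reportJob) {
        this.jobLauncher = jobLauncher;
        this.reportJob = reportJob;
    }

    @PostMapping("/reports")
    public void generateReport(@RequestParam String parameter) throws Exception {
        JobParameters jobParameters = new JobParametersBuilder()
                .addString("parameter", parameter) // read by the step-scoped reader above
                .addLong("run.id", System.currentTimeMillis()) // make each job instance unique
                .toJobParameters();
        jobLauncher.run(reportJob, jobParameters);
    }
}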

Related

Is it possible to add a spring interceptor in a spring security filter chain?

I need to retrieve the path param map in my spring boot application and I noticed that the following line of code:
final Map<String, String> pathVariables = (Map<String, String>) request
.getAttribute(HandlerMapping.URI_TEMPLATE_VARIABLES_ATTRIBUTE);
only works inside a Spring HandlerInterceptor class, I guess because the "path param" abstraction is Spring's.
Problem is that I'm using spring security, defining my own filter chain:
http.addFilterBefore(...).addFilterAfter(myFilterNeedingPathParam) //series of filters
And the last filter needs the path params map. My idea would be to put the interceptor before the filter or, equivalently, make sure that the last filter is executed after it.
Is there a way to do so?
When in the Spring HandlerInterceptor, check the actual class of the request; you may need to cast your request to that. Also look into the source code for that class and see if it is pulling that value from a ThreadLocal of some sort.
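One thing worth knowing: URI_TEMPLATE_VARIABLES_ATTRIBUTE is only populated once the HandlerMapping has run, which happens after the security filter chain. If the filter itself must have the variables, a possible workaround (a sketch; the pattern below is a hypothetical example) is to match the pattern manually using Spring's AntPathMatcher:

// Inside the filter; "/accounts/{accountId}" is a hypothetical pattern.
AntPathMatcher matcher = new AntPathMatcher();
Map<String, String> pathVariables = matcher.extractUriTemplateVariables(
        "/accounts/{accountId}", ((HttpServletRequest) request).getRequestURI());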

Is it possible to have multiple readers for one step in spring batch?

I need to create a Spring Batch job wherein I need to fetch data from 2 different tables.
This will be the reader part.
The problem is I need to have 2 reader queries, and only one of them will be called based on a condition.
But the writer will be the same.
So the tasklet becomes the same basically.
Can I have 2 readers inside one single step which will be called according to the condition?
Something like this inside the Spring Batch XML:

if (condition)
    then reader1
else reader2

...

<reader1 id=".." class="..">
</reader1>

...

<reader2 id=".." class="..">
</reader2>
Have you considered using the conditional control flow patterns Spring Batch offers? I think it could lead to a simpler construction and adhere to some of the core patterns Spring Batch encourages.
You basically program the "condition" that you want a step to be called under. So define two steps, say step1 which has a reader, processor, writer for one object type and step2 which has a reader, processor, writer for the other type. Perhaps the writers are shared between types as well. You then can define control flow like so:
@Bean
public Job myJob() {
    JobBuilder jobBuilder = jobs.get("Spring Batch: conditional steps");
    return jobBuilder.start(determineRoute())
            .on("item1").to(flow1())
            .from(determineRoute()).on("item2").to(flow2())
            .end()
            .build();
}
In the example, determineRoute() is a tasklet step that returns custom ExitStatus values ("item1" or "item2"), and flow1 and flow2 are different flows (or steps) to handle each object type.
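A minimal sketch of what determineRoute() could look like, assuming a StepBuilderFactory named steps and a hypothetical checkCondition() helper:

@Bean
public Step determineRoute() {
    return steps.get("determineRoute")
            .tasklet((contribution, chunkContext) -> {
                // expose a custom ExitStatus that the job's flow definitions match on
                contribution.setExitStatus(
                        checkCondition() ? new ExitStatus("item1") : new ExitStatus("item2"));
                return RepeatStatus.FINISHED;
            })
            .build();
}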
See here in their docs: https://docs.spring.io/spring-batch/docs/current/reference/html/step.html#conditionalFlow
Edit: you can also do something similar with a JobExecutionDecider https://www.baeldung.com/spring-batch-conditional-flow#2-programmatic-branching-withjobexecutiondecider
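For completeness, a sketch of the JobExecutionDecider route; the "type" job parameter is an assumption for illustration:

public class RouteDecider implements JobExecutionDecider {

    @Override
    public FlowExecutionStatus decide(JobExecution jobExecution, StepExecution stepExecution) {
        // route on a job parameter instead of a dedicated tasklet step
        String type = jobExecution.getJobParameters().getString("type");
        return "first".equals(type)
                ? new FlowExecutionStatus("item1")
                : new FlowExecutionStatus("item2");
    }
}

The decider is then wired into the job with .next(decider).on("item1").to(flow1()) and so on, in place of the tasklet step.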

Axon State-Stored Aggregate Test IllegalStateException

PROBLEM: Customer technical limitations force me to use Axon with state-stored Aggregates in PostgreSQL. I try a simple JPA-Entity Axon-Test and get IllegalStateException.
RESEARCH: A simplified project on the case is available at https://gitlab.com/ZonZonZon/simple-axon.git
In my test, on

fixture.givenState(MyAggregate::new)
       .when(command)
       .expectState(state -> {
           System.out.println();
       });
I get
The state of this aggregate cannot be retrieved because it has been modified in a Unit of Work that was rolled back
java.lang.IllegalStateException: The state of this aggregate cannot be retrieved because it has been modified in a Unit of Work that was rolled back
at org.axonframework.common.Assert.state(Assert.java:44)
QUESTION: How to test an aggregate state using Axon and escape the error?
There are some missing parts in your project to let the test run properly. I will try to tackle them as concisely as possible:
Your Command should contain the piece of information that connects it to the Aggregate. @TargetAggregateIdentifier is the annotation provided by the framework that connects a certain field to its @AggregateIdentifier counterpart in your Aggregate. You can read more here: https://docs.axoniq.io/reference-guide/implementing-domain-logic/command-handling/aggregate#handling-commands-in-an-aggregate.
That said, a UUID field needs to be added to your Create command.
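A hedged sketch of what that command might then look like, assuming Lombok (the test further down uses Create.builder() and getters):

import java.util.UUID;
import lombok.Builder;
import lombok.Value;
import org.axonframework.modelling.command.TargetAggregateIdentifier;

@Value
@Builder
public class Create {

    @TargetAggregateIdentifier
    UUID uuid; // connects the command to the aggregate instance

    String login;
    String password;
    String token;
}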
This information will then be passed into the Created event: events are stored and can be processed both by a replay and by an Aggregate re-hydration (upon the client's restart). These are the source of truth for our information.
An @EventSourcingHandler annotated method will be responsible for applying the event and updating the @Aggregate values:

@EventSourcingHandler
public void on(Created event) {
    uuid = event.getUuid();
    login = event.getLogin();
    password = event.getPassword();
    token = event.getToken();
}
The test will then look like:

@Test
public void a_VideochatAccount_Created_ToHaveData() {
    Create command = Create.builder()
            .uuid(UUID.randomUUID())
            .login("123")
            .password("333")
            .token("d00a1f49-9e37-4976-83ae-114726938c73")
            .build();

    Created expectedEvent = Created.builder()
            .uuid(command.getUuid())
            .login(command.getLogin())
            .password(command.getPassword())
            .token(command.getToken())
            .build();

    fixture.givenNoPriorActivity()
           .when(command)
           .expectEvents(expectedEvent);
}
This test will validate the Command part of your CQRS.
I would then suggest separating the Query part from your @Aggregate: you will need to handle events with the @EventHandler annotation placed on a method in a Projection @Component class, and implement the piece of logic that takes care of storing the information in the form that you need in a PostgreSQL @Entity, using the @Repository JPA way, which I am sure you are familiar with.
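A rough sketch of such a projection, with hypothetical entity and repository names:

@Component
public class AccountProjection {

    private final AccountRepository repository; // assumed Spring Data JPA repository

    public AccountProjection(AccountRepository repository) {
        this.repository = repository;
    }

    @EventHandler
    public void on(Created event) {
        // store the query-side view in PostgreSQL in whatever shape you need
        repository.save(new AccountEntity(event.getUuid(), event.getLogin()));
    }
}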
You can find useful information in the ref guide https://docs.axoniq.io/reference-guide/implementing-domain-logic/event-handling following the video example on The Query Model, based on code that can be found in this repo https://github.com/AxonIQ/food-ordering-demo/tree/master
Hope that all is clear,
Corrado.

Returning an object from a Spring Batch job / processor

I have a Spring Batch job that reads in a very large fixed-length file and maps it just fine to an object. I validated all of the data in the associated processing task.
Being rather new to Spring and Spring Batch, I am wondering if it is possible to get a fully populated object out of the job, to be used in a particular case where I am running the job as part of another process (which I would like to have access to the data).
I realize that I could do the above without Batch, and it seems Batch is designed with scope limitations for its purpose.
I could serialize the objects in the processor and go that route, but for my immediate satisfaction I am hoping there is a way around this.
Thanks
In my @Configuration class for the batch processing, I created a class variable (a list of the objects I want to get back) and instantiated it with the no-arg constructor.
My Step, ItemReader and LineMapper are set up to use a list for input. The custom FieldSetMapper takes that list as a constructor parameter and adds to it as the file is read and mapped. Similarly, my custom ItemProcessor takes the list as input and returns it.
Finally I created a ReturnObjectList bean that returns the populated list.
In my main I cast the AnnotationConfigApplicationContext getBean result to a list of that object type. I am now able to use the list of objects generated from the fixed-length file in the scope of my main application.
Not sure if this is a healthy workaround in terms of how Spring Java config is supposed to work, but it does give me what I need.
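A rough sketch of the workaround described above; all names are hypothetical and MyRecord stands in for the mapped object type:

@Configuration
public class JobConfig {

    // shared list, instantiated with the no-arg constructor as described
    private final List<MyRecord> results = new ArrayList<>();

    // the custom FieldSetMapper adds to the shared list as the file is read and mapped
    @Bean
    public FieldSetMapper<MyRecord> recordFieldSetMapper() {
        return fieldSet -> {
            MyRecord record = new MyRecord(fieldSet.readString("field"));
            results.add(record);
            return record;
        };
    }

    // exposed so the calling application can retrieve the populated list after the job runs
    @Bean
    public List<MyRecord> returnObjectList() {
        return results;
    }
}

The main method can then fetch the bean with context.getBean("returnObjectList") and cast it, as the answer describes.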

Passing parameters from one job to another in Spring batch framework

How do I pass values which are fetched from the database in one job as parameters to another job in the Spring framework? Please provide example code.
I'm guessing by jobs you mean scheduled-tasks. I was in the same situation where 2 jobs were dependent on one another eg:
<task:scheduled-tasks>
    <task:scheduled ref="deleteExpiredAccountsJob" method="launch" fixed-delay="100000" />
</task:scheduled-tasks>

<task:scheduled-tasks>
    <task:scheduled ref="emailDeletedAccountsConfirmationJob" method="launch" fixed-delay="100000" />
</task:scheduled-tasks>
What I did was to set a DELETE flag on the ACCOUNTS table to true and have the emailDeletedAccountsConfirmationJob read only the ACCOUNTS with DELETE = true
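In reader terms, that could look roughly like this (column, class and bean names are hypothetical; the flag column is called DELETED here since DELETE is a reserved word in SQL):

@Bean
public JdbcCursorItemReader<Account> deletedAccountsReader(DataSource dataSource) {
    return new JdbcCursorItemReaderBuilder<Account>()
            .name("deletedAccountsReader")
            .dataSource(dataSource)
            // only pick up the rows the first job flagged
            .sql("SELECT id, email FROM ACCOUNTS WHERE DELETED = true")
            .rowMapper(new BeanPropertyRowMapper<>(Account.class))
            .build();
}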
Another solution, if you want to share data between ItemReaders, would be to have a Java Spring configuration class annotated with @Configuration, declare your ItemReaders in this class with the @Bean annotation, and also have a private synchronized Map or List which will be shared between all the threads/jobs or your Spring Batch steps; you then access the synchronized list in your readers.
@Configuration
public class MySpringConfig {

    private final List<String> list = Collections.synchronizedList(new ArrayList<String>());

    @Bean
    @Scope("step")
    public JdbcCursorItemReader<String> readerOne() {
        // add data to the shared list
    }

    @Bean
    @Scope("step")
    public JdbcCursorItemReader<String> readerTwo() {
        // check for certain data in the list; if it exists ... do something
    }
}
In Spring Batch, Jobs are considered distinct use cases with no conceptual relationship imposed by the framework. You could see a job as a standalone processing application.
A typical batch system consists of data sources and data sinks such as a message queue, database or file system, so one job produces data that is intended to be processed by another part of your application.
You may rework your design to use tasks (steps) instead of jobs. This is useful if you have several interdependent processing steps. There is a way to access the JobExecution from a StepExecution.
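For example, a step can promote data to the job's ExecutionContext so that a later step can read it back; the "accountIds" key is a made-up example:

// inside the first step (e.g. a StepExecutionListener or tasklet)
stepExecution.getJobExecution().getExecutionContext().put("accountIds", accountIds);

// inside a later step
@SuppressWarnings("unchecked")
List<Long> ids = (List<Long>) stepExecution.getJobExecution()
        .getExecutionContext().get("accountIds");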
I recommend reading the excellent articles as well as "Spring Batch in Action".
