I am calling the following piece of code to stop the job:
Set<Long> executions = jobOperator.getRunningExecutions("Job-Builder");
jobOperator.stop(executions.iterator().next());
The above call stops the job, but the tasklets keep running. After reading through comments on Stack Overflow, I changed my tasklets to implement StoppableTasklet so that I can stop them from running when I call JobOperator#stop().
My question is that I am not sure how to identify whether a job has been stopped, and how to take the necessary action inside the overridden stop() method when I implement StoppableTasklet. Can you tell me how I can do that?
You don't need to identify if the job has been stopped in your code. You need to implement the logic to stop the current tasklet and this logic will be called by the framework when you stop the job through JobOperator#stop.
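For illustration, here is a minimal sketch of such a tasklet (the class name and the work-loop helpers are hypothetical; the essential parts are the volatile flag and the stop() override):

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.StoppableTasklet;
import org.springframework.batch.repeat.RepeatStatus;

public class MyLongRunningTasklet implements StoppableTasklet {

    // Set by the framework thread when JobOperator#stop is called.
    private volatile boolean stopped = false;

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
        for (int i = 0; i < 1_000_000; i++) { // stands in for your real work loop
            if (stopped) {
                // Exit cleanly; the step and job will end in a STOPPED state.
                return RepeatStatus.FINISHED;
            }
            process(i);
        }
        return RepeatStatus.FINISHED;
    }

    @Override
    public void stop() {
        // Called by the framework when the job is stopped; just raise the flag.
        stopped = true;
    }

    private void process(int item) {
        // hypothetical unit of work
    }
}

The key point is that execute() checks the flag between units of work, so the tasklet returns promptly once stop() has been invoked.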
1st Question:
So I am using Spring Cloud Eureka and the DistributedCommandBus, set up via the following:
public CommandRouter springCloudCommandRouter(DiscoveryClient discoveryClient, Registration localServiceInstance) { ... }
public CommandBusConnector springHttpCommandBusConnector(@Qualifier("localSegment") CommandBus localSegment, RestOperations restOperations, Serializer serializer) { ... }
public DistributedCommandBus springCloudDistributedCommandBus(CommandRouter commandRouter, CommandBusConnector commandBusConnector) { ... }
And my question for this part is: how can I show that this is working? I have two K8s pods running the above code, and I have seen one run the @CommandHandler and the other run the @EventSourcingHandler, but did not see anything in the logs to give any indication that it is using the bus.
I just want to be able to show that it is "working", as I have been asked to do.
The Eureka part is working and I see all the info on said dashboard.
Edit - removed 2nd question to ask in another thread
To keep my answer focused, I'll only provide a suggestion for your first question, which summarizes to:
How can I show that my DistributedCommandBus, set up with Eureka, is actually routing commands to different instances?
I would suggest to set up some logging around this.
That way, you could log when the message is dispatched from Node 1 and when it is handled by Node 2.
Ideal for this would be to register the LoggingInterceptor as a MessageHandlerInterceptor and MessageDispatchInterceptor.
To do so, you will have to register it on the DistributedCommandBus, but also on the "local segment" CommandBus. The DistributedCommandBus will be in charge of dispatching it and thus calling the LoggingInterceptor upon dispatching. The local segment/CommandBus is in charge of providing the command to a Command Handler in the right JVM and as such will call the LoggingInterceptor upon handling.
The sole downside is that the LoggingInterceptor will only become both a handler and a dispatch interceptor as of Axon Framework release 4.2.
Thus, for now, you will have to make do with it only being a handler interceptor.
However, this should suffice as well, as the LoggingInterceptor will still log upon handling the command.
This will then only occur on the node which actually handles the command.
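As a sketch (assuming Axon 4's builder API and that your local segment is a SimpleCommandBus), registering it as a handler interceptor could look like this:

import org.axonframework.commandhandling.CommandBus;
import org.axonframework.commandhandling.SimpleCommandBus;
import org.axonframework.messaging.interceptors.LoggingInterceptor;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;

@Bean
@Qualifier("localSegment")
public CommandBus localSegment() {
    CommandBus commandBus = SimpleCommandBus.builder().build();
    // Logs every command as it is handled on this node, making it visible
    // in the logs which JVM actually processed the command.
    commandBus.registerHandlerInterceptor(new LoggingInterceptor<>());
    return commandBus;
}

With this in place, dispatching a command on one pod and seeing the LoggingInterceptor's output on the other pod demonstrates that the DistributedCommandBus routed the command across nodes.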
Hope this helps!
I have a simple Spring Boot application which reads from Kafka and writes to Kafka. I wrote a SpringBootTest using an EmbeddedKafka to test all that.
The main problem is: Sometimes the test fails because the test sends the Kafka message too early. That way, the message is already written to Kafka before the Spring application (or its KafkaListener to be precise) is ready. Since the listener reads from the latest offset (I do not want to change any config for my test - except bootstrap.servers), it will not receive all messages in that test.
Does anyone have an idea how I could know inside the test, that the KafkaListener is ready to receive messages?
Only way I could think of is waiting until /health comes available but I have no idea whether I can be sure that this implies the KafkaListener to be ready at all.
Any help is greatly appreciated!
Best regards.
If you have a KafkaMessageListenerContainer instance, then it is very easy to use org.springframework.kafka.test.utils.ContainerTestUtils.waitForAssignment(Object container, int partitions).
https://docs.spring.io/spring-kafka/api/org/springframework/kafka/test/utils/ContainerTestUtils.html
e.g. calling ContainerTestUtils.waitForAssignment(container, 1); in your test setup will block until the container has been assigned 1 partition.
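If your listeners are declared with @KafkaListener rather than built by hand, one way (a sketch, assuming one partition per topic on the embedded broker) is to pull the containers from the KafkaListenerEndpointRegistry:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.kafka.test.utils.ContainerTestUtils;

@Autowired
private KafkaListenerEndpointRegistry registry;

@Before
public void waitForListeners() {
    // Block until every listener container has been assigned its partition,
    // so messages sent afterwards cannot arrive before the listener is ready.
    for (MessageListenerContainer container : registry.getListenerContainers()) {
        ContainerTestUtils.waitForAssignment(container, 1);
    }
}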
So, I just read about @PostConstruct and it turns out that you can easily use this within the test as well:
private volatile boolean applicationReady = false;

@PostConstruct
public void checkApplicationReady() {
    applicationReady = true;
}
Now I added an @Before method to wait until that flag is set to true, as sketched below.
So far this seems to work very nicely!
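For completeness, the waiting part could look like this (a sketch; the polling interval is arbitrary, and applicationReady is the flag from the snippet above):

@Before
public void waitUntilApplicationIsReady() throws InterruptedException {
    // Poll the flag set by the @PostConstruct callback above.
    while (!applicationReady) {
        Thread.sleep(100);
    }
}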
I'm looking for some general opinions and advice on testing a Spring Batch step and step execution.
My basic step reads in from an api, processes into an entity object and then writes to a DB. I have tested the happy path, that the step completes successfully. What I now want to do is test the exception handling when data is missing at the processor stage. I could test the processor class in isolation, but I'd rather test the step as a whole to ensure the process failure is reflected correctly at step/job level.
I've read the Spring Batch testing guidelines and, if I'm honest, I'm slightly lost in them. Is it possible to use StepScopeTestUtils.doInStepScope or to update the StepExecution to test this scenario? Ideally I'd force the reader to return faulty data before the processor kicks in.
Any advice would be greatly appreciated.
The best approach depends on the scope of your test. Reading a little between the lines here, I assume you are using a Spring integration test, setting up a Spring context and using JobLauncherTestUtils to start a job or a step.
I think the easiest way is to replace one of your beans with a mock that triggers the error scenario. Using Mockito, this can be done by adding something like the following to your test configuration:
@Bean
public ReaderDataRepository dataApi() {
    return mock(ReaderDataRepository.class);
}
This bean then overrides the actual implementation. In the test setup you can then configure the mock very explicitly:
@Autowired
private ReaderDataRepository mockedRepository;

@Before
public void setUp() {
    when(mockedRepository.getData()).thenReturn(faultyData());
}
This involves very little manipulation of Spring 'magic' and very explicitly defines the error from within the test.
I have a Spring Batch job that takes a long time to execute. After it had been executing for a while I decided I wanted to stop it, but whenever I restart the server the job continues executing after the server comes back up.
I want to know where Spring Batch saves the state, so that I can possibly delete it and stop that from happening.
I found out there are properties I can configure to make the job non-restartable, and I will use that going forward, but for now I just need to make sure the job stops for good.
You can see the documentation here, which shows and describes the Spring Batch meta-data tables.
If you just want to prevent a restart of the job, the config below should help you; preventRestart() marks the job as non-restartable:
https://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/core/job/builder/JobBuilderHelper.html
@Bean
public Job myJob(JobBuilderFactory jobs) throws Exception {
    return jobs.get("myJob")
            .flow(step1())
            .build()
            .preventRestart()
            .build();
}
Does a Spring controller obey synchronization when the synchronized keyword is added to a method?
When I tried making the method synchronized, it was seemingly not blocked: two threads were executing the same method at the same time. I checked this with Thread.sleep(50000).
I have used @Scope("request").
You'll need to add appropriate synchronization around critical sections in your code. Note that with @Scope("request") every request gets its own controller instance, so a synchronized instance method locks on that per-request instance and will not block other requests; the lock has to be on state shared across instances.
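As a sketch (the controller and method names are made up), one option is to synchronize on a static lock object, which is shared regardless of scope:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ReportController {

    // static, so it is shared by all instances of this controller,
    // including the per-request instances created by @Scope("request")
    private static final Object LOCK = new Object();

    @GetMapping("/report")
    public String generateReport() throws InterruptedException {
        synchronized (LOCK) {
            // Only one request at a time executes this critical section.
            Thread.sleep(5000); // stands in for the long-running work
            return "done";
        }
    }
}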