Reactor Flux with blockLast(): Spring Boot integration test never spins up

I have a Kafka-processing Spring Boot app with a method that runs at application startup (via ApplicationRunner) and spins up a Flux that ends with blockLast(). I use blockLast() because nothing happened when I called subscribe(): subscribe() doesn't block the main thread, which I discovered can (and will) complete before the Flux emits any elements.
Now that I'm creating integration tests for this method, the problem I'm running into is that the context/app never fully spins up, so my test code is never executed; application startup reaches a certain point and hangs forever. When I change blockLast() to subscribe(), the test code runs (though I'm not sure the Flux code it exercises runs correctly this way, as so far I only have a trivial dummy test) and I can see that my primary method with the Flux code is executed. Does anyone have ideas for how to write integration tests in this scenario?
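For context, the startup code looks roughly like this (a minimal sketch of the pattern described above; kafkaFlux() and process() are illustrative placeholders, not the actual code):

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;
import reactor.core.publisher.Flux;

@Component
public class KafkaProcessingRunner implements ApplicationRunner {

    @Override
    public void run(ApplicationArguments args) {
        kafkaFlux()
            .doOnNext(this::process)
            .blockLast();   // blocks the startup thread, so the @SpringBootTest context never finishes starting
        // with .subscribe() instead, run() returns immediately and the main thread may exit before anything is emitted
    }

    // placeholders standing in for the real Kafka-backed Flux and record handler
    private Flux<String> kafkaFlux() { return Flux.empty(); }
    private void process(String record) { }
}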

I had a similar problem, and I worked around it by dispatching the polling task to an executor service:
@PostConstruct
public void init() {
    // hand the blocking polling loop off to a separate thread so startup can complete
    this.executorService.submit(() -> pollingTask());
}
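For completeness, the executor itself might be set up and torn down like this (an illustrative sketch, not part of the original answer; uses java.util.concurrent.Executors and javax.annotation.PreDestroy):

private final ExecutorService executorService = Executors.newSingleThreadExecutor();

@PreDestroy
public void shutdown() {
    executorService.shutdownNow();   // interrupt the polling loop when the application context closes
}

Because the blocking poll now runs on its own thread, application startup (and the @SpringBootTest context) can complete normally.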

Related

@SpringBootTest loads application context and runs indefinitely without running test case code

I'm trying to develop a library and write integration tests using @SpringBootTest.
I'm supplying a custom @SpringBootApplication class, and when I trigger the tests, the test case starts running, the Spring context loads (banner, Hibernate logs) and then gets stuck forever; it never comes back to the test case code to run it. I've enabled debug logs but found nothing useful, so I'm not sure where it's going wrong.

Can we invoke a method in Spring when application startup fails

I have a situation where I need to perform certain tasks when my Spring Boot application fails to start; basically, I want to release various resources. I tried using the @PreDestroy annotation, but it did not work because the application had not started yet. Is there any way to perform a few actions when a Spring Boot application fails to start?
Spring relies heavily on its thread pools and application context when the program starts, so when the main program fails, Spring cannot manage the standard Spring beans. At that point you can only start a plain new thread (implementing Runnable) from the main class, with no access to Spring resources; only simple things like getClass().getClassLoader().getResourceAsStream(...) are available there.
However, you can write an independent Java agent (attached with -javaagent) to release resources; see java.lang.instrument.Instrumentation.
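A rough sketch of that agent idea, assuming a JVM shutdown hook registered from premain is enough to run the cleanup even when the Spring context never comes up (class and jar names here are made up for illustration):

import java.lang.instrument.Instrumentation;

public class CleanupAgent {

    // the agent jar's manifest needs a Premain-Class: CleanupAgent entry
    public static void premain(String agentArgs, Instrumentation inst) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            // release files, connections, native handles, etc.;
            // this runs on JVM exit whether or not the application started successfully
        }));
    }
}

// launched with: java -javaagent:cleanup-agent.jar -jar app.jar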

Spring batch - stop job running in different JRE

Background:
I created a stop job which finds running executions of the specified job, like this:
jobExplorer.findRunningJobExecutions("job_A")
and then, for each execution of job_A it calls:
jobOperator.stop(execution.getId());
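Put together, the stop job boils down to something like this (a sketch; the jobExplorer and jobOperator beans are assumed to be injected):

for (JobExecution execution : jobExplorer.findRunningJobExecutions("job_A")) {
    jobOperator.stop(execution.getId());   // this call produces the warning shown below
}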
Issue
When I call the above stop() method, even though it eventually accomplishes what I want, it still throws an exception:
WARN o.s.b.c.l.s.SimpleJobOperator [main] Cannot find Job object
org.springframework.batch.core.launch.NoSuchJobException: No job configuration with the name [job_A] was registered
at org.springframework.batch.core.configuration.support.MapJobRegistry.getJob(MapJobRegistry.java:66) ~[spring-batch-core-3.0.6.RELEASE.jar:3.0.6.RELEASE]
at org.springframework.batch.core.launch.support.SimpleJobOperator.stop(SimpleJobOperator.java:403) [spring-batch-core-3.0.6.RELEASE.jar:3.0.6.RELEASE]
The Cause
This happens when the stop() method tries to locate job_A in the JobRegistry.
So job_A was found in the JobRepository, because the repository looks in the database, but it was not found in the JobRegistry, which is a local cache of the job beans created within this runtime environment. Since job_A is running within a different runtime instance, it was never registered here, and the lookup threw an error.
Concern
Even though job_A stops, I am still concerned about what I may have missed because of the exception.
I have searched this issue and found only general answers on how to stop a job; I did not find anyone explaining how to stop a running job owned by another runtime.
Any answers would be greatly appreciated.
The JobOperator isn't intended to orchestrate distributed batch environments like you're attempting to do. You really have two options:
Use the JobRepository directly - The part that actually causes the job to stop in the remote JVM is that the JobRepository is updated and the running job in the other JVM checks that periodically. Instead of using the JobOperator to accomplish this, just use the JobRepository directly to make the update (see the sketch after these options).
Use a more robust orchestration tool like Spring Cloud Data Flow - This kind of orchestration (deploying, starting, stopping, etc) for jobs (via Spring Cloud Task) is what Spring Cloud Data Flow is for.
You can read more about Spring Cloud Data Flow at the website: https://cloud.spring.io/spring-cloud-dataflow/
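A minimal sketch of the first option, assuming jobExplorer and jobRepository beans are available (the code is illustrative, not from the answer itself):

import java.util.Date;
import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.JobExecution;

for (JobExecution execution : jobExplorer.findRunningJobExecutions("job_A")) {
    execution.setStatus(BatchStatus.STOPPING);   // the JVM that owns the job polls for this status
    execution.setLastUpdated(new Date());
    jobRepository.update(execution);             // persist the flag through the shared job repository database
}

This avoids the JobRegistry lookup entirely, so no NoSuchJobException is thrown.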
In addition to what Michael mentioned, you can solve this by adding some interface to the application that lets you pass commands to start or stop your job - something like a web service exposing an endpoint to stop it. The catch is that handling this in a clustered system may be a bit tricky.

Spring @Async - no data found in integration test

I'm trying to unit (integration) test a method annotated with Spring's @Async.
The test sets up some data in an in-memory H2 database, then runs the asynchronous method. The asynchronous code does not see the test data :O
Removing @Async fixes the problem.
Any help? :)
I had the same error. The solution was quite simple for me: I had not put a COMMIT; at the end of my data_init-h2.sql.
I presume you did not put one there either. If you think about it, this is quite logical: your main thread opens a transaction but does not actually commit it to H2. Spring fires up another thread, and the @Async method runs there in a separate transaction.
Because of the missing commit, you do not see the data changes on that other thread. On the main thread you can see your data changes even before they are committed, because you are inside that transaction.
The transaction isn't propagated like it was before your @Async.
@Async and @Transactional: not working
Your test could commit the data and delete it on either side of the test, giving up Spring's automated rollback inside test @Transactional methods (see the sketch below).
You could create a default-access method that the async method calls into, which your test could also call directly, though you would then no longer be testing the async behaviour.
There's likely a nicer Spring approach that supports what you need by making the transactional data visible, but I don't have it.
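One way to do the commit-and-clean-up approach from the first suggestion is a TransactionTemplate, committing the fixture in its own transaction before invoking the async method (a sketch; the userRepository, User, and service names are made up for illustration, and org.springframework.transaction.support.TransactionTemplate is assumed to be available):

@Autowired
private TransactionTemplate transactionTemplate;

@Test
public void asyncMethodSeesCommittedData() throws Exception {
    // commit the fixture in its own transaction so the @Async thread's transaction can see it
    transactionTemplate.execute(status -> userRepository.save(new User("alice")));

    service.processAsync();   // runs on a separate thread in a separate transaction

    // ... await completion and assert ...

    // clean up explicitly, since committed data will not be rolled back automatically
    transactionTemplate.executeWithoutResult(status -> userRepository.deleteAll());
}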

JTA Callbacks in Spring

Is it possible to register some kind of callback with a JTA transaction in a Spring application?
I've got some mock services that are standing in for remote services that belong to another application that's normally accessed using Spring's HttpInvoker. These mock services model data in-memory in a trivial fashion using Maps and the like.
The unit tests don't necessarily know which of these services might get used; the service the test case is targeting might use them behind the scenes.
The unit tests are transactional, and Spring's SpringJUnit4ClassRunner will roll back the transaction after each test, meaning that the state of our unit test database is preserved between tests.
How can I roll back the state of this custom in-memory service implementation? If there is a way of finding out whether a transaction is currently in progress, I was hoping there would be a way of registering a callback with the TransactionManager to be executed before the transaction completes.
I don't think it's a good idea to clean up a test mock in such an implicit way - tests usually perform cleanup explicitly.
However, if you really want to, take a look at TransactionSynchronizationManager.registerSynchronization().
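A minimal sketch of that approach, assuming the mock service exposes some reset() method (an illustrative name) that clears its in-memory maps:

import org.springframework.transaction.support.TransactionSynchronizationAdapter;
import org.springframework.transaction.support.TransactionSynchronizationManager;

// inside the mock service, the first time state is modified during a transaction:
if (TransactionSynchronizationManager.isSynchronizationActive()) {
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
        @Override
        public void afterCompletion(int status) {
            reset();   // clear the in-memory maps after the test transaction commits or rolls back
        }
    });
}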
