How to disconnect from RabbitMQ after each test? - Spring

I have integrated RabbitMQ into a Spring application. There are two SpringRunner tests which assert whether the AMQP receiver receives a message. The tests connect to a RabbitMQ broker running in a separate process.
The problem is that the application context loads on the first test and registers a consumer on the queue, but does not disconnect after the test completes.
When the second test runs, its application context also registers a consumer, but any messages sent to the exchange as part of the second test still go to the consumer registered by the application context from the first test.
Both tests run sequentially.
Is there a way to kill the first context completely before the second test starts, so that there is just one consumer at a time? Or any other way to solve the problem?
Thank you
I tried @DirtiesContext before the test, but it did not help.

Well, to be honest, a @DirtiesContext at the level of all the test classes, alongside @RunWith(SpringRunner.class), is the way to go.
The ListenerContainer is an active component which starts its own threads, so even when you are done with your test, that doesn't mean its background threads are stopped. For this purpose you indeed have to use @DirtiesContext on every test class to ensure that all the application contexts are closed after the tests finish. That also ensures that their listener containers are stopped.
It is not enough to place @DirtiesContext on just one of your test classes, because there is no guarantee in which order they are run. So apply it to as many of your test classes as possible to avoid this and similar race conditions.
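For reference, a minimal sketch of what that looks like on a test class (exchange, routing key and class names here are illustrative, not from the question):

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.test.annotation.DirtiesContext;
    import org.springframework.test.context.junit4.SpringRunner;

    @RunWith(SpringRunner.class)
    @SpringBootTest
    @DirtiesContext // close this context (and stop its listener containers) after the class
    public class FirstReceiverIT {

        @Autowired
        private RabbitTemplate rabbitTemplate;

        @Test
        public void receiverGetsMessage() {
            rabbitTemplate.convertAndSend("some.exchange", "some.routing.key", "hello");
            // ... assert that the listener registered by this context received the message
        }
    }

With the default classMode (AFTER_CLASS) the context is closed once per test class, which is usually the cheapest place to put the annotation.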

Related

How to get notified in SpringBoot about an initiated shutdown

Since its 2.3 release, Spring Boot has a feature called "graceful shutdown". When a certain signal is received by the JVM, Spring Boot executes some code (because it registers a callback within the JVM) so that the application is shut down gracefully.
In the course of the shutdown process, the application context is closed, traffic is no longer accepted, beans get destroyed, etc. This is described in this section of the Spring Boot docs and elsewhere.
My question is: is it possible to get notified about the fact that Spring Boot has initiated a shutdown? By "get notified" I mean (ideally) this: I want to have a bean which is part of the application context. This bean should get a notification (one of its methods should be called, or via events) when the shutdown is initiated but before any actions are taken in that direction. I.e. @PreDestroy and other lifecycle callbacks are too late IMO. My desired notification should come earlier (while traffic is still accepted).
I could of course register my own callback within the JVM, but it wouldn't look good, and I could not be sure whether my callback is executed before or after Spring Boot's one.
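Just to illustrate the "own callback within the JVM" idea, a bare sketch (and exactly because of the ordering problem above I would rather avoid it):

    // Sketch only: a plain JVM shutdown hook registered next to the Spring Boot app.
    // Its ordering relative to Spring Boot's own shutdown hook is not guaranteed.
    public class ShutdownHookExample {
        public static void main(String[] args) {
            Runtime.getRuntime().addShutdownHook(new Thread(() ->
                    System.out.println("JVM shutdown initiated")));
            // ... start the Spring Boot application here
        }
    }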
Thank you for any hints.

How to Prevent Message Stealing in SpringBoot Integration Tests with RabbitMQ Testcontainer?

Imagine the following setup:
A Spring Boot application with a RabbitListener on a queue
Two Spring Boot integration tests with a RabbitMQ Testcontainer
Test A does some arbitrary test cases which require RabbitMQ
Test B has a slightly different test configuration, sends a message from the test code to the application via RabbitMQ and expects something to happen.
Because of the different configuration, Test A will have a different (and cached) Spring context than Test B.
When Test A runs, a RabbitMQ consumer is started. After the test, the Spring context is cached and the Rabbit consumer is still running.
When Test B then runs, a second RabbitMQ consumer is started in the second Spring context. When the test now sends a message, sometimes the consumer from context A "steals" the message and thus test B fails.
I could only "fix" this by running all RabbitMQ integration tests with @DirtiesContext, but that makes the test suite very, very slow.
Is there anything else that can be done to prevent this? Is there some kind of Spring context "pause" that can be invoked to stop the consumers temporarily until they are resumed when the context is reused? A rough sketch of what I have in mind is shown below.
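For illustration only, the kind of "pause" I mean could look roughly like this, using Spring AMQP's RabbitListenerEndpointRegistry to stop and restart the @RabbitListener containers of one context; how to hook this into the cached-context test lifecycle is exactly the open part:

    import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Component;

    // Sketch only: a helper bean a test could call to pause/resume the
    // @RabbitListener containers belonging to its own application context.
    @Component
    public class ListenerContainerSwitch {

        @Autowired
        private RabbitListenerEndpointRegistry registry;

        public void pauseConsumers() {
            registry.stop();   // stops all listener containers registered in this context
        }

        public void resumeConsumers() {
            registry.start();  // starts them again when this context is reused
        }
    }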

JUnit and jms - events fired are not consumed in the middle of test

We are using Spring with JUnit, JMS (ActiveMQ) and MySQL.
We would like to create some JUnit tests after whose execution the database is rolled back.
In order to achieve that we are using the @Transactional annotation on each test.
The problem is that one of our tests calls a service that sends a JMS message (in the middle of the test), but the event is consumed only after the test ends (end of transaction, maybe?).
That's why the assertion at the end of the test fails.
Any ideas why the event is not consumed right away? (P.S. we tried to use sleep in order to let the event be consumed; it's not working.)
Firstly, this is not a unit test ... that's fine ... There are two reasons the message is not consumed:
The transaction in which the message is sent is not committed. Because of this, the lock on the message on the server side is not released.
There is a timing issue and the test ends too soon, without waiting for the callback.
That's the whole point of transactions. The message is not available for consumption until the transaction commits (otherwise how could you roll it back if someone has already seen it?). You can do the send in a new transaction (Propagation.REQUIRES_NEW) if you don't want the send to be part of the encompassing transaction.
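A minimal sketch of that, assuming the send goes through a Spring bean of your own (queue name and class names are illustrative):

    import org.springframework.jms.core.JmsTemplate;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Propagation;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class EventSender {

        private final JmsTemplate jmsTemplate;

        public EventSender(JmsTemplate jmsTemplate) {
            this.jmsTemplate = jmsTemplate;
        }

        // Runs in its own transaction, so the message is committed (and visible to the
        // consumer) even while the surrounding @Transactional test is still open.
        @Transactional(propagation = Propagation.REQUIRES_NEW)
        public void send(String payload) {
            jmsTemplate.convertAndSend("some.queue", payload);
        }
    }

Note that REQUIRES_NEW only kicks in when the method is called through the Spring proxy, i.e. from another bean, not via a self-call, and of course the committed message can then no longer be rolled back together with the test.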

How to launch a long running Java EE job?

I need to fire off a long-running, batch-type job, and by long we are talking about a job that can take a couple of hours. The EJB that has the logic to run this long-running job will communicate with a NoSQL store, load data, etc.
So, I am using JMS MDBs to do this asynchronously. However, as each job can potentially take an hour or more (let's assume 4 hours max), I don't want the onMessage() method in the MDB to be waiting for that long. So I was thinking of firing off an asynchronous EJB within the onMessage() MDB method so that the MDB can be returned to the pool right after the call to the batch EJB runner.
Does it make sense to combine an asynchronous EJB method call with an MDB? Most samples suggest using one or the other to achieve the same thing.
If the EJB to be invoked from the MDB is not asynchronous, then the MDB will be waiting for a potentially long time.
Please advise.
I would simplify things: use @Schedule to invoke @Asynchronous and forget about JMS. One less thing that can go wrong.
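Roughly like this, assuming EJB 3.1 (bean names and the schedule are illustrative):

    // BatchTrigger.java
    import javax.ejb.EJB;
    import javax.ejb.Schedule;
    import javax.ejb.Singleton;

    // Sketch: a timer-driven trigger that hands the work off to an @Asynchronous method,
    // so the long-running job does not block the timer thread.
    @Singleton
    public class BatchTrigger {

        @EJB
        private BatchRunner runner;

        @Schedule(hour = "2", minute = "0", persistent = false) // every night at 02:00
        public void trigger() {
            runner.runBatch(); // returns immediately; the work runs on a separate thread
        }
    }

    // BatchRunner.java
    import javax.ejb.Asynchronous;
    import javax.ejb.Stateless;

    @Stateless
    public class BatchRunner {

        @Asynchronous
        public void runBatch() {
            // talk to the NoSQL store and load data, potentially for hours
        }
    }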
Whilst not yet ready for prime time, JSR 352: Batch Applications looks very promising for this sort of thing.
https://blogs.oracle.com/arungupta/entry/batch_applications_in_java_ee
It's a matter of taste, I guess.
Whether a thread from the JMS pool runs your job or an async EJB does it, the end result is the same - a thread from some pool will be blocked.
There is nothing wrong with spawning an async bean from an MDB, since you might want the jobs triggered through a messaging interface but not want to block the MDB thread pool. Also, consider that transactions often time out by default well before an hour, so if your MDB is transactional for some reason, you might want to fire off that async EJB inside onMessage, as sketched below.
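A sketch of that combination, reusing the BatchRunner idea from above (the destination and activation property names vary slightly per container and are just placeholders):

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.EJB;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // Sketch: the MDB goes back to the pool as soon as onMessage has handed the
    // job off to the asynchronous EJB, so the MDB transaction stays short.
    @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
            @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/BatchJobQueue")
    })
    public class JobMessageBean implements MessageListener {

        @EJB
        private BatchRunner runner;

        @Override
        public void onMessage(Message message) {
            runner.runBatch(); // returns immediately; the long-running work happens elsewhere
        }
    }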
I think Petter answers most of the question. If you are only using the MDB to get asynchronous behaviour, you could just fire off the @Asynchronous bean as soon as possible.
But if you are interested in any of the other features your JMS implementation might offer in terms of reliability, persistent queues, slow-consumer policies or priority on jobs, you should stick with MDBs.
One of the reasons behind introducing @Asynchronous in EJB 3.1 was to provide a more lightweight way to do asynchronous processing when the other JMS/MDB features are not needed.

Understanding JMS integration testing with Spring SingleConnectionFactory and CachingConnectionFactory

Please, some help understanding the following:
I am using CachingConnectionFactory in my app, and first used it during my JMS tests to test my JMS config, like guaranteed delivery, rollback/commit, etc.
I am using Spring's JmsTemplate for sending and DefaultMessageListenerContainer for delivery.
I noticed that this is hard/impossible when several test methods run sequentially.
Example: in test method A I throw exceptions in the message listener (consumer side) so that retries occur.
Then test B is run, and in that method I do a different test, but when I start it I still get retry messages from test A, which I clearly do not want.
I purge the queue through JMX between tests, but still receive these retries :(...
I searched and debugged... I don't exactly understand why these retries keep coming up, even when I am sure that the purge happens correctly. Maybe it was already cached somewhere in the session... I don't know. Anybody any idea?
I found out that I needed to use the SingleConnectionFactory during testing. With this connection factory the retries disappear, but I don't really understand why. Why?
I understand that it uses only one connection (from the Spring reference), and noticed that it somehow removes the consumer after every send action, but I don't really understand what happens with these retries :(... Any idea?
(It's hard to debug because of the multi-threading behaviour, and it's difficult to find good information about it on the web.)
Also, using CachingConnectionFactory with a session cache size of 1 didn't solve the retry issue.
Thanks
Your best bet would probably be to use an embedded broker and start/stop it between each test; make sure deleteAllMessagesOnStartup is set to true and the broker will purge the store for you, which ensures you've got a clean slate for each test. You might also benefit from having a look at ActiveMQ's unit tests; they're a good source of examples of how the broker can be used in automated tests.
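For an embedded ActiveMQ broker that setup looks roughly like this (the connector URI and the lifecycle wiring into your tests are up to you):

    import org.apache.activemq.broker.BrokerService;

    // Sketch: a throw-away embedded broker per test, wiping any persisted messages
    // so each test starts with an empty store.
    public class EmbeddedBrokerSupport {

        private BrokerService broker;

        public void startBroker() throws Exception {
            broker = new BrokerService();
            broker.setUseJmx(false);
            broker.setPersistent(false);                 // in-memory store is usually enough for tests
            broker.setDeleteAllMessagesOnStartup(true);  // purge anything left over from a previous run
            broker.addConnector("tcp://localhost:0");    // random free port; use whatever URI your tests expect
            broker.start();
        }

        public void stopBroker() throws Exception {
            if (broker != null) {
                broker.stop();
            }
        }
    }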
It's not an easy thing to fix: removing the messages between tests.
I tried many things, like those mentioned above: stopping/starting the broker and Spring's DefaultMessageListenerContainer, which I use to consume my messages.
It all seemed to work until I set the cache level in DefaultMessageListenerContainer to consumer level, so that the consumer is cached.
That is required so that the redeliveryPolicy works.
However, this messed up everything, and messages were seemingly cached by DefaultMessageListenerContainer in some way.
In the end, I solved it by simply consuming all messages after a test (just wait a second and consume everything, all OK), so that the next test can begin.
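Draining can be as simple as receiving with a short timeout until nothing comes back; a sketch (queue name and timeout are placeholders):

    import javax.jms.Message;
    import org.springframework.jms.core.JmsTemplate;

    // Sketch: called after each test to consume whatever is still sitting on the queue
    // (including pending redeliveries), so the next test starts from a clean slate.
    public class QueueDrainer {

        private final JmsTemplate jmsTemplate;

        public QueueDrainer(JmsTemplate jmsTemplate) {
            this.jmsTemplate = jmsTemplate;
            this.jmsTemplate.setReceiveTimeout(500); // don't block forever once the queue is empty
        }

        public void drain(String queueName) {
            Message message;
            do {
                message = jmsTemplate.receive(queueName);
            } while (message != null);
        }
    }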
