Understanding JMS integration testing with Spring SingleConnectionFactory and CachingConnectionFactory

Please help me understand the following:
I am using CachingConnectionFactory in my app and first also used it during my JMS tests, to test my JMS configuration: guaranteed delivery, rollback/commit, etc.
I am using Spring's JmsTemplate for sending and DefaultMessageListenerContainer for delivery.
I noticed that this is hard/impossible when several test methods run sequentially.
Example: in test method A I throw exceptions in the message listener (consumer side) so that retries occur.
Then test B runs and does a different test, but when I start it I still receive retry messages from test A, which I clearly do not want.
I purge the queue through JMX between tests, but still receive these retries :(...
I searched and debugged... I don't exactly understand why these retries keep coming up, even when I am sure that the purge occurs correctly. Maybe they were already cached somewhere in the session... I don't know. Anybody any idea?
I found out that I needed to use the SingleConnectionFactory during testing. With this connection factory the retries disappear, but I don't really understand why. Why?
I understand that it uses only one connection (from the Spring reference), and noticed that it somehow removes the consumer after every send action, but I don't really understand what happens to these retries :(... Any idea?
(It's hard to debug because of the multi-threading behaviour, and it's difficult to find good information about it on the web.)
Also, using CachingConnectionFactory with a session cache size of 1 didn't solve the retry issue.
Thanks

Best bet would probably be to use an embedded broker and start/stop it between each test. Make sure deleteAllMessagesOnStartup is set to true so the broker purges the store for you, which ensures you've got a clean slate for each test. You might also benefit from having a look at ActiveMQ's unit tests; they are a good source of examples of how the broker can be used in automated tests.
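A minimal sketch of that approach with an embedded BrokerService, assuming JUnit-style setup/teardown hooks (class name and connector details are illustrative):

import org.apache.activemq.broker.BrokerService;

public class EmbeddedBrokerTestSupport {

    private BrokerService broker;

    // Call from a @Before method: start a fresh, non-persistent broker for each test
    public void startBroker() throws Exception {
        broker = new BrokerService();
        broker.setPersistent(false);
        broker.setUseJmx(false);
        broker.setDeleteAllMessagesOnStartup(true); // purge the store on startup
        broker.addConnector("tcp://localhost:61616");
        broker.start();
        broker.waitUntilStarted();
    }

    // Call from an @After method so the next test starts with a clean slate
    public void stopBroker() throws Exception {
        broker.stop();
        broker.waitUntilStopped();
    }
}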

It's not an easy thing to fix: removing the messages between tests.
I tried many things, like the ones mentioned above: stopping/starting the broker and the Spring DefaultMessageListenerContainer that I use to consume my messages.
It all seemed to work until I set the cache level in DefaultMessageListenerContainer to CACHE_CONSUMER so that the consumer is cached.
That is required for the redeliveryPolicy to work, as sketched below.
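For context, that combination looks roughly like this (a sketch with illustrative values, not the exact configuration from the tests):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

import javax.jms.MessageListener;

public class RedeliveryTestWiring {

    public DefaultMessageListenerContainer listenerContainer(MessageListener listener) {
        // Redelivery policy on the ActiveMQ connection factory (illustrative values)
        ActiveMQConnectionFactory amqFactory = new ActiveMQConnectionFactory("vm://localhost");
        RedeliveryPolicy policy = new RedeliveryPolicy();
        policy.setMaximumRedeliveries(5);
        policy.setInitialRedeliveryDelay(1000);
        amqFactory.setRedeliveryPolicy(policy);

        // Cache the consumer so the redelivery policy is honoured across retries
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(amqFactory);
        container.setCacheLevel(DefaultMessageListenerContainer.CACHE_CONSUMER);
        container.setSessionTransacted(true);
        container.setDestinationName("test.queue");
        container.setMessageListener(listener);
        return container;
    }
}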
However, this messed everything up: messages appeared to be cached by the DefaultMessageListenerContainer in some way.
In the end, I solved it by simply consuming all remaining messages after each test (just wait a second and consume whatever is left), so that the next test can begin.
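The drain step after each test can be as simple as something like this (queue name and timeout are illustrative):

import org.springframework.jms.core.JmsTemplate;

public class QueueDrainer {

    // Consume whatever is left on the queue so the next test starts clean
    public static void drain(JmsTemplate jmsTemplate, String queueName) {
        jmsTemplate.setReceiveTimeout(1000); // wait at most a second per receive
        while (jmsTemplate.receive(queueName) != null) {
            // discard leftover (retry) messages from the previous test
        }
    }
}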

Related

How should you handle the retry of sending a JMS message from your application to ActiveMQ if the ActiveMQ server is down?

So using JMS and ActiveMQ, I can be sure that a message sent from my Spring Boot application using JmsTemplate will reach its destination application even if that destination application is down at the time I send the message to ActiveMQ; when the destination application starts up, it grabs the message from the queue. Great!
However.
What happens if my Spring Boot application tries to send a JMS message to a queue on the ActiveMQ server, but the ActiveMQ server is down at that point or the network is down and I get a connection refused exception?
What is the recommended way to make sure my application keeps trying to re-send the message to ActiveMQ until it succeeds? Is this something I have to develop in my application myself? Are there any nifty Spring tools or annotations which do this for me? Any advice on best practice or how I should handle this scenario?
You can try Spring Retry. It has lots of fine-grained controls:
http://www.baeldung.com/spring-retry
https://github.com/spring-projects/spring-retry
If it is critical that you don't lose this message, you will want to save it to some alternative persistent store (e.g. filesystem, local MQ server) along with whatever retry code you come up with. But for those occasional network glitches or a very temporary MQ shutdown/restart, Spring Retry alone should do the trick.
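A minimal sketch with Spring Retry (bean names, destination and retry values are illustrative; you also need @EnableRetry on a configuration class):

import org.springframework.jms.core.JmsTemplate;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Recover;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;

@Service
public class RetryingSender {

    private final JmsTemplate jmsTemplate;

    public RetryingSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // Retry the send a few times with a delay if the broker is unreachable
    @Retryable(maxAttempts = 5, backoff = @Backoff(delay = 2000))
    public void send(String payload) {
        jmsTemplate.convertAndSend("orders.queue", payload);
    }

    // Called once all retries are exhausted; e.g. persist the message elsewhere for later replay
    @Recover
    public void recover(Exception e, String payload) {
        // save to an alternative store / log for a later resend
    }
}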
A couple of approaches I can think of:
1. You can set up another ActiveMQ as a fallback. In your code you don't have to do anything, just change your broker URL from
activemq.broker.url=tcp://amq01.blah.blah.com:61616
to
activemq.broker.url=failover:(tcp://amq01.blah.blah.com:61616,tcp://amq02.blah.blah.com:61616)?randomize=false
The rest is automatically taken care of, i.e. when one of them is down, the messages are sent to the other.
2. Another approach is to send to an internal queue (like seda or direct) when ActiveMQ is down and read from there.
Adding failover to the URL is one appropriate way.
Another reasonable way is making sure ActiveMQ is always online, as ActiveMQ has a master-slave mode (http://activemq.apache.org/masterslave.html) for high availability.

WildFly 10: send JMS message to queue as part of XA transaction

I recently had to support a colleague in working out why some system tests were not passing on WildFly, system tests that pass consistently on WebLogic and GlassFish.
After analysing the log, it became clear that the reason is a JMS message sent by a background thread being committed to the queue too soon, when the expectation was that the message would be committed when the entry-point container-managed transaction of an MDB commits. So the message is going out before the MDB that sends it is done running.
In WebLogic, to achieve the expected behaviour, you need to make sure that when you take the connection factory given by the container, which is XA configured, you call connection.createSession with
transacted = true and
acknowledgeMode = Session.SESSION_TRANSACTED.
In a process similar to the one depicted in this URL:
http://www.mastertheboss.com/jboss-server/jboss-jms/sending-jms-messages-over-xa-with-wildfly-jboss-as
except that in the snippet there AUTO_ACKNOWLEDGE is used and the first parameter is set to false.
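In code, the session creation described above amounts to:

// WebLogic-style setup described above: transacted session with SESSION_TRANSACTED acknowledge mode
javax.jms.Session session = connection.createSession(true, javax.jms.Session.SESSION_TRANSACTED);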
In WildFly, when our WebLogic and GlassFish configuration is used, nothing is committed and the system behaves as if the JMS message sent were rolled back.
If a configuration like the one in the example above is used instead, the JMS message is sent immediately and the consumer MDB launches immediately, triggered before the producer transaction actually ends, causing the system test to fail.
According to the official documentation, by using a pooled connection factory with the transaction=XA attribute, the container should bind the commit of the JMS work to the lifecycle of the parent transaction.
See the official documentation below, in particular with respect to the java:/JmsXA connection factory:
https://docs.jboss.org/author/display/WFLY10/Messaging+configuration
My colleague was initially using a non-pooled connection factory, but the injected reference has since been fixed. I have tried all possible combinations of parameters in the createSession call, but my outcome is still:
either sent too soon or never sent.
To conclude, all the other resources are XA; namely, the Oracle DB is using the XA driver.
Can anyone confirm whether in WildFly sending the JMS message only when the parent transaction commits works, and if so, how the session is being configured?
I will check if perhaps my colleague has made a mistake in the configuration of the connection factory used by the MDBs themselves to consume messages out of the queue. But if that one is also XA... then it is a big problem.
So the issue is fixed.
The commit of the JMS message to the queue at the end of the transaction works perfectly.
The issue was twofold:
(a) The first spot of code I was looking at to address the issue was not the right one. Someone had decided to write his own send-telegram-to-queue API elsewhere and was not using the central API for sending telegrams, so any modification I made to the injected connection factory was not taking effect. The stale connection factories were still being used.
(b) Once the correct API was spotted, it was easy to make the mechanism work by using the WildFly XA pooled connection factory mentioned in the post above.
The one thing that had to be tweaked was the connection.createSession API call.
The API in Java EE 7 has been enlarged and is now better documented than in Java EE 6.
To send a JMS message in a container as part of an XA transaction, one should call:
connection.createSession() without any parameters.
This can easily be seen in the Connection javadoc:
https://docs.oracle.com/javaee/7/api/javax/jms/Connection.html
QUOTE 1:
This method has been superseded by the method createSession(int sessionMode) which specifies the same information using a single argument, and by the method createSession() which is for use in a Java EE JTA transaction. Applications should consider using those methods instead of this one.
QUOTE 2:
In a Java EE web or EJB container, when there is an active JTA transaction in progress:
Both arguments transacted and acknowledgeMode are ignored. The session will participate in the JTA transaction and will be committed or rolled back when that transaction is committed or rolled back, not by calling the session's commit or rollback methods. Since both arguments are ignored, developers are recommended to use createSession(), which has no arguments, instead of this method.
This means the code snippet in
http://www.mastertheboss.com/jboss-server/jboss-jms/sending-jms-messages-over-xa-with-wildfly-jboss-as
is not appropriate. What one should be doing is creating the session without any parameters and letting the container handle the rest.
Which it does just fine.
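Put together, the sending code inside the container ends up looking roughly like this (the bean and queue binding are illustrative; java:/JmsXA is the pooled XA connection factory mentioned above):

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

@Stateless
public class XaQueueSender {

    // WildFly's XA pooled connection factory
    @Resource(mappedName = "java:/JmsXA")
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "java:/jms/queue/MyQueue") // illustrative queue binding
    private Queue queue;

    public void send(String text) throws Exception {
        try (Connection connection = connectionFactory.createConnection()) {
            // No arguments: inside a JTA transaction the session enlists in it,
            // so the message only becomes visible when the surrounding transaction commits
            Session session = connection.createSession();
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(text));
        }
    }
}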

Spring Integration: does QueueChannel guarantee no data loss?

I want my system to guarantee no data loss, even if the system is shutting down.
What this means is that the system must not miss any request message. So, I will change the way I accept HTTP requests. Right now I am using the HTTP gateway / web service gateway in Spring Integration, but this can lose a message if the system dies while handling it. So I want to add a queue between the HTTP client and the HTTP receiver, i.e. use a queue channel. Here are the questions:
① Do I have to install another queue program such as ActiveMQ or RabbitMQ and connect it to the queue channel in Spring Integration?
② Which one is the best combination with Spring Integration? I heard that RabbitMQ is the best one.
Please give me an elaborate explanation. Thanks.
First of all, your description isn't clear...
If you don't want to lose messages from the QueueChannel, use a persistent MessageStore, like the JdbcChannelMessageStore:
http://docs.spring.io/spring-integration/docs/latest-ga/reference/html/system-management-chapter.html#message-store
On the other hand, there are channel wrappers for AMQP as well as for JMS:
http://docs.spring.io/spring-integration/docs/latest-ga/reference/html/amqp.html#d4e5846
http://docs.spring.io/spring-integration/docs/latest-ga/reference/html/jms.html#jms-channel
These provide the same persistence, durability and fault-tolerance options for your use case.
Re. ActiveMQ vs. RabbitMQ: I can say from my own experience that the latter is better in terms of configuration and usage from Spring Integration (Spring AMQP is under the hood). And its performance is really better.
All other info you can find on the Internet.
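A minimal sketch of the JdbcChannelMessageStore option with Java configuration, assuming an existing DataSource and an H2 database (bean and group names are illustrative):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.jdbc.store.JdbcChannelMessageStore;
import org.springframework.integration.jdbc.store.channel.H2ChannelMessageStoreQueryProvider;
import org.springframework.integration.store.MessageGroupQueue;

@Configuration
public class PersistentQueueChannelConfig {

    @Bean
    public JdbcChannelMessageStore messageStore(DataSource dataSource) {
        JdbcChannelMessageStore store = new JdbcChannelMessageStore(dataSource);
        // Query provider must match your database (H2 here as an example)
        store.setChannelMessageStoreQueryProvider(new H2ChannelMessageStoreQueryProvider());
        return store;
    }

    @Bean
    public QueueChannel requestChannel(JdbcChannelMessageStore messageStore) {
        // Backs the channel's queue with the JDBC message store, so messages survive a shutdown
        return new QueueChannel(new MessageGroupQueue(messageStore, "requestChannel"));
    }
}

The store also needs the channel message table (INT_CHANNEL_MESSAGE by default), created from the schema scripts shipped with spring-integration-jdbc.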

Recover connection when using Spring JmsTemplate and ActiveMQ

//OVERVIEW//
I have a Java Swing client that is pushing messages to a broker. For the producer, I am using Spring's SingleConnectionFactory:
org.springframework.jms.connection.SingleConnectionFactory
that is wrapped around an ActiveMQConnectionFactory:
org.apache.activemq.ActiveMQConnectionFactory
I am using the Spring JmsTemplate:
org.springframework.jms.core.JmsTemplate
to send messages from the producer to the broker.
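For reference, the wiring described above is roughly (broker URL and destination are illustrative):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.connection.SingleConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class ProducerWiring {

    public JmsTemplate createJmsTemplate() {
        // ActiveMQ factory wrapped in Spring's SingleConnectionFactory, shared by the JmsTemplate
        ActiveMQConnectionFactory amqFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        SingleConnectionFactory connectionFactory = new SingleConnectionFactory(amqFactory);

        JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
        jmsTemplate.setDefaultDestinationName("client.outbound");
        return jmsTemplate;
    }
}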
// PROBLEM //
Sometimes the broker might go down, or the network might fail. When this happens, the only way I have been able to re-establish the connection to the broker is to restart the Swing application (producer) to re-initialise the components mentioned above.
Does anyone know how this might be done at run-time? Attempting to re-initialise beans at runtime sounds like a hack, and I was wondering if there is a more elegant configuration option.
Thanks

How to launch a long running Java EE job?

I need to fire off a long-running batch-type job, and by long we are talking about a job that can take a couple of hours. The EJB that has the logic to run this long-running job will communicate with a NoSQL store, load data, etc.
So, I am using JMS MDBs to do this asynchronously. However, as each job can potentially take up to an hour or more (let's assume 4 hours max), I don't want the onMessage() method in the MDB to be waiting for that long. So I was thinking of firing off an asynchronous EJB within the onMessage() MDB method so that the MDB can be returned to the pool right after the call to the batch EJB runner.
Does it make sense to combine an asynchronous EJB method call with an MDB? Most samples suggest using one or the other to achieve the same thing.
If the EJB to be invoked from the MDB is not asynchronous, then the MDB will be waiting for a potentially long time.
Please advise.
I would simplify things: use @Schedule to invoke an @Asynchronous method and forget about JMS. One less thing that can go wrong.
Whilst not yet ready for prime time, JSR 352: Batch Applications looks very promising for this sort of stuff.
https://blogs.oracle.com/arungupta/entry/batch_applications_in_java_ee
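A bare-bones sketch of the @Schedule plus @Asynchronous suggestion above (bean names and timing are illustrative):

import javax.ejb.Asynchronous;
import javax.ejb.EJB;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Stateless;

@Stateless
public class BatchRunner {

    // Runs on a container-managed thread outside the caller; may take hours
    @Asynchronous
    public void runLongJob() {
        // talk to the NoSQL store, load data, etc.
    }
}

// In its own file:
@Singleton
public class BatchScheduler {

    @EJB
    private BatchRunner batchRunner;

    // Fire the job every night at 02:00, no JMS involved
    @Schedule(hour = "2", minute = "0", persistent = false)
    public void trigger() {
        batchRunner.runLongJob();
    }
}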
It's a matter of taste, I guess.
Whether you have a thread from the JMS pool running your job or an async EJB doing it, the end result is the same: a thread will be blocked from some pool.
There is nothing wrong with spawning an async bean from an MDB, since you might want to have the jobs triggered by a messaging interface but not block the MDB thread pool. Also, consider that a transaction often times out by default well before an hour, so if your MDB is transactional for some reason, you might want to fire off that async EJB inside onMessage.
I think Petter answers most of the question. If you are only using the MDB to get asynchronous behaviour, you could just fire the @Asynchronous method right away.
But if you are interested in any of the other features your JMS implementation offers in terms of reliability, persistent queues, slow-consumer policies or priority on jobs, you should stick to MDBs.
One of the reasons behind introducing @Asynchronous in EJB 3.1 is to provide a more lightweight way to do asynchronous processing when the other JMS/MDB features are not needed.
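For completeness, the MDB-plus-@Asynchronous combination discussed in these answers could look roughly like this (destination and bean names are illustrative):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.Asynchronous;
import javax.ejb.EJB;
import javax.ejb.MessageDriven;
import javax.ejb.Stateless;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

@Stateless
public class LongJobRunner {

    // Executes outside the MDB's thread and transaction; may run for hours
    @Asynchronous
    public void run(String jobId) {
        // communicate with the NoSQL store, load data, etc.
    }
}

// In its own file:
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/queue/jobs"),
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")})
public class JobRequestMdb implements MessageListener {

    @EJB
    private LongJobRunner runner;

    @Override
    public void onMessage(Message message) {
        try {
            // Hand the work off and return immediately so the MDB goes back to the pool
            runner.run(message.getBody(String.class));
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}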
