I have an application that consumes messages from a JMS topic. As part of the normal application flow it needs to periodically cease consumption of messages. While the application is in this state new messages are stored in the topic (note that my application is still running). Later the application resumes message consumption, also receiving those messages that were placed on the topic while the application wasn't listening.
This functionality is currently achieved by creating and disposing of connections from a ConnectionFactory. However, I now wish to migrate the application to Spring JMS. Although Spring rather neatly abstracts away much of the JMS boilerplate, I no longer appear to have fine-grained control over the underlying connection and hence cannot halt message consumption on demand.
Before I try to wade through Spring JMS internals, can anyone suggest a neat way of doing this?
Can you just avoid returning from onMessage()? How long do you want to stop consumption? Is your problem similar to https://stackoverflow.com/a/628337/20734
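Not from the original thread, but a hedged sketch of one way to keep this control with Spring JMS: DefaultMessageListenerContainer implements Lifecycle, so consumption can be stopped and resumed without creating and disposing of connections yourself. The bean names, topic name, client id, and injected listener below are illustrative assumptions, not a confirmed answer.

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Configuration
public class TopicListenerConfig {

    @Bean
    public DefaultMessageListenerContainer topicListenerContainer(
            ConnectionFactory connectionFactory, MessageListener listener) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("my.topic");
        container.setPubSubDomain(true);
        // A durable subscription keeps messages published while consumption is stopped.
        container.setSubscriptionDurable(true);
        container.setClientId("my-app");
        container.setDurableSubscriptionName("my-app-subscription");
        container.setMessageListener(listener);
        return container;
    }
}

Elsewhere you would inject the container and call container.stop() to cease consumption and container.start() to resume it; with the durable subscription, messages placed on the topic in the meantime are delivered on resume.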
Spring + Apache Kafka noob here. I'm wondering if it's advisable to run a single Spring Boot application that handles both producing and consuming messages.
A lot of the applications I've seen using Kafka lately have one separate application that sends/emits the messages to a Kafka topic, and another one that consumes/processes the messages from that topic. For larger applications, I can see a case for separate producer and consumer applications, but what about smaller ones?
For example: I have a simple app that processes HTTP requests and sends them on to a third-party service, but to ensure retryability, I put each request on a Kafka topic and consume it with a service using the @Retryable annotation?
And what other considerations might come into play since it would be on the Spring framework?
Note: as your question states, what I'll say is more advice based on my beliefs and experience than some absolute truth written in stone.
Your use case seems more like a proxy than an actual application with business logic. You should make sure that making this an asynchronous service makes sense - maybe it's good enough to simply hold the connection until you get a response from the 3p, and let your client handle retries if you get an error - of course, you can also retry until some timeout.
This would avoid common asynchronous issues such as making your client need to poll or have a webhook in order to get a result, or making sure a record still makes sense to be processed after a lot of time has elapsed after an outage or a high consumer lag.
If your client doesn't care about the result as long as it gets done, and you don't expect high-throughput on either side, a single Spring Boot application should be enough for handling both producer and consumer sides - while also keeping it simple.
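For the simple case, a minimal sketch of the single-application option might look like the following (assuming Spring Boot with spring-kafka on the classpath; the topic name, endpoint, and group id are made up for illustration):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ProxyController {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public ProxyController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Producer side: accept the HTTP request and hand it off to Kafka.
    @PostMapping("/requests")
    public void accept(@RequestBody String payload) {
        kafkaTemplate.send("outbound-requests", payload);
    }

    // Consumer side, in the same application: pick the record up and
    // call the third-party service (call omitted here).
    @KafkaListener(topics = "outbound-requests", groupId = "proxy")
    public void forward(String payload) {
        // call the third-party service with 'payload'
    }
}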
If you do expect high throughput, I'd look into building a WebFlux based application with the reactor-kafka library - high throughput proxies are an excellent use case for reactive applications.
Another option would be having a simple serverless function that handles the http requests and produces the records, and a standard Spring Boot application to consume them.
TBH, I don't see a use case where having two full-fledged Java applications to handle a proxy duty would pay off, unless maybe your infrastructure is sound enough that managing two applications instead of one makes no real difference and the extra resource usage is not an issue.
Actually, if you expect really high traffic and a serverless function wouldn't work, or maybe you want to stick to Java-based solutions, then you could have a simple WebFlux-based application to handle the HTTP requests and send the messages, and a standard Spring Boot or another WebFlux application to handle consumption. This way you'd be able to scale up the former to accommodate the high traffic, and independently scale the latter according to your performance requirements.
As for the retry part, if you stick to non-reactive Spring Kafka applications, you might want to look into the non-blocking retries feature of Spring Kafka. This will let your consumer application process other records while waiting to retry a failed one - the @Retryable approach is deprecated in favor of DefaultErrorHandler, and both block consumption while waiting.
Note that with that you lose ordering guarantees, so use it only if the order the requests are processed is not important.
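For reference, a hedged sketch of what the non-blocking retries feature can look like (it requires Spring Kafka 2.7+; the topic name, retry settings, and dead-letter handling below are illustrative assumptions):

import org.springframework.kafka.annotation.DltHandler;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.RetryableTopic;
import org.springframework.retry.annotation.Backoff;
import org.springframework.stereotype.Component;

@Component
public class ForwardingListener {

    // Failed records are re-published to internal retry topics, so consumption of
    // the main topic keeps moving while a failed record waits for its next attempt.
    @RetryableTopic(attempts = "4", backoff = @Backoff(delay = 2000, multiplier = 2.0))
    @KafkaListener(topics = "outbound-requests", groupId = "proxy")
    public void forward(String payload) {
        // call the third-party service; throwing here triggers a retry
    }

    // After all attempts are exhausted the record lands in a dead-letter topic.
    @DltHandler
    public void handleDlt(String payload) {
        // log, alert, or park the request for manual follow-up
    }
}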
Sorry this might sound naive to JMS gurus, but still.
I have a requirement where a Spring-based application is not able to connect synchronously to an SAP back-end (via its web-service interface) because the response from SAP is far too slow. We are thinking of a solution where the updates from the GUI would be saved by the Spring middleware in a local database, while simultaneously sending a message to a JMS queue. We want a batch job to run every few hours (or maybe nightly) that consumes the messages from the JMS queue and, based on the message contents, queries the local database and sends the results to the SAP web service.
Is this approach correct? Would I need a batch to trigger the JMS message consumption (because I don't want to consume the message immediately but in a deferred manner and at a pre-decided time)? Is there any way in Spring to implement this gracefully (like Camel)? Appreciate your help.
Spring Batch has a JmsItemReader that can be used in a batch program; an empty queue signals the end of the batch. Spring Cloud Task is built on top of batch and can be used for cloud deployments.
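A rough sketch of how the reader side could be wired, assuming the job is kicked off by a nightly scheduler; the queue name and timeout are placeholders, and the step/writer configuration is omitted:

import javax.jms.ConnectionFactory;

import org.springframework.batch.item.jms.JmsItemReader;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.core.JmsTemplate;

@Configuration
public class DeferredSapJobConfig {

    @Bean
    public JmsTemplate batchJmsTemplate(ConnectionFactory connectionFactory) {
        JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
        jmsTemplate.setDefaultDestinationName("sap.updates");
        // A finite receive timeout lets the reader return null on an empty queue,
        // which is what signals the end of the batch step.
        jmsTemplate.setReceiveTimeout(1000);
        return jmsTemplate;
    }

    @Bean
    public JmsItemReader<Object> sapUpdateReader(JmsTemplate batchJmsTemplate) {
        JmsItemReader<Object> reader = new JmsItemReader<>();
        reader.setJmsTemplate(batchJmsTemplate);
        return reader;
    }
}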
So using JMS and ActiveMQ, I can be sure that a message sent from my Spring Boot application using JmsTemplate will reach its destination application even if that destination application is down at the time I send the message to ActiveMQ. When the destination application starts up, it grabs the message from the queue. Great!
However.
What happens if my Spring Boot application tries to send a JMS message to a queue on the ActiveMQ server, but the ActiveMQ server is down at that point or the network is down and I get a connection refused exception?
What is the recommended way to make sure my application keeps trying to resend the message to ActiveMQ until it succeeds? Is this something I have to develop into my application myself? Are there any nifty Spring tools or annotations which do this for me? Any advice on best practice or how I should be handling this scenario?
You can try Spring-Retry. It has lots of fine-grained controls for this:
http://www.baeldung.com/spring-retry
https://github.com/spring-projects/spring-retry
If it is critical that you don't lose this message, you will want to save it to some alternative persistent store (e.g. filesystem, local mq server) along with whatever retry code you come up with. But for those occasional network glitches or a very temporary mq shutdown/restart, Spring-Retry alone should do the trick.
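To make that concrete, a hedged sketch of wrapping the send in Spring Retry (it assumes @EnableRetry on a configuration class; the retry counts, delays, and recovery behaviour are illustrative only):

import org.springframework.jms.JmsException;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Recover;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;

@Service
public class ResilientSender {

    private final JmsTemplate jmsTemplate;

    public ResilientSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // JmsTemplate wraps connection problems in a JmsException, so retry on that.
    @Retryable(value = JmsException.class, maxAttempts = 5,
               backoff = @Backoff(delay = 2000, multiplier = 2.0))
    public void send(String queue, String payload) {
        jmsTemplate.convertAndSend(queue, payload);
    }

    // Called once the retries are exhausted; persist the message to an
    // alternative store here if losing it is not acceptable.
    @Recover
    public void recover(JmsException e, String queue, String payload) {
        // e.g. write to the filesystem or a local store for a later attempt
    }
}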
A couple of approaches I can think of:
1. You can set up another ActiveMQ as a fallback. In your code you don't have to do anything, just change your broker URL from
activemq.broker.url=tcp://amq01.blah.blah.com:61616
to
activemq.broker.url=failover:(tcp://amq01.blah.blah.com:61616,tcp://amq02.blah.blah.com:61616)?randomize=false
The rest is automatically taken care of, i.e. when one of them is down, the messages are sent to the other.
Another approach is to send to an internal queue (like seda or direct) when ActiveMQ is down and read from there.
Adding failover to the URL is one appropriate way.
Another reasonable way is making sure ActiveMQ is always online, as ActiveMQ has a master-slave mode (http://activemq.apache.org/masterslave.html) for high availability.
I have a Spring Boot application that is using Spring Integration. The application pulls messages from a RabbitMQ queue, transforms the data from each message, aggregates 50 transformed messages, puts them into an array, and sends them to a RESTful endpoint as JSON. I am seeing memory slowly creep up until the application crashes.
I ran a profiler on our application and saw instances of VariableLinkedBlockingQueue building up over time. The application seems to clean them up shortly after it starts, but after some time these instances just keep accumulating. I forced a full garbage collection through the profiler and it cleaned up some of them, but they continue to build up. The instance count only goes up while messages are being sent to the queue. The prefetch is set to 50.
Why am I seeing these instances build up and how do I fix this?
com.rabbitmq.client.impl.WorkPool uses that VariableLinkedBlockingQueue, and its logic is based on registering/unregistering each Channel as a client.
If you close Channels properly, they are unregistered from that pool and therefore their VariableLinkedBlockingQueue is garbage collected.
Without your application in front of us to look at and play with, it is very difficult to determine where the leak is.
Spring Integration's AMQP support has been around for a while, and if it had a problem such as not closing channels, we would know about it already.
Right now it looks like you are using a ConnectionFactory directly, outside of the framework, and are not closing channels/connections after use.
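As an illustration of the closing point (not a diagnosis of your code): each Channel obtained straight from the RabbitMQ client registers itself, and its VariableLinkedBlockingQueue, with the WorkPool until it is closed. If you do work with the raw client, closing channels deterministically, for example with try-with-resources, lets those entries be garbage collected; with Spring's CachingConnectionFactory/RabbitTemplate this is handled for you. The host and queue name below are placeholders.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ChannelExample {

    public void publish(byte[] body) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");

        // Both Connection and Channel are AutoCloseable; closing the channel
        // unregisters it from the WorkPool so its queue can be collected.
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            channel.basicPublish("", "my-queue", null, body);
        }
    }
}

(In real code you would reuse the connection rather than open one per publish; the point here is only that the channel gets closed.)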
I would like to know whether using JMS in the scenario below is feasible or not.
I am adding a feature that calls an API service which will dispatch emails to the customer.
So I thought of introducing JMS in my application: I would put the events or messages on a queue and write a listener in the same application which would process the message and make the REST API call that dispatches the message to the customers.
My question is: is it good to have JMS in between the REST call and our application?
Or should I directly call the REST API to dispatch the messages to the customer?
I think that depends on the availability and overhead of your REST service.
If you know there will be times that your service will be down, but don't want to impact the process using the API, then JMS queues make sense.
Or if you feel the REST service is causing a bottleneck on the API service side and want to queue up the messages somewhere they can survive an outage of your own, JMS with a provider that supports persistent messages makes sense in this case.
Using JMS would also open the door for completely decoupling the two. Whatever application hosts the REST service could just as easily be converted to pull messages from the JMS queue, without a need to make a REST call, if that seemed more efficient.
Just a few examples of how you could justify using JMS in this scenario.
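For what it's worth, a hedged sketch of the queue-in-the-middle option with Spring (the queue name, email endpoint, and use of RestTemplate are assumptions for illustration; it also assumes a JMS broker is configured):

import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class EmailDispatch {

    private final JmsTemplate jmsTemplate;
    private final RestTemplate restTemplate = new RestTemplate();

    public EmailDispatch(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // Application side: enqueue the event instead of calling the REST service inline.
    public void queueEmailEvent(String eventJson) {
        jmsTemplate.convertAndSend("email.events", eventJson);
    }

    // Listener side, in the same application: pick the event up and call the
    // dispatch API, decoupled from the original request flow.
    @JmsListener(destination = "email.events")
    public void dispatch(String eventJson) {
        restTemplate.postForObject("https://example.com/email/dispatch", eventJson, String.class);
    }
}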