Currently, we have four JMS listener containers that are started during application startup. They all connect through Apache ZooKeeper and are started manually. This becomes problematic when a connection to ZooKeeper cannot be established: the (Wicket) application cannot start, even though the JMS listeners do not need to be active for the application to be usable. They simply need to listen for messages in the background and save them; a cron job then processes the saved messages in batches.
Goals:
1. Allow the application to start without being blocked by message containers that cannot connect.
2. Start the message listeners once the application has started.
3. If the connection for one or more of the message listeners goes down, automatically attempt to reconnect.
4. On application shutdown (such as Tomcat being shut down), stop the message listeners and the cron job that processes the saved messages.
5. Make all of this testable (as in, be able to write integration tests for this setup).
Current Setup:
Spring Boot 1.5.6
Apache ZooKeeper 3.4.6
Apache ActiveMQ 5.7
Wicket 7.7.0
Work done so far:
Defined a class that implements ApplicationListener<ApplicationReadyEvent>.
Set the autoStartup property of each DefaultMessageListenerContainer to false, and started each container from onApplicationEvent in a separate thread.
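A minimal sketch of that arrangement (the executor choice and bean wiring are assumptions for illustration, not the exact production code):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.stereotype.Component;

@Component
public class JmsContainerStarter implements ApplicationListener<ApplicationReadyEvent> {

    private final List<DefaultMessageListenerContainer> containers;
    private final ExecutorService executor = Executors.newCachedThreadPool();

    public JmsContainerStarter(List<DefaultMessageListenerContainer> containers) {
        // all containers are defined with setAutoStartup(false), so the
        // context can finish refreshing without a broker connection
        this.containers = containers;
    }

    @Override
    public void onApplicationEvent(ApplicationReadyEvent event) {
        // start each container in its own thread so one blocking
        // connect attempt does not delay the others
        for (DefaultMessageListenerContainer container : containers) {
            executor.submit(container::start);
        }
    }
}
```

Since the containers are regular lifecycle beans, closing the application context stops them again, which covers the shutdown goal.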
Questions:
Is it necessary to start each message container in its own thread? This seems like overkill, but the way the "start" process works is that the DefaultMessageListenerContainer for a listener is built and then started. There is a UI component that a user can use to start/stop the message listeners if need be, and if the containers are started sequentially in one thread, the latter three could still be null while the first one is waiting to connect on startup.
How do I accomplish goals 4 and 5?
Of course, any comments on whether I am on the right track would be helpful.
If you do not start them in a custom thread, the whole application cannot fully start. It is not just Wicket: the Servlet container won't change the application state from STARTING to STARTED because of the blocking request to ZooKeeper.
Another option is to use a non-blocking request to ZooKeeper, but this is done by the JMS client (ActiveMQ), so you need to check in their docs (both ActiveMQ's and ZooKeeper's) whether that is supported. I haven't used them in several years, so I cannot help you more than that.
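On the reconnect goal, two knobs are worth knowing: DefaultMessageListenerContainer keeps retrying a lost connection on its own (the recoveryInterval setting), and ActiveMQ's failover: transport reconnects at the transport level. A hedged sketch, with the broker URL, queue name, and listener bean as placeholders:

```java
import javax.jms.MessageListener;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Bean
public DefaultMessageListenerContainer orderContainer(MessageListener orderListener) {
    // failover: makes the ActiveMQ client retry the connection itself;
    // broker URL, queue name, and listener bean are placeholders
    ActiveMQConnectionFactory cf =
            new ActiveMQConnectionFactory("failover:(tcp://broker:61616)");

    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(cf);
    container.setDestinationName("orders.queue");
    container.setAutoStartup(false);       // started after ApplicationReadyEvent instead
    container.setRecoveryInterval(5000L);  // retry a lost connection every 5 seconds
    container.setMessageListener(orderListener);
    return container;
}
```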
Related
I'm currently trying to refactor the processing of JMS messages to work in a distributed/cloud environment. To allow better retry and error handling, the messages are first stored to the database as a JPA entity and then read by a Spring Integration JPA inbound adapter. This works fine as long as just a single instance of my service is running. However, when multiple instances are running, they try to process the same message, even after I introduced a processing state on the persisted messages.
I have already tried saving the JMS messages in a JDBC message store, but then I would have to define a group identifier by which an instance could select a message. That is not really possible here, since the number of instances is dynamic and I cannot assign a group id to each instance. Another possibility could be some kind of distributed lock with a LockRegistry, but I couldn't make that work.
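For reference, the polling flow described above looks roughly like this (a minimal sketch; the entity name, state column, poller settings, and handler bean are assumptions):

```java
import javax.persistence.EntityManagerFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.jpa.dsl.Jpa;

@Bean
public IntegrationFlow storedMessageFlow(EntityManagerFactory emf) {
    return IntegrationFlows
            .from(Jpa.inboundAdapter(emf)
                            // StoredMessage and its state field are assumed names
                            .jpaQuery("select m from StoredMessage m where m.state = 'NEW'")
                            .maxResults(10),
                    // poll inside a transaction so the state update and the
                    // read happen atomically (on a single instance, at least)
                    e -> e.poller(Pollers.fixedDelay(5000).transactional()))
            .handle("messageProcessor", "process") // assumed handler bean/method
            .get();
}
```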
Do you have any hints/advice on how best to implement the following requirements with Spring Integration:
The JMS message should be persisted.
Any instance can pick up the message and process it.
If processing fails, it is retried x times (possibly by another instance).
If an instance crashes or gets killed during processing, the message must not be lost.
Is there perhaps some Spring Cloud component that could help?
I'm happy about every hint on which direction I should go.
I want to use Spring Batch remote partitioning to handle large workloads in the cloud, and spin up/shut down VMs on demand.
However, when configuring the worker steps, I'm using the StepExecutionRequestHandler to handle step requests from a JMS queue. Right now the application just hangs. How can I shut down the application after the queue is depleted?
How can I shut down the application after the queue is depleted?
In a remote partitioning setup, workers are listeners on a queue on which StepExecutionRequests arrive. The question is how to know, from the listener's point of view, that the queue is depleted. This is a tricky design problem. There are some known solutions, like the "End-Of-Stream" message or the "Poison" record, but those are tricky too, since you have to make sure all listeners get one such message.
If you are using Spring Cloud Task to launch your workers, you can use the DeployerPartitionHandler, which provides an elegant way to dynamically create workers on demand, up to a configurable maximum number. You can find more details about it here: https://docs.spring.io/spring-cloud-task/docs/2.0.0.RELEASE/reference/htmlsingle/#batch-partitioning and an example in this github repo: https://github.com/mminella/scaling-demos/blob/master/partitioned-demo/src/main/java/io/spring/batch/partitiondemo/configuration/BatchConfiguration.java#L75
The icing on the cake is that this is based on Spring Cloud Deployer, which means you can use it on any cloud provider that implements the SCD SPI. Here is how to do it for:
Kubernetes: https://docs.spring.io/spring-cloud-task/docs/2.0.0.RELEASE/reference/htmlsingle/#_notes_on_developing_a_batch_partitioned_application_for_the_kubernetes_platform
Cloud Foundry: https://docs.spring.io/spring-cloud-task/docs/2.0.0.RELEASE/reference/htmlsingle/#_notes_on_developing_a_batch_partitioned_application_for_the_cloud_foundry_platform
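Because the workers are launched as short-lived tasks per job execution and exit once their partitions complete, the "shut down after the queue is depleted" problem goes away. A rough sketch of the handler wiring, loosely following the linked example (artifact coordinates, step name, and worker count are placeholders, and the maven:// scheme assumes Spring Cloud Deployer's Maven resource loader is configured):

```java
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.cloud.deployer.spi.task.TaskLauncher;
import org.springframework.cloud.task.batch.partition.DeployerPartitionHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.core.io.Resource;
import org.springframework.core.io.ResourceLoader;

@Bean
public DeployerPartitionHandler partitionHandler(TaskLauncher taskLauncher,
        JobExplorer jobExplorer, ResourceLoader resourceLoader) {
    // the worker application artifact; the coordinates are a placeholder
    Resource workerApp = resourceLoader.getResource("maven://com.example:worker-app:1.0.0");

    // each StepExecution is handed to a short-lived worker task that runs
    // "workerStep" and exits when done, so no listener is left hanging
    DeployerPartitionHandler handler =
            new DeployerPartitionHandler(taskLauncher, jobExplorer, workerApp, "workerStep");
    handler.setMaxWorkers(3); // never more than 3 workers at a time
    handler.setApplicationName("worker-app");
    return handler;
}
```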
I am using Spring Integration in my project.
We have a requirement to stop the standalone Spring service if the database goes down.
In the message listener, when I persist the data into the database, I check whether I get a CannotGetJdbcConnectionException, and if so I stop the Spring service using the applicationContext.close() method.
The problem is when I have received a message from the queue and the database goes down.
If I then try to close the Spring service, all resources shut down except the DefaultMessageListenerContainer, which holds on to that message.
If I terminate the process manually, the message goes back onto the inbound queue, which is correct.
Is there any way I can stop the Spring service forcefully and put the message back onto the inbound queue?
I hope I am clear with my point here.
Thanks
Sachin
You should configure the DMLC with setSessionTransacted(true) (acknowledge="transacted" when using the namespace to define the endpoints).
Then any in-flight messages will be rolled back onto the queue.
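A minimal sketch of that configuration (the connection factory, listener, and queue name are assumed to be defined elsewhere):

```java
import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;

import org.springframework.context.annotation.Bean;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Bean
public DefaultMessageListenerContainer inboundContainer(ConnectionFactory cf,
                                                        MessageListener listener) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(cf);
    container.setDestinationName("inbound.queue"); // assumed queue name
    // transacted sessions: if the listener throws (or the context is closed
    // mid-processing), the in-flight message is rolled back onto the queue
    container.setSessionTransacted(true);
    container.setMessageListener(listener);
    return container;
}
```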
My application puts messages onto a JMS queue. A bean that implements MDB and MessageListener pops messages from this queue. All of this happens on a single JVM.
What I want to do is run the MDB, and the other instances it would get from the pool for concurrent processing, on a different JVM. How can I do that? The application server I am using is JBoss 4.0.5.GA.
Thanks in advance.
If I understand correctly, you want to split your application into a "producer" part (which stays on the same server) and a "consumer" part (the MDBs, moved to another server), and still have the two communicate.
In that case you need to configure the ConnectionFactory on the "consumer" server to plug into the "producer" server's MQ. Have you read this part of the JBoss 4.x docs?
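For MDBs this is normally done declaratively (e.g. a JMSProviderLoader mbean pointing at the remote server), but the mechanics boil down to looking up the ConnectionFactory from the producer server's JNDI instead of the local one. A sketch of that lookup, with the host name as a placeholder:

```java
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteFactoryLookup {
    public static void main(String[] args) throws Exception {
        // JNDI settings pointing at the "producer" JBoss instance (host is a placeholder)
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
        env.put(Context.PROVIDER_URL, "jnp://producer-host:1099");

        Context ctx = new InitialContext(env);
        // "ConnectionFactory" is the default JBossMQ binding
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Connection connection = cf.createConnection();
        connection.close();
    }
}
```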
To process a large number of messages coming to a queue, I need a guarantee that at least one JMS connection exists at any time. I am using Spring, and Spring only allows multiple sessions on a single connection. If that one and only connection fails, the application comes to a standstill until Spring reconnects to the JMS bridge.
So how can I create more than one connection to a queue in Spring, and how can I do connection pooling here?
The answer to this depends on whether you are using Spring inside a J2EE container (JBoss etc.) or in a standalone application.
Standalone: you'll find pooling connections to be a problem. Spring's SingleConnectionFactory can be set up to renew the connection on an exception, guaranteeing that at some point a connection will come online and start processing the queue again, but you'll still have the problem of waiting for that single connection to renew. Also, depending on which messaging implementation you're dealing with and how it does load balancing, you may find yourself stuck with a connection to a single node in a cluster.
If you are running in a container, you can rely on the container's connection factory, which will be much more robust. JBoss Messaging in the container, for instance, will fail over seamlessly to other nodes and handles pooling under the covers. But if you're working in the container, it is usually easier to bail on JmsTemplate (which kind of sucks) and use whatever the container provides.
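For the standalone case, the renew-on-exception setup mentioned above looks roughly like this, using ActiveMQ as an example provider (CachingConnectionFactory is Spring's SingleConnectionFactory subclass that adds session caching; the broker URL is a placeholder):

```java
import javax.jms.ConnectionFactory;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.connection.CachingConnectionFactory;

@Bean
public ConnectionFactory connectionFactory() {
    // the underlying provider factory; broker URL is a placeholder
    ActiveMQConnectionFactory target =
            new ActiveMQConnectionFactory("tcp://broker:61616");

    // CachingConnectionFactory extends SingleConnectionFactory:
    // one shared connection with a cache of sessions on top of it
    CachingConnectionFactory cachingFactory = new CachingConnectionFactory(target);
    cachingFactory.setSessionCacheSize(10);
    // recreate the shared connection when the provider signals an exception
    cachingFactory.setReconnectOnException(true);
    return cachingFactory;
}
```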