JmsTemplate and DefaultMessageListenerContainer - Spring

I have two queues in Red Hat ActiveMQ: one is used only for consuming, and the other for both producing and consuming object messages.
Once a message is consumed from the main queue it is pushed to the second queue for further processing. However, when using JmsTemplate, messages are getting lost at random.
I am using the same ActiveMQConnectionFactory bean for both DMLC containers and for the JmsTemplate.
How can I ensure that messages are not lost when using JmsTemplate?
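For context, a minimal sketch of the wiring described above, assuming Java-based Spring configuration; the broker URL, queue names, and bean names are my own placeholders, not taken from the original post:

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Configuration
public class JmsConfig {

    @Bean
    public ConnectionFactory connectionFactory() {
        // single connection factory bean shared by both containers and the JmsTemplate
        return new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder broker URL
    }

    @Bean
    public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
        // used to push messages consumed from the main queue to the second queue
        return new JmsTemplate(connectionFactory);
    }

    @Bean
    public DefaultMessageListenerContainer mainQueueContainer(ConnectionFactory connectionFactory,
                                                              JmsTemplate jmsTemplate) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("MAIN.QUEUE"); // placeholder queue name
        // forward each consumed message to the second queue for further processing
        container.setMessageListener((MessageListener) message ->
                jmsTemplate.send("SECOND.QUEUE", session -> message));
        return container;
    }

    // a second DefaultMessageListenerContainer, listening on SECOND.QUEUE, is declared the same way
}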

I would double-check that nobody else is consuming from the queues you use. If several instances of the application are running in some sort of shared development environment, they will compete for the messages. It could be another developer launching another instance of the app with the same ActiveMQ connection string, or a dev/stage environment.

Related

Integration with external systems over JMS in a clustered environment

I have an application where I created two message listener containers for external system A, which listen to two queues respectively.
I also have one message listener container running and listening to another queue of external system B. I am using Spring's DefaultMessageListenerContainer.
My application runs in a clustered environment. When defining my message listener container I injected into it my listener, which implements the javax.jms.MessageListener interface and acts as a kind of MDB.
So my questions are:
1. Is it normal to have an instance of a message listener container per queue?
2. Will my message-driven POJO (MDP) execute onMessage() on each application node?
3. If yes, how can I avoid it? I want each message to be consumed only once, on one of the application nodes.
4. What is the default behavior of DefaultMessageListenerContainer: is the message acknowledged as soon as onMessage() is entered, or after onMessage() finishes? Or do I need to acknowledge it manually?
See the Spring Framework JMS documentation and the JMS specification.
1. Yes, it is normal - a container can only listen to one destination.
2. It depends on the destination type: for a topic, each instance will get a copy of the message; for a queue, multiple listeners (consumers) compete for messages. This has nothing to do with Spring; it's the way JMS works.
3. See #2.
4. With the DMLC, the message is acknowledged immediately before calling the listener; set sessionTransacted = true so the ack is not committed until the listener exits. With a SimpleMessageListenerContainer, the message is ack'd when the listener exits. See the Javadocs for the DMLC and SMLC (as well as the abstract classes they subclass) for the differences.
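A minimal sketch of that setting, assuming a plain Spring setup (the queue name and listener are placeholders): with sessionTransacted = true, the acknowledgment is only committed when onMessage() returns normally, and an exception causes redelivery.

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class TransactedContainerFactory {

    public static DefaultMessageListenerContainer create(ConnectionFactory connectionFactory,
                                                         MessageListener listener) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("ORDERS.QUEUE"); // placeholder queue name
        container.setSessionTransacted(true);         // ack is committed only after onMessage() exits
        container.setMessageListener(listener);       // throwing from onMessage() rolls back -> redelivery
        return container;
    }
}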

Messages published to all consumers with the same consumer group in a spring-cloud-stream project

I have ZooKeeper and three Kafka brokers running locally.
I started one producer and one consumer, and I can see the consumer consuming messages.
I then started three consumers with the same consumer group name (on different ports, since it is a Spring Boot project). But what I found is that all of the consumers are now receiving messages. I expected the messages to be load-balanced across the consumers so that no message is repeated. I don't know what the problem is.
Here is my properties file:
spring.cloud.stream.bindings.input.destination=timerTopicLocal
spring.cloud.stream.kafka.binder.zkNodes=localhost
spring.cloud.stream.kafka.binder.brokers=localhost
spring.cloud.stream.bindings.input.group=timerGroup
Here the group is timerGroup.
consumer code : https://github.com/codecentric/edmp-sample-stream-sink
producer code : https://github.com/codecentric/edmp-sample-stream-source
Can you please update your dependencies to Camden.RELEASE (and start using Kafka 0.9+)? In Brixton.RELEASE the Kafka consumers were 0.8-based and required passing instanceIndex/instanceCount as properties in order to distribute partitions correctly.
In Camden.RELEASE we use the Kafka 0.9+ consumer client, which does load balancing the way you are expecting (we also support static partition allocation via instanceIndex/instanceCount, but I suspect that is not what you want). I can go into more detail on how to configure this with Brixton, but I think an upgrade would be a much easier path.
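For reference, a sketch of the Brixton-era workaround mentioned above, using the standard instanceCount/instanceIndex properties; the values are examples for three consumer instances:

# set on every consumer instance
spring.cloud.stream.instanceCount=3
# a different index (0, 1 or 2) on each of the three consumer instances
spring.cloud.stream.instanceIndex=0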

How does a topic Message Driven Bean behave in a WebSphere 8.5.5 cluster environment

What I would like is to run a Message Driven Bean that listens on a (WebSphere MQ 7) topic. I would like to deploy my application on a WebSphere 8.5.5 cluster containing two cluster members.
If a message arrives on the topic, I would expect only one of my two MDBs to get the message and process it.
IBM states that I should set identical client IDs and subscription names to ensure that only one instance is able to process a message on a topic:
http://www-01.ibm.com/support/docview.wss?uid=swg21442559
Will the second MDB receive the mentioned MQRC_SUBSCRIPTION_IN_USE exception, or will the cluster take care that one and only one MDB in the cluster consumes the topic message?
Maybe someone can point me to the IBM documentation where this behaviour is defined.
To allow multiple concurrent instances of an MDB to access the same subscription on an MQ queue manager, you can enable "Allow cloned durable subscriptions" in the activation spec for the MDB.
https://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/umj_pasm.html
Running like this, both instances of the MDB will start (no IN_USE errors) and each message for that single subscription will be processed by exactly one instance of the MDB. You would use this to workload-balance messages across the multiple WAS servers.
This is only true for durable subscriptions, and only when the MDB instances are connected to the same queue manager.
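To illustrate the MDB side, a sketch assuming the EJB 3 style (the class name is a placeholder); on WAS the topic, client ID, subscription name, and the "Allow cloned durable subscriptions" option are configured on the activation specification that the MDB is bound to, not in code:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
    @ActivationConfigProperty(propertyName = "subscriptionDurability", propertyValue = "Durable")
})
public class PriceUpdateMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // With identical client ID and subscription name on both cluster members and
        // "Allow cloned durable subscriptions" enabled on the activation spec,
        // each topic message is processed by only one of the two members.
    }
}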

JMS consumer on a different JVM

My application puts messages into a JMS queue. A bean that implements MDB and MessageListener pops messages from this queue. All of this happens in a single JVM.
What I want is for the MDB (and the other instances it would get from the pool for concurrent processing) to run in a different JVM. How can I do that? The application server I am using is JBoss 4.0.5.GA.
Thanks in advance.
If I understand correctly, you want to split your application into a "producer" part (which stays on the same server) and a "consumer" part (the MDBs, moved to another server), and still have the two communicate.
In that case you need to configure the ConnectionFactory on the "consumer" server to plug into the "producer" server's MQ. Have you read this part of the JBoss 4.x docs?
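As a rough illustration of the idea (the host, port, and JNDI names below are placeholders; for MDBs this is normally done through the consumer server's JMS provider configuration rather than application code), a consumer running in a separate JVM connects to the producer server's JMS roughly like this:

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteQueueConsumer {

    public static void main(String[] args) throws Exception {
        // JNDI settings pointing at the "producer" JBoss instance (placeholder host/port)
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.PROVIDER_URL, "jnp://producer-host:1099");

        InitialContext ctx = new InitialContext(env);
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory"); // placeholder JNDI name
        Queue queue = (Queue) ctx.lookup("queue/myQueue");                               // placeholder JNDI name

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);
        connection.start();

        System.out.println("Received: " + consumer.receive(5000));
        connection.close();
    }
}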

Queue consumer clusters with ActiveMQ

How do I configure a cluster of consumers in ActiveMQ?
I created a simple embedded ActiveMQ application with two consumers of one queue; the consumers work in separate threads. But when I send a message to the queue, JMS delivers it to the first consumer, no matter how long it sleeps after receiving.
I think you're trying to say that the first consumer is receiving all the messages. There is a FAQ entry for this type of problem available here:
http://activemq.apache.org/i-do-not-receive-messages-in-my-second-consumer.html
Bruce
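The linked FAQ describes the consumer prefetch limit: with ActiveMQ's default queue prefetch of 1000, the first consumer can buffer all pending messages before the second one gets any. Assuming that is what is happening here, a sketch of lowering the prefetch on the connection factory:

import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;

public class LowPrefetchConnectionFactory {

    public static ConnectionFactory create(String brokerUrl) {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);
        // Prefetch only one queue message at a time so undelivered messages stay on
        // the broker and can be dispatched to whichever consumer is free.
        factory.getPrefetchPolicy().setQueuePrefetch(1);
        return factory;
    }
}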
