I have a setup where up to 10 messages arrive in parallel on an SQS queue.
To consume them I am using JmsListener.
Here is my configuration:
public SQSConnectionFactory sqsConnectionFactory() {
// Create a new connection factory with all defaults (credentials and region) set automatically
return new SQSConnectionFactory(new ProviderConfiguration(),
AmazonSQSClientBuilder.standard().withRegion(Regions.AP_SOUTH_1)
.withCredentials(DefaultAWSCredentialsProviderChain.getInstance()).build());
}
@Bean("jmsListenerContainerFactory")
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(sqsConnectionFactory());
factory.setDestinationResolver(new DynamicDestinationResolver());
factory.setConcurrency("3-10");
factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
return factory;
}
To use it:
@JmsListener(destination = "queue.fifo",
containerFactory = "jmsListenerContainerFactory")
public void receiveCustomerStakeholderKyc(@Payload final Message<?> message) throws Exception {
}
When I run this, some messages never even reach my code. JMS does not consume them, and they end up in the dead-letter queue (queue_dead.fifo).
Queues:
1. queue.fifo
Queue Type: FIFO
Created: 2019-09-16 12:50:43 GMT+05:30
Last Updated: 2020-06-12 16:35:29 GMT+05:30
Default Visibility Timeout: 30 seconds
Message Retention Period: 4 days
Maximum Message Size: 256 KB
Receive Message Wait Time: 0 seconds
Delivery Delay: 0 seconds
Content-Based Deduplication: Enabled
Messages Available (Visible): 0
Messages in Flight (Not Visible): 0
Messages Delayed: 0
2. queue_dead.fifo
Queue Type: FIFO
Created: 2019-09-16 12:51:08 GMT+05:30
Last Updated: 2020-06-12 16:47:17 GMT+05:30
Default Visibility Timeout: 30 seconds
Message Retention Period: 4 days
Maximum Message Size: 256 KB
Receive Message Wait Time: 0 seconds
Delivery Delay: 0 seconds
Content-Based Deduplication: Disabled
Messages Available (Visible): 5
Messages in Flight (Not Visible): 0
Messages Delayed: 0
Is there anything I am missing?
When I look at the AWS console, it says the messages were received at a given time, but in my logs they never arrive.
Is there a way to enable SQS logs?
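On the logging question: one option (a sketch, assuming the v1 AWS SDK and the amazon-sqs-java-messaging library, using standard Spring Boot logging properties) is to raise the log level for the SQS JMS client package and the SDK's request logger:

# application.properties: log each SQS request the client makes
logging.level.com.amazon.sqs.javamessaging=DEBUG
logging.level.com.amazonaws.request=DEBUG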
I must say I made a silly mistake, but that can happen to anyone.
Reason for the mistake
We have a Dev and a QA account on AWS. To save money we merged the two accounts, but we left the SQS queues in separate accounts.
Way to access SQS queue
I was creating the SQS connection factory like this:
public SQSConnectionFactory sqsConnectionFactory() {
// Create a new connection factory with all defaults (credentials and region) set automatically
return new SQSConnectionFactory(new ProviderConfiguration(),
AmazonSQSClientBuilder.standard().withRegion(Regions.AP_SOUTH_1)
.withCredentials(DefaultAWSCredentialsProviderChain.getInstance()).build());
}
Built this way, JMS resolves the SQS queue URL from the AWS access key and secret key. Once we merged the dev and QA accounts, even the QA instance of my application was resolving a dev SQS URL.
Solution
I solved it by resolving the SQS destination with the AWS account ID instead of relying on the AWS access and secret keys.
Here is the resolver code; pass in the ownerAccountId:
public static class CustomDynamicDestinationResolver implements DestinationResolver {
private String ownerAccountId;
public CustomDynamicDestinationResolver(String ownerAccountId) {
this.ownerAccountId = ownerAccountId;
}
@Override
public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException {
Assert.notNull(session, "Session must not be null");
Assert.notNull(destinationName, "Destination name must not be null");
if (pubSubDomain) {
return resolveTopic(session, destinationName);
} else {
return resolveQueue(session, destinationName);
}
}
protected Topic resolveTopic(Session session, String topicName) throws JMSException {
return session.createTopic(topicName);
}
protected Queue resolveQueue(Session session, String queueName) throws JMSException {
Queue queue;
//LOGGER.info("Getting destination for libraryOwnerAccountId: {}, queueName: {}", ownerAccountId, queueName);
if (ownerAccountId != null && session instanceof SQSSession) {
queue = ((SQSSession) session).createQueue(queueName, ownerAccountId);
} else {
queue = session.createQueue(queueName);
}
return queue;
}
}
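For completeness, a sketch of how this resolver can be wired into the container factory from the question (the account ID value here is just a placeholder):

@Bean("jmsListenerContainerFactory")
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(sqsConnectionFactory());
    // Resolve queue names against the account that owns the queues,
    // not whichever account the current credentials belong to
    factory.setDestinationResolver(new CustomDynamicDestinationResolver("123456789012"));
    factory.setConcurrency("3-10");
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    return factory;
}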
Related
I'm currently trying to build a messaging application using JMS Listener and IBM MQ, and I need to ensure that I can run two instances of the same listener at the same time. However, I want to make sure that the second instance waits until the first instance has fully processed and acknowledged the message.
I'm using Spring Boot for my application, JMS Listener and IBM MQ.
Below is my config class, which is annotated with @Component and @EnableJms:
public class JmsConfig {
@Bean
public MQConnectionFactory getConnectionFactory() {
MQConnectionFactory connectionFactory = new MQConnectionFactory();
connectionFactory.setQueueManagerName("AA");
connectionFactory.setExpirationTimeOut(3600000);
connectionFactory.setPerformExpiration(true);
connectionFactory.setPerformValidation(true);
connectionFactory.setPerformOptimalSizeCheck(false);
connectionFactory.setValidationTimeOut(180000);
connectionFactory.setMinIdle(3);
connectionFactory.setMaxIdle(5);
return connectionFactory;
}

@Bean(name = "jmsListenerContainerFactory")
public JmsListenerContainerFactory<?> jmsListenerContainerFactory(MQConnectionFactory mqConnectionFactory) {
DefaultJmsListenerContainerFactory containerFactory = new DefaultJmsListenerContainerFactory();
containerFactory.setConnectionFactory(mqConnectionFactory);
// Note: with sessionTransacted(true) the session is transacted and the
// acknowledge mode below is effectively ignored
containerFactory.setSessionTransacted(true);
containerFactory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
return containerFactory;
}
}
Listener code:
@Component
public class Receiver {
public static int count = 0;

@JmsListener(destination = "${inwardQueueName}", containerFactory = "jmsListenerContainerFactory")
public void receiveMessage(javax.jms.Message message) throws javax.jms.JMSException, InterruptedException {
String messageText = "";
OrderObject order = null;
if (message instanceof javax.jms.TextMessage) {
messageText = ((TextMessage) message).getText();
order = // code to convert messageText to an OrderObject
System.out.println("Message picked up with Order Id : " + order.getOrderId());
TimeUnit.SECONDS.sleep(15);
count++;
if (count <= 4) {
throw new RuntimeException("Exception occurred");
}
}
TimeUnit.SECONDS.sleep(15);
System.out.println("Message Acknowledged for orderId: " + order.getOrderId());
message.acknowledge();
}
}
I start the first instance and push the first message to MQ with orderId 1. I can see the output below on the console; it prints the statement and then waits for 15 seconds.
Message picked up with Order Id : 1
Immediately, I start the second instance and push the second message to MQ with orderId 2. I can see the output below on the console; the second instance picks up the second message, with orderId 2:
Message Acknowledged for orderId: 2
After some time, when the first instance completes (since there is a 15-second wait), I can see the output below.
Message picked up with Order Id : 1
Message picked up with Order Id : 1
Message picked up with Order Id : 1
Message Acknowledged for orderId: 1
This output shows that the second instance picks up the second message from MQ and processes it in parallel, while the first instance is still processing the first message.
Can anyone help with what is wrong with the above implementation?
I have a Spring Boot application implementing WebSocket as well as Redis streams.
The flow is: a subscriber (to a Redis stream), upon receiving a message, sends that information to the WebSocket (using the STOMP protocol and AmazonMQ - ActiveMQ as an external message broker).
Example of one consumer group:
public Subscription fExecutionResponse(RedisTemplate<String, Object> redisTemplate) {
try {
String groupName = FStreamName.fExecutionConsumerGroup;
String streamKey = FStreamName.fExecutionStream;
createConsumerGroup(streamKey, groupName, redisTemplate);
val listenerContainer = listenerContainer(redisTemplate, FTradeDataRedisEvent.class);
val subscription = listenerContainer.receiveAutoAck(
Consumer.from(groupName, FStreamName.fExecutionConsumer),
StreamOffset.create(streamKey, ReadOffset.lastConsumed()),
message -> {
log.info("[Subscription F_EXECUTION]: {}", message.getValue());
FTradeDataRedisEvent mTradeEvent = message.getValue();
try {
if (ExternalConfiguration.futureTradeDataSource.equals(ExecutionSource.ALL) || ExternalConfiguration.futureTradeDataSource.equals(mTradeEvent.getSource())) {
futureProductService.updateProductByDummy(mTradeEvent);
futureExecutionToTickHistoricalService.transform(mTradeEvent);
}
} catch (Exception ex) {
log.error("[fTradeEventResponse] error: {}", ex.getMessage());
}
redisTemplate.opsForStream().acknowledge(streamKey, groupName, message.getId());
redisTemplate.opsForStream().delete(streamKey, message.getId());
});
listenerContainer.start();
log.info("A stream key `{}` had been successfully set up in the group `{}`", streamKey, groupName);
return subscription;
} catch (Exception ex) {
log.error(ex.getMessage(), ex);
}
return null;
}
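The snippet relies on helpers that are not shown (createConsumerGroup, listenerContainer). As a rough sketch, a listenerContainer helper built with Spring Data Redis might look like this (the poll timeout is an assumption):

private <T> StreamMessageListenerContainer<String, ObjectRecord<String, T>> listenerContainer(
        RedisTemplate<String, Object> redisTemplate, Class<T> targetType) {
    // Deserialize each stream entry directly into the target event type
    StreamMessageListenerContainer.StreamMessageListenerContainerOptions<String, ObjectRecord<String, T>> options =
            StreamMessageListenerContainer.StreamMessageListenerContainerOptions
                    .builder()
                    .pollTimeout(Duration.ofSeconds(1))
                    .targetType(targetType)
                    .build();
    return StreamMessageListenerContainer.create(redisTemplate.getRequiredConnectionFactory(), options);
}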
futureExecutionToTickHistoricalService.transform sends the data to the WebSocket using SimpMessageSendingOperations:
public void transform(FTradeDataRedisEvent tradeData) {
if (lastUpdateTickHistoricalDataTime == 0L) {
Calendar calendar = Calendar.getInstance();
this.lastUpdateTickHistoricalDataTime = calendar.getTimeInMillis();
}
List<FTickHistorical> res = separateFTickHistorical(tradeData);
res.forEach(tickHistorical -> {
List<KlineResponse> klineResponses = new ArrayList<>();
klineResponses.add(new KlineResponse(tickHistorical));
messageTemplate.convertAndSend(
PUBLIC_TOPIC_PREDIX + Constants.F_PRODUCTS_DESTINATION + "/" + tickHistorical.getProductId() + "/klines" + "_" + tickHistorical.getResolution().getCode(),
new HistoryResponse(klineResponses)
);
});
}
There are two problems with this setup; I have resolved one of them.
1. The Redis stream subscriber starts up before the connection to the external message broker is ready. Solved: listen for BrokerAvailabilityEvent and only start the Redis subscriptions once the broker is available.
2. When redeploying or shutting down the application in an IDE (like IntelliJ), the connection to the broker is again destroyed first (before the Redis stream subscribers), while there is still data being sent to the socket. This causes the error: Message broker not active.
I don't know how to configure the Spring Boot application so that, when it is stopped, it first stops consuming messages from the Redis stream, processes all pending messages, and only then closes the broker connection.
This is the error log when the application is destroyed.
14.756 INFO 41184 --- [extShutdownHook] c.i.c.foapi.SpotStreamSubscriber : Broker not available
2022-10-09 14:14:14.757 INFO 41184 --- [extShutdownHook] c.i.c.f.config.RedisSubConfiguration : Broker not available
2022-10-09 14:14:14.781 ERROR 41184 --- [cTaskExecutor-1] c.i.c.foapi.consumer.SpotStreamConsumer : [executionResponse] error: Message broker not active. Consider subscribing to receive BrokerAvailabilityEvent's from an ApplicationListener Spring bean.; value of message: ExecutionRedisEvent(id=426665086, eventTime=2022-10-09T14:13:45.056809, productId=2, price=277.16, quantity=0.08, buyerId=2, sellerId=3, createdDate=2022-10-09T14:13:45.056844, takerSide=SELL, orderBuyId=815776680, orderSellId=815776680, symbol=bnbusdt, source=BINANCE)
2022-10-09 14:14:14.785 ERROR 41184 --- [cTaskExecutor-1] c.i.c.foapi.consumer.SpotStreamConsumer : [dummyOrderBookResponse] error: Message broker not active. Consider subscribing to receive BrokerAvailabilityEvent's from an ApplicationListener Spring bean.; value of message: DummyOrderBookEvent(productId=1, bids=[DummyOrderBook(price=1941.6, qty=0), DummyOrderBook(price=18827.3, qty=0.013), DummyOrderBook(price=18938.8, qty=5.004), DummyOrderBook(price=18940.3, qty=22.196), DummyOrderBook(price=18982.5, qty=20.99), DummyOrderBook(price=19027.2, qty=0.33), DummyOrderBook(price=19045.8, qty=8.432)
This is the code of SpotStreamSubscriber
@Component
@RequiredArgsConstructor
@Slf4j
public class SpotStreamSubscriber implements ApplicationListener<BrokerAvailabilityEvent> {
private final SpotStreamConsumer spotStreamConsumer;
@Override
public void onApplicationEvent(BrokerAvailabilityEvent event) {
if (event.isBrokerAvailable()) {
log.info("Broker ready");
spotStreamConsumer.subscribe();
} else {
log.info("Broker not available");
}
}
}
As you can see, the message broker is destroyed before the pending Redis messages have a chance to be processed.
Current architecture:
We use an external message broker so that we can scale the API horizontally.
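One possible direction for the shutdown-ordering problem (a sketch, not a drop-in fix): wrap the Redis subscriptions in a SmartLifecycle bean with a high phase. Spring stops lifecycle beans in descending phase order on shutdown, so a high-phase bean is stopped before lower-phase infrastructure such as the STOMP broker relay, letting you stop polling and drain in-flight messages while the broker connection is still up. The injected container field is an assumption about how your subscriptions are held:

@Component
public class RedisStreamLifecycle implements SmartLifecycle {

    private final StreamMessageListenerContainer<String, ?> container; // assumed injectable
    private volatile boolean running;

    public RedisStreamLifecycle(StreamMessageListenerContainer<String, ?> container) {
        this.container = container;
    }

    @Override
    public void start() {
        // Actual startup can stay tied to BrokerAvailabilityEvent, as in SpotStreamSubscriber
        running = true;
    }

    @Override
    public void stop() {
        container.stop(); // stop polling; in-flight handlers finish while the broker is still reachable
        running = false;
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    @Override
    public int getPhase() {
        return Integer.MAX_VALUE; // highest phase: stopped first on shutdown
    }
}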
I'm testing Kafka broker-down scenarios in my service. Is there any way to execute a piece of code once the service reconnects to the broker?
For example:
When the producer fails to send a message or throws an exception, the service stores the message in a temp table so it can be resent once the service reconnects to the broker.
@Transactional
@EventListener
public void detailsObjectEventListener(EntityObjectEvent<DetailsEntity> event) {
StreamPayload<Details> payload = new StreamPayload<>(event.getType(), new Details(event.getSource()));
MessageChannel channel = BootstrapApplication.getMessageChannel(DetailsChannel.class);
boolean sent;
try {
sent = channel.send(MessageBuilder.withPayload(payload).build());
} catch (Exception ex) {
sent = false;
}
if (!sent) {
entityManager.persist(new FailedStreamEventEntity(event));
}
}
To handle the problem I made a cron-job method that republishes the failed events when the broker is UP.
@Scheduled(cron = "0 * * * * ?")
public void reSendFailedEvents() {
if (kafkaIndicator.health().getStatus().equals(Status.UP)) {
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
Optional.of(FailedStreamEventEntity.class)
.map(builder::createQuery)
.map(query -> query.select(query.from(FailedStreamEventEntity.class)))
.map(entityManager::createQuery)
.map(TypedQuery::getResultList)
.map(List::stream)
.orElse(Stream.empty())
.peek(entityManager::remove)
.map(FailedStreamEventEntity::getEvent)
.forEach(BootstrapApplication.getEventPublisher()::publishEvent);
}
}
But I believe this is the worst way to handle the problem, and that there should be a better way to execute this code once the service reconnects to the broker. (Any help?)
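One alternative sketch, instead of a blind cron: poll the cluster with Kafka's org.apache.kafka.clients.admin.AdminClient and fire the republish logic only on a DOWN-to-UP transition. BrokerReconnectedEvent and the watcher class itself are hypothetical names, not part of the original code:

@Component
public class BrokerAvailabilityWatcher {

    private final ApplicationEventPublisher publisher;
    private final AdminClient admin;
    private volatile boolean up = true;

    public BrokerAvailabilityWatcher(ApplicationEventPublisher publisher, KafkaAdmin kafkaAdmin) {
        this.publisher = publisher;
        // Reuse Spring Boot's admin properties (bootstrap servers etc.)
        this.admin = AdminClient.create(kafkaAdmin.getConfigurationProperties());
    }

    @Scheduled(fixedDelay = 10_000)
    public void check() {
        boolean nowUp;
        try {
            // Any successful metadata round-trip means the broker is reachable
            admin.describeCluster().nodes().get(5, TimeUnit.SECONDS);
            nowUp = true;
        } catch (Exception ex) {
            nowUp = false;
        }
        if (nowUp && !up) {
            publisher.publishEvent(new BrokerReconnectedEvent()); // hypothetical event type
        }
        up = nowUp;
    }
}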
Working versions in the app
IBM AllClient version : 'com.ibm.mq:com.ibm.mq.allclient:9.1.1.0'
org.springframework:spring-jms : 4.3.9.RELEASE
javax.jms:javax.jms-api : 2.0.1
My requirement is that in case of a message-processing failure, say because a consumer dependency is unavailable (e.g. the DB is down), the message remains on the queue or is put back on the queue (if that is even possible). This is because the order of the messages is important; messages have to be consumed in the same order they are received. The Java app is single-threaded.
I have tried the following
@Override
public void onMessage(Message message)
{
try{
if (message instanceof TextMessage)
{
}
:
:
throw new Exception("Test");// Just to test the retry
}
catch(Exception ex)
{
try
{
int temp = message.getIntProperty("JMSXDeliveryCount");
throw new RuntimeException("Redelivery attempted");
// At this point, I am expecting JMS to put the message back into the queue.
// But it is actually put into the Backout queue.
}
catch(JMSException ef)
{
String temp = ef.getMessage();
}
}
}
I have set this in my spring.xml for the jmsContainer bean.
<property name="sessionTransacted" value="true" />
What is wrong with the code above?
And if putting the message back on the queue is not practical, how can one browse the message, process it, and, if successful, pull the message (so it is consumed and no longer on the queue)? Is this scenario supported in the IBM provider for JMS?
The IBM MQ Local queue has BOTHRESH(1).
To preserve message ordering, one approach might be to stop the message listener temporarily as part of your rollback strategy. Looking at the Spring documentation for DefaultMessageListenerContainer, there is a stop(Runnable callback) method. I've experimented with using this in a rollback as follows.
To ensure my listener is single-threaded, on my DefaultJmsListenerContainerFactory I set containerFactory.setConcurrency("1").
In my listener, I set an id:
@JmsListener(destination = "DEV.QUEUE.2", containerFactory = "listenerTwoFactory", concurrency = "1", id = "listenerTwo")
And retrieve the DefaultMessageListenerContainer instance.
JmsListenerEndpointRegistry reg = context.getBean(JmsListenerEndpointRegistry.class);
DefaultMessageListenerContainer mlc = (DefaultMessageListenerContainer) reg.getListenerContainer("listenerTwo");
For testing, I check JMSXDeliveryCount and throw an exception to rollback.
retryCount = Integer.parseInt(msg.getStringProperty("JMSXDeliveryCount"));
if (retryCount < 5) {
throw new Exception("Rollback test "+retryCount);
}
In the Listener's catch processing, I call stop(Runnable callback) on the DefaultMessageListenerContainer instance and pass in a new class ContainerTimedRestart as defined below.
//catch processing here and decide to rollback
mlc.stop(new ContainerTimedRestart(mlc,delay));
System.out.println("#### "+getClass().getName()+" Unable to process message.");
throw new Exception();
ContainerTimedRestart implements Runnable, and DefaultMessageListenerContainer is responsible for invoking the run() method when the stop call completes.
public class ContainerTimedRestart implements Runnable {
//Container instance to restart.
private DefaultMessageListenerContainer theMlc;
//Default delay before restart in mills.
private long theDelay = 5000L;
//Basic constructor for testing.
public ContainerTimedRestart(DefaultMessageListenerContainer mlc, long delay) {
theMlc = mlc;
theDelay = delay;
}
public void run() {
//Validate container instance.
try {
System.out.println("#### " + getClass().getName() + " Waiting for " + theDelay + " millis.");
Thread.sleep(theDelay);
System.out.println("#### " + getClass().getName() + " Restarting container.");
theMlc.start();
System.out.println("#### " + getClass().getName() + " Container started!");
} catch (InterruptedException ie) {
ie.printStackTrace();
//Further checks and ensure container is in correct state.
//Report errors.
}
}
}
I loaded my queue with three messages, with payloads "a", "b", and "c" respectively, and started the listener.
Checking DEV.QUEUE.2 on my queue manager, I see IPPROCS(1), confirming only one application handle has the queue open. The messages are processed in order after each is rolled back five times, with a 5-second delay between rollback attempts.
IBM MQ classes for JMS has poison message handling built in. This handling is based on the QLOCAL setting BOTHRESH, which stands for Backout Threshold. Each IBM MQ message has a header called the MQMD (MQ Message Descriptor). One of the fields in the MQMD is BackoutCount; its default value on a new message is 0. Each time a message is rolled back to the queue, this count is incremented by 1. A rollback can come from a specific call to rollback(), or from the application being disconnected from MQ before commit() is called (due to a network issue, for example, or the application crashing).
Poison message handling is disabled if you set BOTHRESH(0).
If BOTHRESH is >= 1, poison message handling is enabled, and when IBM MQ classes for JMS reads a message from a queue it checks whether the BackoutCount is >= BOTHRESH. If the message is eligible for poison message handling, it is moved to the queue specified in the BOQNAME attribute. If that attribute is empty, or the application does not have access to PUT to that queue for some reason, the client instead attempts to put the message to the queue specified in the queue manager's DEADQ attribute; if it can put to neither of these locations, the message is rolled back to the queue.
You can find more detailed information on IBM MQ classes for JMS poison message handling in the IBM MQ v9.1 Knowledge Center page Developing applications>Developing JMS and Java applications>Using IBM MQ classes for JMS>Writing IBM MQ classes for JMS applications>Handling poison messages in IBM MQ classes for JMS
In Spring JMS you can define your own container; one container is created per JMS destination. To maintain message ordering, run a single-threaded JMS listener by setting the concurrency to 1.
We can design the container to return null once it encounters an error: after a failure, all receive calls return null, so no messages are polled from the destination until it is considered active again. We can track the active state with a simple millisecond timestamp. The JMS config below should be sufficient to add backoff. Instead of continuously returning null from receiveMessage, you could also sleep briefly (for example, 10 seconds) before the next call to save some CPU.
@Configuration
@EnableJms
public class JmsConfig {
@Bean
public JmsListenerContainerFactory<?> jmsContainerFactory(ConnectionFactory connectionFactory,
DefaultJmsListenerContainerFactoryConfigurer configurer) {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory() {
@Override
protected DefaultMessageListenerContainer createContainerInstance() {
return new DefaultMessageListenerContainer() {
private long deactivatedTill = 0;
@Override
protected Message receiveMessage(MessageConsumer consumer) throws JMSException {
if (deactivatedTill < System.currentTimeMillis()) {
return receiveFromConsumer(consumer, getReceiveTimeout());
}
logger.info("Disabled due to failure :(");
return null;
}
@Override
protected void doInvokeListener(MessageListener listener, Message message)
throws JMSException {
try {
super.doInvokeListener(listener, message);
} catch (Exception e) {
handleException(message);
throw e;
}
}
private long getDelay(int retryCount) {
if (retryCount <= 1) {
return 20;
}
return (long) (20 * Math.pow(2, retryCount));
}
private void handleException(Message msg) throws JMSException {
if (msg.propertyExists("JMSXDeliveryCount")) {
int retryCount = msg.getIntProperty("JMSXDeliveryCount");
deactivatedTill = System.currentTimeMillis() + getDelay(retryCount);
}
}
@Override
protected void doInvokeListener(SessionAwareMessageListener listener, Session session,
Message message)
throws JMSException {
try {
super.doInvokeListener(listener, session, message);
} catch (Exception e) {
handleException(message);
throw e;
}
}
};
}
};
// This provides all boot's default to this factory, including the message converter
configurer.configure(factory, connectionFactory);
// You could still override some of Boot's default if necessary.
return factory;
}
}
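A listener using this factory might then look like the sketch below; concurrency 1 preserves the single-consumer ordering described above (the queue name and process() call are illustrative):

@JmsListener(destination = "ORDERS.QUEUE", containerFactory = "jmsContainerFactory", concurrency = "1")
public void onMessage(String payload) {
    // An exception thrown here reaches doInvokeListener, which backs the container off via handleException
    process(payload);
}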
I have a Kubernetes cluster with an ActiveMQ Artemis queue, and I am using an HPA for autoscaling microservices. The messages are sent via QpidSender and received via JmsListener.
Messaging works, but I am not able to configure the queue/listener in a way that makes autoscaling work as expected.
This is my Qpid sender
public static void send(String avroMessage, String task) throws JMSException, NamingException {
Connection connection = createConnection();
connection.start();
Session session = createSession(connection);
MessageProducer messageProducer = createProducer(session);
TextMessage message = session.createTextMessage(avroMessage);
message.setStringProperty("task", task);
messageProducer.send(
message,
DeliveryMode.NON_PERSISTENT,
Message.DEFAULT_PRIORITY,
Message.DEFAULT_TIME_TO_LIVE);
connection.close();
}
private static MessageProducer createProducer(Session session) throws JMSException {
Destination producerDestination =
session.createQueue("queue?consumer.prefetchSize=1&heartbeat='10000'");
return session.createProducer(producerDestination);
}
private static Session createSession(Connection connection) throws JMSException {
return connection.createSession(Session.AUTO_ACKNOWLEDGE);
}
private static Connection createConnection() throws NamingException, JMSException {
Hashtable<Object, Object> env = new Hashtable<>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.qpid.jms.jndi.JmsInitialContextFactory");
env.put("connectionfactory.factoryLookup", amqUrl);
Context context = new javax.naming.InitialContext(env);
ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("factoryLookup");
// Note: this pooled factory is configured but never used;
// the connection returned below comes from the plain factory.
PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory();
pooledConnectionFactory.setConnectionFactory(connectionFactory);
pooledConnectionFactory.setMaxConnections(10);
return connectionFactory.createConnection(amqUsername, amqPassword);
}
This is my Listener config
@Bean
public JmsConnectionFactory jmsConnection() {
JmsConnectionFactory jmsConnection = new JmsConnectionFactory();
jmsConnection.setRemoteURI(this.amqUrl);
jmsConnection.setUsername(this.amqUsername);
jmsConnection.setPassword(this.amqPassword);
return jmsConnection;
}
#Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(jmsConnection());
return factory;
}
And here is my Listener
@JmsListener(
destination = "queue?consumer.prefetchSize=1&heartbeat='10000'",
selector = "task = 'myTask'"
)
public void receiveMsg(Message message) throws IOException, JMSException {
message.acknowledge();
doStuff();
}
I send the message like this
QpidSender.send(avroMessage, "myTask");
This setup works. I can send different messages, and as soon as there are more than 2, the second instance of my service starts and consumes messages. If the message count later drops below 2, that instance is terminated.
The problem is: I don't want the message to be acknowledged before doStuff(), because if something goes wrong, or the service is terminated before finishing doStuff(), the message is lost (right?).
But if I reorder it to
doStuff();
message.acknowledge();
the second instance cannot receive a message from the broker as long as the first service is still in doStuff() and hasn't acknowledged the message.
How do I configure this so that more than one instance can consume messages from the queue, but a message isn't lost if the service gets terminated or something else fails in doStuff()?
Use factory.setSessionTransacted(true).
See the javadocs for DefaultMessageListenerContainer:
* <p><b>It is strongly recommended to either set {@link #setSessionTransacted
* "sessionTransacted"} to "true" or specify an external {@link #setTransactionManager
* "transactionManager"}.</b> See the {@link AbstractMessageListenerContainer}
* javadoc for details on acknowledge modes and native transaction options, as
* well as the {@link AbstractPollingMessageListenerContainer} javadoc for details
* on configuring an external transaction manager. Note that for the default
* "AUTO_ACKNOWLEDGE" mode, this container applies automatic message acknowledgment
* before listener execution, with no redelivery in case of an exception.
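For illustration, a minimal sketch of the question's setup with a transacted session (assuming the same jmsConnection() bean): the container commits after the listener returns and rolls back on an exception, so manual acknowledge() is no longer needed and a message being processed when an instance dies is redelivered:

@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(jmsConnection());
    factory.setSessionTransacted(true); // commit on listener return, roll back on exception
    return factory;
}

@JmsListener(destination = "queue?consumer.prefetchSize=1&heartbeat='10000'", selector = "task = 'myTask'")
public void receiveMsg(Message message) throws IOException, JMSException {
    doStuff(); // if this throws or the pod dies, the broker redelivers the message
    // no message.acknowledge(): the transacted session handles it
}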