javax.jms.JMSException: Duplicate durable subscription detected - spring-boot

I have a JMS application deployed as a Docker image in AWS Fargate. Two services are running for the task. However, the problem is that I am getting this error:
2021-03-24 05:15:43.022 ERROR 1 --- [ main] com.hp.ext.cpq.pubsub.SnsTopicPublisher : Exception happened in readJmsTopicPublishToSnsTopic --->javax.jms.JMSException: Duplicate durable subscription detected
This is the code I am using to create the durable subscriber:
SnsTopicPublisher asyncSubscriber = this.ctx.getBean(SnsTopicPublisher.class);

if (prop.getProperty("tibco.msgSourceType").equalsIgnoreCase("TOPIC")) {
    dest_t = session.createTopic(prop.getProperty("tibco.msgSource"));
    // The line below is the one throwing the exception
    TopicSubscriber topicSubscriber = session.createDurableSubscriber(dest_t, "pfpDurable");
    topicSubscriber.setMessageListener(asyncSubscriber);
    logger.debug("Set Jms Topic Listener ---> asyncSubscriber");
}

if (prop.getProperty("tibco.msgSourceType").equalsIgnoreCase("QUEUE")) {
    dest_q = session.createQueue(prop.getProperty("tibco.msgSource"));
    MessageConsumer msgConsumer_p = session.createConsumer(dest_q);
    msgConsumer_p.setMessageListener(asyncSubscriber);
    logger.debug("Set Jms Queue Listener ---> asyncSubscriber");
}
I am getting the error at the marked line, as seen in the AWS CloudWatch logs.

Most likely you have a leak of connections (and of other JMS objects in general). When an exception is thrown you still need to close the resources, so do the cleanup in a finally {} block, similar to the JDBC pattern.
Also, you might want to look at using a pooled connection factory. This allows an open/close pattern on JMS connections without actually closing the connections to the server. Check out activemq-jms-pool, which is a JMS-standard pool (not ActiveMQ-specific) that works with most JMS brokers, including Tibco EMS and IBM MQ.
Connection connection = null;
Session session = null;
MessageConsumer messageConsumer = null;
try {
    connection = connectionFactory.createConnection();
    connection.start();
    // ... do some JMS work (create the session and consumer here)
} catch (JMSException e) {
    // handle errors
} finally {
    if (messageConsumer != null) {
        try { messageConsumer.close(); } catch (JMSException e) { logger.error("Error closing MessageConsumer", e); }
    }
    if (session != null) {
        try { session.close(); } catch (JMSException e) { logger.error("Error closing Session", e); }
    }
    if (connection != null) {
        try { connection.close(); } catch (JMSException e) { logger.error("Error closing Connection", e); }
    }
}
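For the pooled-connection suggestion above, here is a minimal sketch using activemq-jms-pool's PooledConnectionFactory; the vendor factory lookup, pool size, and where you wire it in are illustrative assumptions, not taken from the original code:
// org.apache.activemq.jms.pool.PooledConnectionFactory wraps any JMS ConnectionFactory.
ConnectionFactory vendorFactory = lookupTibcoConnectionFactory(); // hypothetical helper, e.g. a JNDI lookup
PooledConnectionFactory pooledFactory = new PooledConnectionFactory();
pooledFactory.setConnectionFactory(vendorFactory);
pooledFactory.setMaxConnections(10); // illustrative sizing

// Use pooledFactory wherever connectionFactory is used today; createConnection()/close()
// now borrow and return pooled connections instead of opening real ones each time.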
Note: with the createDurableSubscriber API used in the code, the JMS specification allows only one active consumer per durable subscription (identified by client ID plus subscription name), so a second running instance creating the same "pfpDurable" subscription is rejected. JMS 2.0 adds Shared Durable Subscriptions to support multiple concurrent consumers.
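If your broker and client library support JMS 2.0 (an assumption to verify for your Tibco EMS version), the shared variant would look roughly like this, letting both Fargate instances attach to the same subscription name:
// JMS 2.0 shared durable subscription: multiple consumers may attach concurrently.
MessageConsumer sharedSubscriber = session.createSharedDurableConsumer(dest_t, "pfpDurable");
sharedSubscriber.setMessageListener(asyncSubscriber);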

Related

IBM-MQ (ibm.mq.allclient:jar:9.2.2.0) Spring Boot dynamic listener fails with too many MQ connections with compcode '2' ('MQCC_FAILED') reason 2537

Error : MQ call failed with compcode '2' ('MQCC_FAILED') reason '2537' ('MQRC_CHANNEL_NOT_AVAILABLE')
At start-up of the application we fetch the queue names dynamically and create the JMS listeners programmatically. Messages are processed within a transaction and, if the transaction fails, the message rolls back to the IBM-MQ queue.
public DefaultJmsListenerContainerFactory mqJmsListenerContainerFactory() throws JMSException {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(ibmConnectionFactory);
    factory.setDestinationResolver(new DynamicDestinationResolver());
    factory.setSessionTransacted(true);
    factory.setConcurrency(requestConcurrency);
    return factory;
}
for (IBMmqQueue aTradeQueue : requestMQQueueNames) { // we have dynamic queue names
    if (StringUtils.isEmpty(aTradeQueue.getTradeRequestName())) {
        continue;
    }
    SimpleJmsListenerEndpoint endpoint = new SimpleJmsListenerEndpoint();
    endpoint.setId(aTradeQueue.getTradeRequestName());
    endpoint.setDestination(aTradeQueue.getTradeRequestName());
    endpoint.setMessageListener(message -> {
        // This starts a new transaction scope; "null" gives the default transaction definition
        TransactionStatus status = jmsTransactionManager.getTransaction(null);
        try {
            ibmmqConsumer.tradeListener(message, aTradeQueue.getTradeRequestName());
            jmsTransactionManager.commit(status);
        } catch (Exception e) {
            jmsTransactionManager.rollback(status);
            try {
                LOG.error("Failed to publish to Active-MQ & rolled back to IBM-MQ : " + message.getJMSMessageID());
            } catch (JMSException ex) {
                ex.printStackTrace();
            }
        }
    });
    try {
        registrar.setContainerFactory(this.mqJmsListenerContainerFactory());
    } catch (JMSException e) {
        e.printStackTrace();
    }
    registrar.registerEndpoint(endpoint);
}

Replay/synchronous messages memory not released for messages sent by producers to queue in SpringBoot JMS with ActiveMQ

1. Context:
A two-module/microservice application developed with Spring Boot 2.3.0 and ActiveMQ.
We use the ActiveMQ 5.15.13 server/broker.
The broker is configured in both modules via application properties.
The broker connection pool is also configured in both modules via application properties, and both modules include the pooled-jms artifact dependency (via Maven):
spring.activemq.broker-url=xxx
spring.activemq.user=xxx
spring.activemq.password=xx
spring.activemq.non-blocking-redelivery=true
spring.activemq.pool.enabled=true
spring.activemq.pool.time-between-expiration-check=5s
spring.activemq.pool.max-connections=10
spring.activemq.pool.max-sessions-per-connection=10
spring.activemq.pool.idle-timeout=60s
Other JMS configuration I have done:
spring.jms.listener.acknowledge-mode=auto
spring.jms.listener.auto-startup=true
spring.jms.listener.concurrency=5
spring.jms.listener.max-concurrency=10
spring.jms.pub-sub-domain=false
spring.jms.template.priority=100
spring.jms.template.qos-enabled=true
spring.jms.template.delivery-mode=persistent
In module 1 the JmsTemplate is used to send synchronous messages (we can also call them reply messages). I opted for a proper queue instead of a temporary queue, since I understand that a temporary queue is not recommended for replies when lots of messages are sent - so that's what I did.
2. Code samples:
MODULE 1:
#Value("${app.request-video.jms.queue.name}")
private String requestVideoQueueNameAppProperty;
#Bean
public Queue requestVideoJmsQueue() {
logger.info("Initializing requestVideoJmsQueue using application property value for " +
"app.request-video.jms.queue.name=" + requestVideoQueueNameAppProperty);
return new ActiveMQQueue(requestVideoQueueNameAppProperty);
}
#Value("${app.request-video-replay.jms.queue.name}")
private String requestVideoReplayQueueNameAppProperty;
#Bean
public Queue requestVideoReplayJmsQueue() {
logger.info("Initializing requestVideoReplayJmsQueue using application property value for " +
"app.request-video-replay.jms.queue.name=" + requestVideoReplayQueueNameAppProperty);
return new ActiveMQQueue(requestVideoReplayQueueNameAppProperty);
}
#Autowired
private JmsTemplate jmsTemplate;
public Message callSendAndReceive(TextJMSMessageDTO messageDTO, Destination jmsDestination, Destination jmsReplay) {
    return jmsTemplate.sendAndReceive(jmsDestination, jmsSession -> {
        try {
            TextMessage textMessage = jmsSession.createTextMessage();
            textMessage.setText(messageDTO.getText());
            textMessage.setJMSReplyTo(jmsReplay);
            textMessage.setJMSCorrelationID(UUID.randomUUID().toString());
            textMessage.setJMSDeliveryMode(DeliveryMode.NON_PERSISTENT);
            return textMessage;
        } catch (JMSException e) {
            logger.error("Error sending JMS message to destination: " + jmsDestination, e);
            throw new JMSException("Error sending JMS message to destination: " + jmsDestination);
        }
    });
}
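For context, a call site for the method above might look roughly like this; the autowired queue beans come from the configuration shown earlier, while the DTO construction (setText) is an assumption for illustration:
@Autowired
private Queue requestVideoJmsQueue;

@Autowired
private Queue requestVideoReplayJmsQueue;

public void requestVideo(String text) {
    TextJMSMessageDTO dto = new TextJMSMessageDTO();
    dto.setText(text); // assumes a simple setter on the DTO
    // Blocks until a reply arrives or the configured receive timeout elapses.
    Message reply = callSendAndReceive(dto, requestVideoJmsQueue, requestVideoReplayJmsQueue);
    // ... handle the reply
}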
MODULE 2:
@JmsListener(destination = "${app.backend-get-request-video.jms.queue.name}")
public void onBackendGetRequestsVideoMessage(TextMessage message, Session session) throws JMSException, IOException {
    logger.info("Get requests video file message consumed!");
    try {
        Object replayObject = handleReplayAction(message);
        JMSMessageDTO messageDTO = messageDTOFactory.getJMSMessageDTO(replayObject);
        Message replayMessage = messageFactory.getJMSMessage(messageDTO, session);
        BytesMessage replayBytesMessage = session.createBytesMessage();
        fillByteMessageFromMediaDTO(replayBytesMessage, mediaMessageDTO);
        replayBytesMessage.setJMSCorrelationID(message.getJMSCorrelationID());
        final MessageProducer producer = session.createProducer(message.getJMSReplyTo());
        producer.send(replayBytesMessage);
        JmsUtils.closeMessageProducer(producer);
    } catch (JMSException | IOException e) {
        logger.error("onBackendGetRequestsVideoMessage() JMSException: " + e.getMessage(), e);
        throw e;
    }
}
private void fillByteMessageFromMediaDTO(BytesMessage bytesMessage, MediaJMSMessageDTO mediaMessageDTO)
        throws IOException, JMSException {
    String filePath = fileStorageConfiguration.getMediaFilePath(mediaMessageDTO);
    try (FileInputStream fileInputStream = new FileInputStream(filePath)) {
        byte[] byteBuffer = new byte[1024];
        int bytes_read = 0;
        while ((bytes_read = fileInputStream.read(byteBuffer)) != -1) {
            bytesMessage.writeBytes(byteBuffer, 0, bytes_read);
        }
    } catch (JMSException e) {
        logger.error("Can not write data in JMS BytesMessage from file: " + filePath, e);
    } catch (FileNotFoundException e) {
        logger.error("Can not open stream to file: " + filePath, e);
    } catch (IOException e) {
        logger.error("Can not read data from file: " + filePath, e);
    }
}
3. The problem:
As I send many messages and receive many corresponding replies through the producer/consumer/JmsTemplate, both application modules 1 and 2 quickly fill the allocated heap memory until an out-of-memory error is thrown, but the memory leak appears only when using synchronous messages with replies as shown above.
I've debugged my code and all instances (sessions, producers, consumers, JmsTemplate, etc.) are pooled and are instances of the right classes from the pooled-jms library, so the pool should - apparently - be working properly.
I've made a heap dump of the second module and it looks like producer messages (ActiveMQBytesMessage) are still in memory a long time after they have been successfully consumed by the right consumer.
I also have asynchronous messages sent in my modules and that producer/consumer pair seems to work well; the problem is present only for the sync/reply message producer/consumer.
Sample heap dump files, taken after a full night of application inactivity: module 1 (module_1_dump), module 2 (module_2_dump), ActiveMQ broker/server (activemq_dump).
Anyone have any idea what I'm doing wrong?!

JMS queue static snapshot

I need to browse a JMS queue and filter it based on how many messages matching particular criteria exist.
The problem is that in JBoss EAP, while browsing the queue, new messages that arrive are also included in the browse, which makes the process run very long because this application continuously receives a lot of messages.
Basically I need to understand whether I can get a static snapshot of the queue, so that I can scan through the messages without considering new and upcoming messages.
PS: This was working fine in WebLogic Server.
Here's the browser code:
Context namingContext = null;
try {
    String userName = System.getProperty("username", DEFAULT_USERNAME);
    String password = System.getProperty("password", DEFAULT_PASSWORD);

    // Set up the namingContext for the JNDI lookup
    final Properties env = new Properties();
    env.put(Context.INITIAL_CONTEXT_FACTORY, INITIAL_CONTEXT_FACTORY);
    env.put(Context.PROVIDER_URL, System.getProperty(Context.PROVIDER_URL, PROVIDER_URL));
    env.put(Context.SECURITY_PRINCIPAL, userName);
    env.put(Context.SECURITY_CREDENTIALS, password);
    namingContext = new InitialContext(env);

    // Perform the JNDI lookups
    String connectionFactoryString = System.getProperty("connection.factory", DEFAULT_CONNECTION_FACTORY);
    ConnectionFactory connectionFactory = (ConnectionFactory) namingContext.lookup(connectionFactoryString);

    try (JMSContext context = connectionFactory.createContext(userName, password)) {
        Queue queue = (Queue) namingContext.lookup("jms/ubsexecute");
        QueueBrowser browser = context.createBrowser(queue);
        Enumeration enumeration = browser.getEnumeration();
        int i = 1;
        while (enumeration.hasMoreElements()) {
            Object nextElement = enumeration.nextElement();
            System.out.println("Read a message " + i++);
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    } catch (NamingException e) {
        log.severe(e.getMessage());
        e.printStackTrace();
    } finally {
        if (namingContext != null) {
            try {
                namingContext.close();
            } catch (NamingException e) {
                log.severe(e.getMessage());
            }
        }
    }
} catch (Exception e) {
    // TODO: handle exception
}
As noted in the JavaDoc for javax.jms.QueueBrowser:
Messages may be arriving and expiring while the scan is done. The JMS API does not require the content of an enumeration to be a static snapshot of queue content. Whether these changes are visible or not depends on the JMS provider.
The content of the enumeration provided by the queue browser from the JMS provider in JBoss EAP is not static and there is no way to force it to be.
Since the behavior you're looking for is not guaranteed by JMS, I recommend you adjust your application so that it doesn't rely on that behavior.
A couple of alternatives come to mind:
Set an upper limit on how many messages the browser will inspect (see the sketch after this list).
Use a provider-specific management call to get the number of messages in the queue before the browser is created and then only browse through that number of messages.
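For the first alternative, a minimal sketch of capping the browse; the MAX_MESSAGES_TO_BROWSE constant and the property-based criteria are placeholders, not from the original code:
// Take the queue size "as of now" by capping how many messages the browser will inspect.
final int MAX_MESSAGES_TO_BROWSE = 10_000; // placeholder limit

QueueBrowser browser = context.createBrowser(queue);
Enumeration<?> enumeration = browser.getEnumeration();
int inspected = 0;
int matching = 0;
while (enumeration.hasMoreElements() && inspected < MAX_MESSAGES_TO_BROWSE) {
    Message candidate = (Message) enumeration.nextElement();
    inspected++;
    // Apply whatever criteria you need here; a string property check is just an example.
    if ("SOME_VALUE".equals(candidate.getStringProperty("someProperty"))) {
        matching++;
    }
}
System.out.println("Inspected " + inspected + " messages, " + matching + " matched the criteria");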

Spring JMS Template Sync Receive on Non-Durable Subscriber Message Loss

1. Background
We are evaluating Spring JMS and testing out the JMSTemplate for various scenarios - queues, topics (durable, non-durable).
We experienced message loss for non-durable topic subscribers and would like to seek clarifications here.
2. Problem Statement
a) We wrote a standalone Java program that calls the JmsTemplate.receive method every n seconds to receive messages synchronously from a non-durable topic.
b) We noticed that there is always message loss after the first invocation of the JmsTemplate.receive method. This is because JmsTemplate.receive stops the connection when it reaches ConnectionFactoryUtils.releaseConnection(...).
JMSTemplate:
public <T> T execute(SessionCallback<T> action, boolean startConnection) throws JmsException {
    Assert.notNull(action, "Callback object must not be null");
    Connection conToClose = null;
    Session sessionToClose = null;
    try {
        Session sessionToUse = ConnectionFactoryUtils.doGetTransactionalSession(
                getConnectionFactory(), this.transactionalResourceFactory, startConnection);
        if (sessionToUse == null) {
            conToClose = createConnection();
            sessionToClose = createSession(conToClose);
            if (startConnection) {
                conToClose.start();
            }
            sessionToUse = sessionToClose;
        }
        if (logger.isDebugEnabled()) {
            logger.debug("Executing callback on JMS Session: " + sessionToUse);
        }
        return action.doInJms(sessionToUse);
    }
    catch (JMSException ex) {
        throw convertJmsAccessException(ex);
    }
    finally {
        JmsUtils.closeSession(sessionToClose);
        ConnectionFactoryUtils.releaseConnection(conToClose, getConnectionFactory(), startConnection); // the connection is stopped here
    }
}
ConnectionFactoryUtils.releaseConnection(...):
public static void releaseConnection(Connection con, ConnectionFactory cf, boolean started) {
    if (con == null) {
        return;
    }
    if (started && cf instanceof SmartConnectionFactory && ((SmartConnectionFactory) cf).shouldStop(con)) {
        try {
            con.stop(); // connection was stopped here
        }
        catch (Throwable ex) {
            logger.debug("Could not stop JMS Connection before closing it", ex);
        }
    }
    try {
        con.close();
    }
    catch (Throwable ex) {
        logger.debug("Could not close JMS Connection", ex);
    }
}
3. Validation with Spring Documentation
The Spring JMS documentation advised to use pooled connections, so we made sure we did.
Our Java program obtains the JMS connection factories from WLS JMS and MQ JMS (LDAP) providers and decorates them with SingleConnectionFactory and CachingConnectionFactory in the respective test cases.
This is what we observed during testing:
a) SingleConnectionFactory - Connection was stopped (Consumer/Session were closed as well).
b) CachingConnectionFactory - Connection was also stopped (although Consumer/Session were cached and not closed)
4. Questions:
a) Has anybody hit the same issue as us?
b) Would you consider this as a defect of Spring JMS for the use case of Non-Durable Subscriptions?
c) We are considering customizing a CachingConnectionFactory that won't stop the connection (a sketch of what we mean follows the note below). Any downsides?
Note: We are aware that Async MessageListeners like DMLC/SMLC and Sync Durable Topic Subscribers using JMSTemplate would not have this issue. We just wish to clarify for Sync Non-Durable Topic Subscribers using JMSTemplate.
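To make (c) concrete, this is roughly the kind of wrapper we have in mind. It is only a sketch and assumes Spring's DelegatingConnectionFactory, whose shouldStopConnections flag feeds the SmartConnectionFactory.shouldStop(...) check seen in the releaseConnection source above; the targetConnectionFactory variable is illustrative:
// Sketch: wrap the caching factory so releaseConnection(...) skips the con.stop() branch.
// DelegatingConnectionFactory implements SmartConnectionFactory; with
// shouldStopConnections=false its shouldStop(...) returns false.
CachingConnectionFactory cachingFactory = new CachingConnectionFactory(targetConnectionFactory);

DelegatingConnectionFactory nonStoppingFactory = new DelegatingConnectionFactory();
nonStoppingFactory.setTargetConnectionFactory(cachingFactory);
nonStoppingFactory.setShouldStopConnections(false);

JmsTemplate jmsTemplate = new JmsTemplate(nonStoppingFactory);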
Would greatly appreciate any comments and thoughts.
Thanks!
Victor

Retry to establish a JMS connection while ActiveMQ broker is not available

Here is my scenario. I have a few ActiveMQ (JBoss A-MQ) producers and consumers installed as services. On a server restart, what is the best practice for handling the situation where a producer or consumer service starts before the ActiveMQ broker service? In that case the producer/consumer cannot establish a connection and keeps hanging, even after the broker service starts.
Here's my code snippet for connection creation:
try {
    connection = connectionFactory.createConnection();
    connection.start();
    LOGGER.info(STARTED_CONNECTION_WITH_THE_DESTINATION + destinationName);
    session = createSession();
    destination = session.createQueue(destinationName);
    LOGGER.info(CREATED_QUEUE_IN_DESTINATION + destinationName);
    if (isImageProcAgent) {
        consumer = createConsumer();
        LOGGER.info(CONSUMER_HAS_BEEN_INITIALIZED);
    } else {
        producer = session.createProducer(destination);
        LOGGER.info(PRODUCER_HAS_BEEN_INITIALIZE);
    }
} catch (MessagingException e) {
    LOGGER.error(e);
} catch (JMSException e) {
    LOGGER.error(e);
}
I'm new to JMS, so I'd appreciate your support.
This can be achieved by configuring the failover transport, as this document explains.
Note that the failover: prefix belongs in the broker URL used by the ConnectionFactory, not in the destination name, so with the code snippet above the change is made where connectionFactory is created rather than in the createQueue/createProducer calls.
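A minimal sketch, assuming an ActiveMQConnectionFactory (from org.apache.activemq) and a local broker; the host, port, and retry parameters are illustrative:
// The failover transport keeps retrying until the broker is reachable, so the
// service can start before the broker and connect once the broker comes up.
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(
        "failover:(tcp://localhost:61616)?initialReconnectDelay=1000&maxReconnectAttempts=-1");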
