I have a test application (Spring Boot 2.7.8) that uses ActiveMQ Artemis 2.27.1 as a messaging system. I have a 6-node cluster split into 3 live/backup pairs, load balanced using ON_DEMAND with a redistribution delay of 2000.
The application creates a connection factory specifying all 3 live nodes, using createConnectionFactoryWithHA so the factory is HA-aware.
I have a generator class that publishes messages to a single queue. There is a consumer of that queue which replicates this message to 3 different queues. I am aware of topics and wish to move there eventually, but I am modeling an existing solution which does this sort of thing now.
Testing shows that when I publish a message it is consumed and republished to the other 3 queues, but only 2 of those are consumed from, despite all of them having listeners. Checking the queues after execution shows that messages were sent to the remaining queue. This is consistent over several runs: the same queue is never consumed from while I am generating 'new' events.
If I disable the initial generation of new messages and just rerun, the missing queue is then drained by its listener.
This feels like the queue ends up with its publisher on one node and its consumer on another when the connections are made, and the redistribution is not happening. I am not sure how to prove this, or why, if the publishing node has no consumers, it is not redistributing to the node that does.
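For reference, redistribution is controlled per address in broker.xml; a minimal sketch of the setting described above (the catch-all match of # is an assumption about the actual configuration):
<address-settings>
   <address-setting match="#">
      <redistribution-delay>2000</redistribution-delay>
   </address-setting>
</address-settings>
One way to confirm where the consumers actually sit is to run the artemis queue stat CLI command against each live node; it lists per-queue consumer and message counts, which would show a queue holding messages on a node with zero consumers.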
Connection factory bean
@Bean
public ActiveMQConnectionFactory jmsConnectionFactory() throws Exception {
    HashMap<String, Object> map1 = new HashMap<>();
    map1.put(TransportConstants.HOST_PROP_NAME, "192.168.0.10");
    map1.put(TransportConstants.PORT_PROP_NAME, "61616");
    HashMap<String, Object> map2 = new HashMap<>();
    map2.put(TransportConstants.HOST_PROP_NAME, "192.168.0.11");
    map2.put(TransportConstants.PORT_PROP_NAME, "61617");
    HashMap<String, Object> map3 = new HashMap<>();
    map3.put(TransportConstants.HOST_PROP_NAME, "192.168.0.12");
    map3.put(TransportConstants.PORT_PROP_NAME, "61618");
    TransportConfiguration server1 = new TransportConfiguration(NettyConnectorFactory.class.getName(), map1);
    TransportConfiguration server2 = new TransportConfiguration(NettyConnectorFactory.class.getName(), map2);
    TransportConfiguration server3 = new TransportConfiguration(NettyConnectorFactory.class.getName(), map3);
    ActiveMQConnectionFactory connectionFactory = ActiveMQJMSClient.createConnectionFactoryWithHA(JMSFactoryType.CF, server1, server2, server3);
    connectionFactory.setPassword(brokerPassword);
    connectionFactory.setUser(brokerUsername);
    return connectionFactory;
}
Listener factory bean
@Bean
public DefaultJmsListenerContainerFactory jmsQueueListenerContainerFactory() throws Exception {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(jmsConnectionFactory());
    //factory.setConcurrency("4-10");
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setSessionTransacted(true);
    return factory;
}
This handler listens to the initially published queue and splits the message to the three downstream queues:
@Slf4j
@Component
@RequiredArgsConstructor
public class TransactionManagerListener {

    private final JmsTemplate jmsTemplate;

    /**
     * Handle the ItemStatsUpdate event
     *
     * @param data - Event details wrapper object
     * @throws RuntimeException that triggers a retry for that item following the backoff rules in the retryable
     */
    @JmsListener(destination = "NewItem", containerFactory = "jmsQueueListenerContainerFactory")
    public void podA(Session session, Message message, String data) throws RuntimeException {
        log.info("TML {}!", data);
        sendItemOn(data);
    }

    private void sendItemOn(String data) {
        jmsTemplate.convertAndSend("Stash", data);
        jmsTemplate.convertAndSend("PCE", data);
        jmsTemplate.convertAndSend("ACD", data);
    }
}
Extract from broker.xml. All nodes are slightly different in order to hook up the different live servers and their backups.
<connectors>
<connector name="live1-connector">tcp://192.168.0.10:61616</connector>
<connector name="live2-connector">tcp://192.168.0.11:61617</connector>
<connector name="live3-connector">tcp://192.168.0.12:61618</connector>
<connector name="back1-connector">tcp://192.168.0.13:61619</connector>
<connector name="back2-connector">tcp://192.168.0.10:61620</connector>
<connector name="back3-connector">tcp://192.168.0.11:61621</connector>
</connectors>
<cluster-user>my-cluster-user</cluster-user>
<cluster-password>my-cluster-password</cluster-password>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>live2-connector</connector-ref>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<static-connectors>
<connector-ref>live1-connector</connector-ref>
<connector-ref>live3-connector</connector-ref>
<connector-ref>back2-connector</connector-ref>
<!--
<connector-ref>back1-connector</connector-ref>
<connector-ref>back3-connector</connector-ref>
-->
</static-connectors>
</cluster-connection>
</cluster-connections>
<ha-policy>
<replication>
<master>
<group-name>gloucester</group-name>
<check-for-live-server>true</check-for-live-server>
</master>
</replication>
</ha-policy>
As you can see from the commented-out concurrency setting, I have tried to tweak the threads and consumers available in the listener factory, but it made no difference.
This came down to network configuration. It is necessary to run the cluster on the local swarm network but provide ingress connectors for the JMS client.
<connectors>
   <!-- ingress connectors for external JMS clients (renamed here so the names don't collide with the cluster connectors below) -->
   <connector name="live1-ingress">tcp://192.168.0.10:61616</connector>
   <connector name="live2-ingress">tcp://192.168.0.11:61617</connector>
   <connector name="live3-ingress">tcp://192.168.0.12:61618</connector>
   <connector name="back1-ingress">tcp://192.168.0.13:61619</connector>
   <connector name="back2-ingress">tcp://192.168.0.10:61620</connector>
   <connector name="back3-ingress">tcp://192.168.0.11:61621</connector>
   <!-- cluster connectors on the internal swarm network -->
   <connector name="live1-connector">tcp://live1:61616</connector>
   <connector name="live2-connector">tcp://live2:61616</connector>
   <connector name="live3-connector">tcp://live3:61616</connector>
   <connector name="back1-connector">tcp://back1:61616</connector>
   <connector name="back2-connector">tcp://back2:61616</connector>
   <connector name="back3-connector">tcp://back3:61616</connector>
</connectors>
<cluster-user>my-cluster-user</cluster-user>
<cluster-password>my-cluster-password</cluster-password>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>live3-connector</connector-ref>
<static-connectors>
<connector-ref>back3-connector</connector-ref>
<connector-ref>live1-connector</connector-ref>
<connector-ref>back1-connector</connector-ref>
<connector-ref>live2-connector</connector-ref>
<connector-ref>back2-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
Related
I'm working with ActiveMQ Artemis 2.17 and Spring Boot 2.5.7.
I'm publishing messages to a topic and a queue and consuming them. All is done through JMS.
All queues (anycast or multicast) are durable. My topic (multicast address) has 2 durable queues in order to have 2 separate consumers.
For my topic, the 2 consumers use durable and shared subscriptions (JMS 2.0).
All processing is transactional, managed through the Atomikos transaction manager (I need it for a two-phase commit with the database).
I have a problem with the redelivery policy and DLQ. When I throw an exception during the processing of the message, the redelivery policy applies correctly for the queue (anycast queue) and a DLQ is created with the message. However, for the topic (multicast queue), the redelivery policy does not apply and the message is not sent into a DLQ.
Here is my ActiveMQ Artemis broker configuration:
<?xml version='1.0'?>
<!-- Apache License header omitted -->
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>0.0.0.0</name>
<!-- Codec and key used to encode the passwords -->
<!-- TODO : set master-password configuration into the Java code -->
<password-codec>org.apache.activemq.artemis.utils.DefaultSensitiveStringCodec;key=UBNTd0dS9w6f8HDIyGW9
</password-codec>
<!-- Configure the persistence into a database (postgresql) -->
<persistence-enabled>true</persistence-enabled>
<store>
<database-store>
<bindings-table-name>BINDINGS_TABLE</bindings-table-name>
<message-table-name>MESSAGE_TABLE</message-table-name>
<page-store-table-name>PAGE_TABLE</page-store-table-name>
<large-message-table-name>LARGE_MESSAGES_TABLE</large-message-table-name>
<node-manager-store-table-name>NODE_MANAGER_TABLE</node-manager-store-table-name>
<jdbc-lock-renew-period>2000</jdbc-lock-renew-period>
<jdbc-lock-expiration>20000</jdbc-lock-expiration>
<jdbc-journal-sync-period>5</jdbc-journal-sync-period>
<!-- Configure a connection pool -->
<data-source-properties>
<data-source-property key="driverClassName" value="org.postgresql.Driver"/>
<data-source-property key="url" value="jdbc:postgresql://localhost/artemis"/>
<data-source-property key="username" value="postgres"/>
<data-source-property key="password" value="ENC(-3eddbe9664c85ec7ed63588b000486cb)"/>
<data-source-property key="poolPreparedStatements" value="true"/>
<data-source-property key="initialSize" value="2"/>
<data-source-property key="minIdle" value="1"/>
</data-source-properties>
</database-store>
</store>
<!-- Configure the addresses, queues and topics default behaviour -->
<!-- See: https://activemq.apache.org/components/artemis/documentation/latest/address-model.html -->
<address-settings>
<address-setting match="#">
<dead-letter-address>DLA</dead-letter-address>
<auto-create-dead-letter-resources>true</auto-create-dead-letter-resources>
<dead-letter-queue-prefix/>
<dead-letter-queue-suffix>.DLQ</dead-letter-queue-suffix>
<expiry-address>ExpiryQueue</expiry-address>
<auto-create-expiry-resources>false</auto-create-expiry-resources>
<expiry-queue-prefix/>
<expiry-queue-suffix>.EXP</expiry-queue-suffix>
<expiry-delay>-1</expiry-delay>
<max-delivery-attempts>5</max-delivery-attempts>
<redelivery-delay>250</redelivery-delay>
<redelivery-delay-multiplier>2.0</redelivery-delay-multiplier>
<redelivery-collision-avoidance-factor>0.5</redelivery-collision-avoidance-factor>
<max-redelivery-delay>10000</max-redelivery-delay>
<max-size-bytes>100000</max-size-bytes>
<page-size-bytes>20000</page-size-bytes>
<page-max-cache-size>5</page-max-cache-size>
<max-size-bytes-reject-threshold>-1</max-size-bytes-reject-threshold>
<address-full-policy>PAGE</address-full-policy>
<message-counter-history-day-limit>0</message-counter-history-day-limit>
<default-last-value-queue>false</default-last-value-queue>
<default-non-destructive>false</default-non-destructive>
<default-exclusive-queue>false</default-exclusive-queue>
<default-consumers-before-dispatch>0</default-consumers-before-dispatch>
<default-delay-before-dispatch>-1</default-delay-before-dispatch>
<redistribution-delay>0</redistribution-delay>
<send-to-dla-on-no-route>true</send-to-dla-on-no-route>
<slow-consumer-threshold>-1</slow-consumer-threshold>
<slow-consumer-policy>NOTIFY</slow-consumer-policy>
<slow-consumer-check-period>5</slow-consumer-check-period>
<!-- We disable the automatic creation of queue or topic -->
<auto-create-queues>false</auto-create-queues>
<auto-delete-queues>true</auto-delete-queues>
<auto-delete-created-queues>false</auto-delete-created-queues>
<auto-delete-queues-delay>30000</auto-delete-queues-delay>
<auto-delete-queues-message-count>0</auto-delete-queues-message-count>
<config-delete-queues>OFF</config-delete-queues>
<!-- We disable the automatic creation of address -->
<auto-create-addresses>false</auto-create-addresses>
<auto-delete-addresses>true</auto-delete-addresses>
<auto-delete-addresses-delay>30000</auto-delete-addresses-delay>
<config-delete-addresses>OFF</config-delete-addresses>
<management-browse-page-size>200</management-browse-page-size>
<default-purge-on-no-consumers>false</default-purge-on-no-consumers>
<default-max-consumers>-1</default-max-consumers>
<default-queue-routing-type>ANYCAST</default-queue-routing-type>
<default-address-routing-type>ANYCAST</default-address-routing-type>
<default-ring-size>-1</default-ring-size>
<retroactive-message-count>0</retroactive-message-count>
<enable-metrics>true</enable-metrics>
<!-- We automatically force group rebalance and a dispatch pause during group rebalance -->
<default-group-rebalance>true</default-group-rebalance>
<default-group-rebalance-pause-dispatch>true</default-group-rebalance-pause-dispatch>
<default-group-buckets>1024</default-group-buckets>
</address-setting>
</address-settings>
<!-- Define the protocols accepted -->
<!-- See: https://activemq.apache.org/components/artemis/documentation/latest/protocols-interoperability.html -->
<acceptors>
<!-- Acceptor for only CORE protocol -->
<!-- We enable the cache destination as recommended into the documentation. See: https://activemq.apache.org/components/artemis/documentation/latest/using-jms.html -->
<acceptor name="artemis">
   tcp://0.0.0.0:61616?protocols=CORE;tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;useEpoll=true;cacheDestinations=true
</acceptor>
</acceptors>
<!-- Define how to connect to another broker -->
<!-- TODO: is this actually needed? -->
<connectors>
<connector name="netty-connector">tcp://localhost:61616</connector>
<connector name="broker1-connector">tcp://localhost:61616</connector>
<connector name="broker2-connector">tcp://localhost:61617</connector>
</connectors>
<!-- Configure the High-Availability and broker cluster for the high-availability -->
<ha-policy>
<shared-store>
<master>
<failover-on-shutdown>true</failover-on-shutdown>
</master>
</shared-store>
</ha-policy>
<cluster-connections>
<cluster-connection name="gerico-cluster">
<connector-ref>netty-connector</connector-ref>
<static-connectors>
<connector-ref>broker1-connector</connector-ref>
<connector-ref>broker2-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<!-- <cluster-user>cluster_user</cluster-user>-->
<!-- <cluster-password>cluster_user_password</cluster-password>-->
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>72000</page-sync-timeout>
<!-- the system will enter into page mode once you hit this limit.
This is an estimate in bytes of how much the messages are using in memory
The system will use half of the available memory (-Xmx) by default for the global-max-size.
You may specify a different value here if you need to customize it to your needs.
<global-max-size>100Mb</global-max-size>
-->
<!-- Security configuration -->
<security-enabled>false</security-enabled>
<!-- Addresses and queues configuration -->
<!-- !!! DON'T FORGET TO UPDATE 'slave-broker.xml' FILE !!! -->
<addresses>
<address name="topic.test_rde">
<multicast>
<queue name="rde_receiver_1">
<durable>true</durable>
</queue>
<queue name="rde_receiver_2">
<durable>true</durable>
</queue>
</multicast>
</address>
<address name="queue.test_rde">
<anycast>
<queue name="queue.test_rde">
<durable>true</durable>
</queue>
</anycast>
</address>
</addresses>
</core>
</configuration>
The JMS configuration in the Spring Boot application is the following:
@Bean
public DynamicDestinationResolver destinationResolver() {
    return new DynamicDestinationResolver() {
        @Override
        public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException {
            if (destinationName.startsWith("topic.")) {
                pubSubDomain = true;
            } else {
                pubSubDomain = false;
            }
            return super.resolveDestinationName(session, destinationName, pubSubDomain);
        }
    };
}
@Bean
public JmsListenerContainerFactory<?> queueConnectionFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer,
        JmsErrorHandler jmsErrorHandler) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(false);
    factory.setSessionTransacted(true);
    factory.setErrorHandler(jmsErrorHandler);
    return factory;
}

@Bean
public JmsListenerContainerFactory<?> topicConnectionFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer,
        JmsErrorHandler jmsErrorHandler) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    // This provides all of Boot's defaults to this factory, including the message converter
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(true);
    factory.setSessionTransacted(true);
    factory.setSubscriptionDurable(true);
    factory.setSubscriptionShared(true);
    factory.setErrorHandler(jmsErrorHandler);
    return factory;
}
The message publisher:
@GetMapping("api/send")
public void sendDataToJms() throws InterruptedException {
    OrderRest currentService = this.applicationContext.getBean(OrderRest.class);
    for (Long i = 0L; i < 1L; i++) {
        currentService.sendTopicMessage(i);
        currentService.sendQueueMessage(i);
        Thread.sleep(200L);
    }
}
@Transactional
public void sendTopicMessage(Long id) {
    Order myMessage = new Order("--" + id.toString() + "--", new Date());
    jmsTemplate.convertAndSend("topic.test_rde", myMessage);
}

@Transactional
public void sendQueueMessage(Long id) {
    Order myMessage = new Order("--" + id.toString() + "--", new Date());
    jmsTemplate.convertAndSend("queue.test_rde", myMessage);
}
And the listeners:
@Transactional
@JmsListener(destination = "topic.test_rde", containerFactory = "topicConnectionFactory", subscription = "rde_receiver_1")
public void receiveMessage_rde_1(@Payload Order order, @Headers MessageHeaders headers, Message message, Session session) {
    log.info("---> Consumer1 - rde_receiver_1 - " + order.getContent());
    throw new ValidationException("Voluntary exception", "entity", List.of(), List.of());
}

@Transactional
@JmsListener(destination = "queue.test_rde", containerFactory = "queueConnectionFactory")
public void receiveMessage_rde_queue(@Payload Order order, @Headers MessageHeaders headers, Message message, Session session) {
    log.info("---> Consumer1 - rde_receiver_queue - " + order.getContent());
    throw new ValidationException("Voluntary exception", "entity", List.of(), List.of());
}
Is it normal that the redelivery policy and the DLQ mechanism only apply to a queue (anycast queue)?
Is it possible to also apply them to a topic (multicast queue) with shared durable subscriptions?
If not, how can I get topic behaviour but with the redelivery and DLQ mechanisms? Should I use the divert feature of ActiveMQ Artemis?
Thanks a lot for your help.
Regards.
I have found the problem. It comes from Atomikos, which does not support JMS 2.0, only JMS 1.1.
So it's not possible to have a shared durable subscription on a topic that supports both the two-phase commit and the redelivery policy at the same time.
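For completeness, the divert approach mentioned in the question would look roughly like this in broker.xml (a sketch; the forwarding addresses are hypothetical and would each need to be defined as anycast addresses with their own queue). Two non-exclusive diverts copy each message from the multicast address to one anycast queue per consumer, and those queues then get the normal per-queue redelivery and DLQ behaviour:
<diverts>
   <divert name="rde-divert-1">
      <address>topic.test_rde</address>
      <forwarding-address>queue.rde_receiver_1</forwarding-address>
      <exclusive>false</exclusive>
   </divert>
   <divert name="rde-divert-2">
      <address>topic.test_rde</address>
      <forwarding-address>queue.rde_receiver_2</forwarding-address>
      <exclusive>false</exclusive>
   </divert>
</diverts>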
I have a Redis database with an RLEC (RedisLabs Enterprise Cluster) UI which has been set up for SSL connections.
I have a java app which is able to connect to the redis database using Jedis.
This works:
Jedis jedis = new Jedis(redisInfo.getHost(), redisInfo.getPort(), useSsl);
// make the connection
jedis.connect();
// authorize with our password
jedis.auth(redisInfo.getPassword());
Env vars:
"-Djavax.net.ssl.keyStoreType=PKCS12 -Djavax.net.ssl.keyStorePassword=iloveredis -Djavax.net.ssl.keyStore=$PWD/META-INF/clientKeyStore.p12 -Djavax.net.ssl.trustStoreType=JKS -Djavax.net.ssl.trustStorePassword=iloveredis -Djavax.net.ssl.trustStore=$PWD/META-INF/clientTrustStore.jks"
I also have a Spring Boot app in which I'm trying to connect to the Redis db using JedisConnectionFactory, and I'm not able to. (Using the same app, I am able to connect to a Redis db which does not have SSL enabled).
In my pom.xml:
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-redis</artifactId>
<version>2.1.4.RELEASE</version>
</dependency>
<dependency>
<groupId>redis.clients</groupId>
<artifactId>jedis</artifactId>
<version>2.9.0</version>
</dependency>
In my redis configuration file:
@Configuration
@EnableRedisRepositories
public class RedisConfig {

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration redisConfig = new RedisStandaloneConfiguration();
        redisConfig.setHostName(redisInfo.getHost());
        redisConfig.setPort(redisInfo.getPort());
        redisConfig.setPassword(RedisPassword.of(redisInfo.getPassword()));
        boolean useSsl = env.getProperty("spring.redis.ssl", Boolean.class);
        JedisClientConfiguration jedisConfig;
        if (useSsl) {
            jedisConfig = JedisClientConfiguration
                    .builder()
                    .useSsl()
                    .build();
        } else {
            jedisConfig = JedisClientConfiguration
                    .builder()
                    .build();
        }
        return new JedisConnectionFactory(redisConfig, jedisConfig);
    }

    @Bean
    public RedisTemplate<?, ?> redisTemplateJedis() {
        final RedisTemplate<byte[], byte[]> template = new RedisTemplate<>();
        template.setConnectionFactory(jedisConnectionFactory());
        template.setValueSerializer(new Jackson2JsonRedisSerializer<Object>(Object.class));
        return template;
    }
}
This is the error I’m getting:
org.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisDataException: ERR unencrypted connection is prohibited
One other point: for testing purposes, both the server and the app are using self-signed certificates (which work with Jedis).
I do not know how to configure JedisConnectionFactory so that I don't get this error.
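For what it's worth, JedisClientConfiguration can also be handed a custom SSLSocketFactory, which is one way to trust a self-signed server certificate without the system properties. A sketch, assuming the same clientTrustStore.jks as above (the helper method is hypothetical, not part of the question's code):
// hypothetical helper inside RedisConfig: builds an SSLSocketFactory that trusts the self-signed cert
private static SSLSocketFactory sslSocketFactory() throws Exception {
    KeyStore trustStore = KeyStore.getInstance("JKS");
    try (InputStream in = new FileInputStream("META-INF/clientTrustStore.jks")) {
        trustStore.load(in, "iloveredis".toCharArray());
    }
    TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
    tmf.init(trustStore);
    SSLContext ctx = SSLContext.getInstance("TLS");
    ctx.init(null, tmf.getTrustManagers(), null);
    return ctx.getSocketFactory();
}

// and in jedisConnectionFactory():
jedisConfig = JedisClientConfiguration
        .builder()
        .useSsl()
        .sslSocketFactory(sslSocketFactory())
        .build();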
Edit: to recap, I could connect to Redis with SSL enabled using the Jedis library directly, but not through Spring's JedisConnectionFactory.
I was trying this in Pivotal Cloud Foundry (PCF).
I wrote to Mark Paluch, author of spring-data-redis, and he suggested I turn off auto-reconfiguration to get it to work in PCF.
I found this page on turning off auto-reconfiguration:
https://docs.cloudfoundry.org/buildpacks/java/configuring-service-connections/spring-service-bindings.html#manual
Cloud Foundry will automatically create a RedisConnectionFactory bean for you, so my JedisConnectionFactory was not getting used.
I had to turn off auto-reconfiguration. Or rather turn on manual configuration.
My JedisConnectionFactory bean (with SSL enabled) then started getting instantiated (along with the cloud service connector’s RedisConnectionFactory bean).
And I had to set my JedisConnectionFactory bean to Primary as there were now two connection factory beans.
I was also getting exceptions about unexpected end of stream.
I had to turn on usePooling in JedisClientConfiguration.
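Putting those pieces together, the factory bean ended up looking roughly like this (a sketch under the assumptions above, not the exact code):
@Bean
@Primary // two RedisConnectionFactory beans now exist, so mark ours as primary
JedisConnectionFactory jedisConnectionFactory() {
    RedisStandaloneConfiguration redisConfig = new RedisStandaloneConfiguration();
    redisConfig.setHostName(redisInfo.getHost());
    redisConfig.setPort(redisInfo.getPort());
    redisConfig.setPassword(RedisPassword.of(redisInfo.getPassword()));
    // usePooling() avoided the "unexpected end of stream" exceptions
    JedisClientConfiguration jedisConfig = JedisClientConfiguration
            .builder()
            .useSsl()
            .and()
            .usePooling()
            .build();
    return new JedisConnectionFactory(redisConfig, jedisConfig);
}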
This is where I posted the issue to JIRA (it has since moved to GitHub):
https://github.com/spring-projects/spring-data-redis/issues/1542
We have a WebSphere JMS queue and QueueConnectionFactory with IBM MQ as the provider; we cannot connect to IBM MQ directly.
I have the configuration below. I have a jmsConnectionFactory bean that creates the factory using InitialContext as expected. THE_QUEUE is the JNDI name of my queue.
<int-jms:inbound-channel-adapter channel="transformedChannel" connection-factory="jmsConnectionFactory"
destination-name="THE_QUEUE">
<int:poller fixed-delay="500" />
</int-jms:inbound-channel-adapter>
It is failing with this error:
Caused by: com.ibm.msg.client.jms.DetailedInvalidDestinationException:
JMSWMQ2008: Failed to open MQ queue 'THE_QUEUE'. JMS attempted to
perform an MQOPEN, but WebSphere MQ reported an error. Use the linked
exception to determine the cause of this error. Check that the
specified queue and queue manager are defined correctly.
My outbound channel configuration
<int-jms:outbound-channel-adapter id="respTopic"
connection-factory="jmsConnectionFactory"
destination-name="THE_REPLYQ" channel="process-channel"/>
If I use plain Java code it works:
creating a MessageProducer from session.createProducer and sending a message,
creating a MessageConsumer with queueSession.createConsumer(outQueue) and calling receive().
Please can you help with how to create JMS inbound and outbound adapters for these queues using Spring Integration and process the messages?
EDIT:
@Bean
public ConnectionFactory jmsConnectionFactory() {
    ConnectionFactory connectionFactory = null;
    Context ctx = null;
    Properties p = new Properties();
    p.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.websphere.naming.WsnInitialContextFactory");
    p.put(Context.PROVIDER_URL, "iiop://hostname.sl");
    p.put("com.ibm.CORBA.ORBInit", "com.ibm.ws.sib.client.ORB");
    try {
        ctx = new InitialContext(p);
        if (null != ctx)
            System.out.println("Got naming context");
        connectionFactory = (QueueConnectionFactory) ctx.lookup("BDQCF");
    }...

@Bean
public JmsListenerContainerFactory<?> mydbFactory(ConnectionFactory jmsConnectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, jmsConnectionFactory);
    return factory;
}
The code and configuration work for a queue that uses the WebSphere default JMS provider.
EDIT 2: code added after comment
<int:channel id="jmsInputChannel" />
<jee:jndi-lookup id="naarconnectionFactory" jndi-name="MQ_QUEUE" resource-ref="false">
<jee:environment>
java.naming.factory.initial=com.ibm.websphere.naming.WsnInitialContextFactory
java.naming.provider.url=iiop://host.ws
</jee:environment>
</jee:jndi-lookup>
<int-jms:inbound-channel-adapter id="jmsIn" channel="jmsInputChannel"
connection-factory="jmsNAARConnectionFactory" destination-name="naarconnectionFactory">
<int:poller fixed-delay="500" />
</int-jms:inbound-channel-adapter>
You can't just use the JNDI name there - you must perform a JNDI lookup to resolve it to a Destination - see the Spring JMS Documentation.
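A sketch of what that looks like applied to the EDIT 2 configuration (the bean id theQueue is illustrative): look the Destination up via jee:jndi-lookup and pass it to the adapter's destination attribute instead of destination-name.
<jee:jndi-lookup id="theQueue" jndi-name="THE_QUEUE" resource-ref="false">
   <jee:environment>
      java.naming.factory.initial=com.ibm.websphere.naming.WsnInitialContextFactory
      java.naming.provider.url=iiop://hostname.sl
   </jee:environment>
</jee:jndi-lookup>
<int-jms:inbound-channel-adapter channel="transformedChannel"
      connection-factory="jmsConnectionFactory" destination="theQueue">
   <int:poller fixed-delay="500" />
</int-jms:inbound-channel-adapter>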
I'm using Spring Boot and ActiveMQ. I want to send and receive messages from a topic. This is working well. My code looks like this:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = {
        JmsSpike.TestListener1.class,
        JmsSpike.TestListener2.class,
        JmsSpike.Config.class
})
@TestPropertySource(properties = {
        "spring.activemq.broker-url: tcp://localhost:61616",
        "spring.activemq.password: admin",
        "spring.activemq.user: admin",
        "spring.jms.pub-sub-domain: true", // queue vs. topic
})
@EnableJms
@EnableAutoConfiguration
public class JmsSpike {

    @Autowired
    private JmsTemplate jmsTemplate;

    @Test
    public void sendMessage() throws Exception {
        sendMessageInThread();
        Thread.sleep(10000);
    }

    private void sendMessageInThread() {
        new Thread() {
            public void run() {
                jmsTemplate.convertAndSend("asx2ras", "I'm a test");
            }
        }.start();
    }

    @TestComponent
    protected static class TestListener1 {

        @JmsListener(destination = "asx2ras")
        public void receiveMessage(String message) {
            System.out.println("****************** 1 *******************");
            System.out.println("Hey 1! I got a message: " + message);
            System.out.println("****************** 1 *******************");
        }
    }

    @TestComponent
    protected static class TestListener2 {

        @JmsListener(destination = "asx2ras")
        public void receiveMessage(String message) {
            throw new RuntimeException("Nope");
        }
    }

    @Configuration
    protected static class Config {

        @Bean
        public RedeliveryPolicy redeliveryPolicy() {
            RedeliveryPolicy topicPolicy = new RedeliveryPolicy();
            topicPolicy.setMaximumRedeliveries(1);
            return topicPolicy;
        }

        @Bean
        public ConnectionFactory connectionFactory(@Value("${spring.activemq.user}") final String username,
                                                   @Value("${spring.activemq.password}") final String password,
                                                   @Value("${spring.activemq.broker-url}") final String brokerUrl) {
            ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(username, password, brokerUrl);
            cf.setRedeliveryPolicy(redeliveryPolicy());
            return cf;
        }
    }
}
- I can send a message
- I receive the message with both listeners
- One listener will always work and just print some console message
- The other listener will always throw an exception
I set the retry to "1", so the failing listener will be called one more time after the exception was thrown. However, after the retry, the message is not delivered to an error queue (or error topic). How can I send the message to an error queue so that I can call the failing listener again later?
Note that I only want to call the failing listener again, not all listeners on the topic. Is that possible?
EDIT
Here's my activemq.xml (just the broker tag):
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
According to the official documentation, you can override the broker configurations on the client side:
The broker transmits the default delivery policy that he prefers to a client connection in his BrokerInfo command packet. But the client can override the policy settings by using the ActiveMQConnection.getRedeliveryPolicy() method:
RedeliveryPolicy policy = connection.getRedeliveryPolicy();
policy.setInitialRedeliveryDelay(500);
policy.setBackOffMultiplier(2);
policy.setUseExponentialBackOff(true);
policy.setMaximumRedeliveries(2);
So the way you have configured the redelivery policy seems OK.
The only issue I see is that when you create a new instance of RedeliveryPolicy and set only a single field (topicPolicy.setMaximumRedeliveries(1)), all of the other primitive fields are assigned default values. You should probably set the maximum redeliveries on the existing instance of the redelivery policy instead:
RedeliveryPolicy policy = cf.getRedeliveryPolicy();
policy.setMaximumRedeliveries(1);
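Applied to the Config class from the question, that might look like this (a sketch of the suggestion above, not tested):
@Bean
public ConnectionFactory connectionFactory(@Value("${spring.activemq.user}") final String username,
                                           @Value("${spring.activemq.password}") final String password,
                                           @Value("${spring.activemq.broker-url}") final String brokerUrl) {
    ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(username, password, brokerUrl);
    // mutate the factory's existing policy rather than replacing it,
    // so every other redelivery field keeps its default value
    RedeliveryPolicy policy = cf.getRedeliveryPolicy();
    policy.setMaximumRedeliveries(1);
    return cf;
}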
Edit
Also, make sure that the @JmsListener container is not using CLIENT_ACKNOWLEDGE. According to this thread, messages will not get redelivered when CLIENT_ACKNOWLEDGE is used.
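With Spring Boot auto-configuration, the acknowledge mode of the listener containers can be switched through a property (shown here with the value that maps to JMS auto-acknowledge):
spring.jms.listener.acknowledge-mode=auto
As for routing the poison message to an error destination per subscription, ActiveMQ's destination policy supports an individual dead letter strategy; a sketch of a policyEntry that could be merged into the destinationPolicy in the activemq.xml above (the DLQ. prefix is illustrative):
<policyEntry topic=">">
   <deadLetterStrategy>
      <!-- give each destination its own DLQ instead of the shared ActiveMQ.DLQ,
           and use a queue even for topic messages so they can be replayed later -->
      <individualDeadLetterStrategy queuePrefix="DLQ." useQueueForTopicMessages="true"/>
   </deadLetterStrategy>
</policyEntry>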
I have a standalone HornetQ cluster with 2 nodes, and Mule ESB is configured to connect to the HornetQ cluster discovery group.
I wrote a Mule callback listener which consumes messages from the HornetQ cluster, but the 2nd node in the cluster behaves like a backup or failover server. When I send multiple messages they are always delivered to node1 only; when I stop node1, node2 picks up the messages. JConsole always shows the number of consumers as 0 for node2, whereas node1 has 6 consumers, as defined in the Mule JMS connector's numberOfConsumers property shown in the configuration below.
<jms:connector name="hornetq-connector" username="guest"
maxRedelivery="5" password="guest" specification="1.1"
connectionFactory-ref="connectionFactory" numberOfConsumers="6" >
<spring:property name="retryPolicyTemplate" ref="ThreadingPolicyTemplate" />
</jms:connector>
I have only one consumer, which is defined in the Mule flow shown below.
<flow name="Flow2" doc:name="Flow2">
<jms:inbound-endpoint queue="InboundQueue"
connector-ref="hornetq-connector">
<jms:transaction action="ALWAYS_BEGIN" timeout="10000" />
</jms:inbound-endpoint>
<component class="com.test.Consumer" />
</flow>
The Mule callback Java class is added as a component in the flow above. Please find the code below.
public class Consumer implements org.mule.api.lifecycle.Callable {

    private static DateFormat df1 = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");

    @Override
    public Object onCall(MuleEventContext eventContext) throws Exception {
        System.out.println(df1.format(new Date().getTime()) + "Consumer:Message Received," + Thread.currentThread() + "," + eventContext.getMessageAsString());
        Thread.sleep(5000);
        System.out.println(df1.format(new Date().getTime()) + "Consumer: Message process complete, " + Thread.currentThread() + "," + eventContext.getMessageAsString());
        return "Success";
    }
}
How can I distribute this one consumer across all the HornetQ cluster nodes?