How can I see which messages were consumed by a consumer in ActiveMQ (JMS) using a JMX console such as JConsole?
You can browse the advisory topics ActiveMQ.Advisory.MessageConsumed.Topic.YourTopicName (or, for a queue, ActiveMQ.Advisory.MessageConsumed.Queue.YourQueueName) using JConsole or VisualVM. This relies on advisory messages (http://activemq.apache.org/advisory-message.html), which you need to enable in the broker config by adding this:
<destinationPolicy>
<policyMap>
<policyEntries>
<!-- http://activemq.apache.org/advisory-message.html -->
<policyEntry topic=">" advisoryForConsumed="true" />
<policyEntry queue=">" advisoryForConsumed="true" />
</policyEntries>
</policyMap>
</destinationPolicy>
Code to browse the advisory messages:
// Pick the advisory topic you want to watch (consumed, delivered or discarded):
Destination advisoryDestination = AdvisorySupport.getMessageConsumedAdvisoryTopic(destination);
// or: AdvisorySupport.getMessageDeliveredAdvisoryTopic(destination);
// or: AdvisorySupport.getMessageDiscardedAdvisoryTopic(destination);
MessageConsumer consumer = session.createConsumer(advisoryDestination);
consumer.setMessageListener(this);

public void onMessage(Message msg) {
    try {
        // getJMSMessageID() can throw JMSException, so it belongs inside the try block
        String messageId = msg.getJMSMessageID();
        String originalMessageId = msg.getStringProperty(AdvisorySupport.MSG_PROPERTY_MESSAGE_ID);
        if (msg instanceof ActiveMQMessage) {
            ActiveMQMessage aMsg = (ActiveMQMessage) msg;
            ConsumerInfo consumerInfo = (ConsumerInfo) aMsg.getDataStructure();
        }
    } catch (JMSException e) {
        log.error("Failed to process message: " + msg);
    }
}
I'm working with ActiveMQ Artemis 2.17 and Spring Boot 2.5.7.
I'm publishing messages to a topic and a queue and consuming them. Everything is done through JMS.
All queues (anycast or multicast) are durable. My topic (multicast address) has 2 durable queues in order to have 2 separate consumers.
For my topic, the 2 consumers use durable and shared subscriptions (JMS 2.0).
All processing is transactional, managed through the Atomikos transaction manager (I need it for a two-phase commit with the database).
I have a problem with the redelivery policy and DLQ. When I throw an exception during the processing of a message, the redelivery policy applies correctly for the queue (anycast queue) and a DLQ is created with the message. However, for the topic (multicast queue), the redelivery policy does not apply and the message is not sent to a DLQ.
Here is my ActiveMQ Artemis broker configuration:
<?xml version='1.0'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>0.0.0.0</name>
<!-- Codec and key used to encode the passwords -->
<!-- TODO : set master-password configuration into the Java code -->
<password-codec>org.apache.activemq.artemis.utils.DefaultSensitiveStringCodec;key=UBNTd0dS9w6f8HDIyGW9
</password-codec>
<!-- Configure the persistence into a database (postgresql) -->
<persistence-enabled>true</persistence-enabled>
<store>
<database-store>
<bindings-table-name>BINDINGS_TABLE</bindings-table-name>
<message-table-name>MESSAGE_TABLE</message-table-name>
<page-store-table-name>PAGE_TABLE</page-store-table-name>
<large-message-table-name>LARGE_MESSAGES_TABLE</large-message-table-name>
<node-manager-store-table-name>NODE_MANAGER_TABLE</node-manager-store-table-name>
<jdbc-lock-renew-period>2000</jdbc-lock-renew-period>
<jdbc-lock-expiration>20000</jdbc-lock-expiration>
<jdbc-journal-sync-period>5</jdbc-journal-sync-period>
<!-- Configure a connection pool -->
<data-source-properties>
<data-source-property key="driverClassName" value="org.postgresql.Driver"/>
<data-source-property key="url" value="jdbc:postgresql://localhost/artemis"/>
<data-source-property key="username" value="postgres"/>
<data-source-property key="password" value="ENC(-3eddbe9664c85ec7ed63588b000486cb)"/>
<data-source-property key="poolPreparedStatements" value="true"/>
<data-source-property key="initialSize" value="2"/>
<data-source-property key="minIdle" value="1"/>
</data-source-properties>
</database-store>
</store>
<!-- Configure the addresses, queues and topics default behaviour -->
<!-- See: https://activemq.apache.org/components/artemis/documentation/latest/address-model.html -->
<address-settings>
<address-setting match="#">
<dead-letter-address>DLA</dead-letter-address>
<auto-create-dead-letter-resources>true</auto-create-dead-letter-resources>
<dead-letter-queue-prefix/>
<dead-letter-queue-suffix>.DLQ</dead-letter-queue-suffix>
<expiry-address>ExpiryQueue</expiry-address>
<auto-create-expiry-resources>false</auto-create-expiry-resources>
<expiry-queue-prefix/>
<expiry-queue-suffix>.EXP</expiry-queue-suffix>
<expiry-delay>-1</expiry-delay>
<max-delivery-attempts>5</max-delivery-attempts>
<redelivery-delay>250</redelivery-delay>
<redelivery-delay-multiplier>2.0</redelivery-delay-multiplier>
<redelivery-collision-avoidance-factor>0.5</redelivery-collision-avoidance-factor>
<max-redelivery-delay>10000</max-redelivery-delay>
<max-size-bytes>100000</max-size-bytes>
<page-size-bytes>20000</page-size-bytes>
<page-max-cache-size>5</page-max-cache-size>
<max-size-bytes-reject-threshold>-1</max-size-bytes-reject-threshold>
<address-full-policy>PAGE</address-full-policy>
<message-counter-history-day-limit>0</message-counter-history-day-limit>
<default-last-value-queue>false</default-last-value-queue>
<default-non-destructive>false</default-non-destructive>
<default-exclusive-queue>false</default-exclusive-queue>
<default-consumers-before-dispatch>0</default-consumers-before-dispatch>
<default-delay-before-dispatch>-1</default-delay-before-dispatch>
<redistribution-delay>0</redistribution-delay>
<send-to-dla-on-no-route>true</send-to-dla-on-no-route>
<slow-consumer-threshold>-1</slow-consumer-threshold>
<slow-consumer-policy>NOTIFY</slow-consumer-policy>
<slow-consumer-check-period>5</slow-consumer-check-period>
<!-- We disable the automatic creation of queue or topic -->
<auto-create-queues>false</auto-create-queues>
<auto-delete-queues>true</auto-delete-queues>
<auto-delete-created-queues>false</auto-delete-created-queues>
<auto-delete-queues-delay>30000</auto-delete-queues-delay>
<auto-delete-queues-message-count>0</auto-delete-queues-message-count>
<config-delete-queues>OFF</config-delete-queues>
<!-- We disable the automatic creation of address -->
<auto-create-addresses>false</auto-create-addresses>
<auto-delete-addresses>true</auto-delete-addresses>
<auto-delete-addresses-delay>30000</auto-delete-addresses-delay>
<config-delete-addresses>OFF</config-delete-addresses>
<management-browse-page-size>200</management-browse-page-size>
<default-purge-on-no-consumers>false</default-purge-on-no-consumers>
<default-max-consumers>-1</default-max-consumers>
<default-queue-routing-type>ANYCAST</default-queue-routing-type>
<default-address-routing-type>ANYCAST</default-address-routing-type>
<default-ring-size>-1</default-ring-size>
<retroactive-message-count>0</retroactive-message-count>
<enable-metrics>true</enable-metrics>
<!-- We automatically force group rebalance and a dispatch pause during group rebalance -->
<default-group-rebalance>true</default-group-rebalance>
<default-group-rebalance-pause-dispatch>true</default-group-rebalance-pause-dispatch>
<default-group-buckets>1024</default-group-buckets>
</address-setting>
</address-settings>
<!-- Define the protocols accepted -->
<!-- See: https://activemq.apache.org/components/artemis/documentation/latest/protocols-interoperability.html -->
<acceptors>
<!-- Acceptor for only CORE protocol -->
<!-- We enable the cache destination as recommended into the documentation. See: https://activemq.apache.org/components/artemis/documentation/latest/using-jms.html -->
<acceptor name="artemis">
tcp://0.0.0.0:61616?protocols=CORE,tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;useEpoll=true,cacheDestinations=true
</acceptor>
</acceptors>
<!-- Define how to connect to another broker -->
<!-- TODO: is this useful? -->
<connectors>
<connector name="netty-connector">tcp://localhost:61616</connector>
<connector name="broker1-connector">tcp://localhost:61616</connector>
<connector name="broker2-connector">tcp://localhost:61617</connector>
</connectors>
<!-- Configure the High-Availability and broker cluster for the high-availability -->
<ha-policy>
<shared-store>
<master>
<failover-on-shutdown>true</failover-on-shutdown>
</master>
</shared-store>
</ha-policy>
<cluster-connections>
<cluster-connection name="gerico-cluster">
<connector-ref>netty-connector</connector-ref>
<static-connectors>
<connector-ref>broker1-connector</connector-ref>
<connector-ref>broker2-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<!-- <cluster-user>cluster_user</cluster-user>-->
<!-- <cluster-password>cluster_user_password</cluster-password>-->
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>72000</page-sync-timeout>
<!-- the system will enter into page mode once you hit this limit.
This is an estimate in bytes of how much the messages are using in memory
The system will use half of the available memory (-Xmx) by default for the global-max-size.
You may specify a different value here if you need to customize it to your needs.
<global-max-size>100Mb</global-max-size>
-->
<!-- Security configuration -->
<security-enabled>false</security-enabled>
<!-- Addresses and queues configuration -->
<!-- !!! DON'T FORGET TO UPDATE 'slave-broker.xml' FILE !!! -->
<addresses>
<address name="topic.test_rde">
<multicast>
<queue name="rde_receiver_1">
<durable>true</durable>
</queue>
<queue name="rde_receiver_2">
<durable>true</durable>
</queue>
</multicast>
</address>
<address name="queue.test_rde">
<anycast>
<queue name="queue.test_rde">
<durable>true</durable>
</queue>
</anycast>
</address>
</addresses>
</core>
</configuration>
The JMS configuration in the Spring Boot application is the following:
@Bean
public DynamicDestinationResolver destinationResolver() {
    return new DynamicDestinationResolver() {
        @Override
        public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException {
            if (destinationName.startsWith("topic.")) {
                pubSubDomain = true;
            } else {
                pubSubDomain = false;
            }
            return super.resolveDestinationName(session, destinationName, pubSubDomain);
        }
    };
}
@Bean
public JmsListenerContainerFactory<?> queueConnectionFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer,
        JmsErrorHandler jmsErrorHandler) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(false);
    factory.setSessionTransacted(true);
    factory.setErrorHandler(jmsErrorHandler);
    return factory;
}
@Bean
public JmsListenerContainerFactory<?> topicConnectionFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer,
        JmsErrorHandler jmsErrorHandler) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    // This provides all of Boot's defaults to this factory, including the message converter
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(true);
    factory.setSessionTransacted(true);
    factory.setSubscriptionDurable(true);
    factory.setSubscriptionShared(true);
    factory.setErrorHandler(jmsErrorHandler);
    return factory;
}
The message publisher:
@GetMapping("api/send")
public void sendDataToJms() throws InterruptedException {
    OrderRest currentService = this.applicationContext.getBean(OrderRest.class);
    for (Long i = 0L; i < 1L; i++) {
        currentService.sendTopicMessage(i);
        currentService.sendQueueMessage(i);
        Thread.sleep(200L);
    }
}

@Transactional
public void sendTopicMessage(Long id) {
    Order myMessage = new Order("--" + id.toString() + "--", new Date());
    jmsTemplate.convertAndSend("topic.test_rde", myMessage);
}

@Transactional
public void sendQueueMessage(Long id) {
    Order myMessage = new Order("--" + id.toString() + "--", new Date());
    jmsTemplate.convertAndSend("queue.test_rde", myMessage);
}
And the listeners:
@Transactional
@JmsListener(destination = "topic.test_rde", containerFactory = "topicConnectionFactory", subscription = "rde_receiver_1")
public void receiveMessage_rde_1(@Payload Order order, @Headers MessageHeaders headers, Message message, Session session) {
    log.info("---> Consumer1 - rde_receiver_1 - " + order.getContent());
    throw new ValidationException("Voluntary exception", "entity", List.of(), List.of());
}

@Transactional
@JmsListener(destination = "queue.test_rde", containerFactory = "queueConnectionFactory")
public void receiveMessage_rde_queue(@Payload Order order, @Headers MessageHeaders headers, Message message, Session session) {
    log.info("---> Consumer1 - rde_receiver_queue - " + order.getContent());
    throw new ValidationException("Voluntary exception", "entity", List.of(), List.of());
}
Is it normal that the redelivery policy and the DLQ mechanism only apply to a queue (anycast queue)?
Is it possible to also apply them to a topic (multicast queue) with shared durable subscriptions?
If not, how can I keep topic behaviour but still get the redelivery and DLQ mechanisms? Should I use the divert feature of ActiveMQ Artemis?
Thanks a lot for your help.
Regards.
I have found the problem. It comes from Atomikos, which does not support JMS 2.0, only JMS 1.1.
So it's not possible to have a shared durable subscription on a topic together with two-phase commit and the redelivery policy at the same time.
We have a WebSphere JMS Queue and QueueConnectionFactory with IBM MQ as the provider. We cannot connect to IBM MQ directly.
I have the configuration below - a jmsConnectionFactory bean that creates the factory using InitialContext as expected. THE_QUEUE is the JNDI name of my queue.
<int-jms:inbound-channel-adapter channel="transformedChannel" connection-factory="jmsConnectionFactory"
destination-name="THE_QUEUE">
<int:poller fixed-delay="500" />
</int-jms:inbound-channel-adapter>
It is failing with this error:
Caused by: com.ibm.msg.client.jms.DetailedInvalidDestinationException:
JMSWMQ2008: Failed to open MQ queue 'THE_QUEUE'. JMS attempted to
perform an MQOPEN, but WebSphere MQ reported an error. Use the linked
exception to determine the cause of this error. Check that the
specified queue and queue manager are defined correctly.
My outbound channel configuration
<int-jms:outbound-channel-adapter id="respTopic"
connection-factory="jmsConnectionFactory"
destination-name="THE_REPLYQ" channel="process-channel"/>
If I use plain Java code, it works:
creating a MessageProducer from session.createProducer and sending a message,
creating a MessageConsumer via queuesession.createConsumer(outQueue) and calling receive().
Please can you help with how I can create JMS inbound and outbound adapters for these queues using Spring Integration and process messages?
EDIT:
@Bean
public ConnectionFactory jmsConnectionFactory() {
    ConnectionFactory connectionFactory = null;
    Context ctx = null;
    Properties p = new Properties();
    p.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.websphere.naming.WsnInitialContextFactory");
    p.put(Context.PROVIDER_URL, "iiop://hostname.sl");
    p.put("com.ibm.CORBA.ORBInit", "com.ibm.ws.sib.client.ORB");
    try {
        ctx = new InitialContext(p);
        if (null != ctx)
            System.out.println("Got naming context");
        connectionFactory = (QueueConnectionFactory) ctx.lookup("BDQCF");
    }...
@Bean
public JmsListenerContainerFactory<?> mydbFactory(ConnectionFactory jmsConnectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, jmsConnectionFactory);
    return factory;
}
The code and configuration work for a queue that uses the WebSphere default JMS provider.
EDIT 2: Code added after a comment:
<int:channel id="jmsInputChannel" />
<jee:jndi-lookup id="naarconnectionFactory" jndi-name="MQ_QUEUE" resource-ref="false">
<jee:environment>
java.naming.factory.initial=com.ibm.websphere.naming.WsnInitialContextFactory
java.naming.provider.url=iiop://host.ws
</jee:environment>
</jee:jndi-lookup>
<int-jms:inbound-channel-adapter id="jmsIn" channel="jmsInputChannel"
connection-factory="jmsNAARConnectionFactory" destination-name="naarconnectionFactory">
<int:poller fixed-delay="500" />
</int-jms:inbound-channel-adapter>
You can't just use the JNDI name there - you must perform a JNDI lookup to resolve it to a Destination - see the Spring JMS Documentation.
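For example (just a sketch, reusing the JNDI environment shown above; the theQueue bean id is only an illustrative name), you could look the queue up once via jee:jndi-lookup and reference the resulting Destination bean through the adapter's destination attribute instead of destination-name:
<!-- "theQueue" is a hypothetical bean id; THE_QUEUE is the JNDI name of the queue -->
<jee:jndi-lookup id="theQueue" jndi-name="THE_QUEUE" resource-ref="false">
    <jee:environment>
        java.naming.factory.initial=com.ibm.websphere.naming.WsnInitialContextFactory
        java.naming.provider.url=iiop://hostname.sl
    </jee:environment>
</jee:jndi-lookup>

<int-jms:inbound-channel-adapter id="jmsIn" channel="transformedChannel"
        connection-factory="jmsConnectionFactory" destination="theQueue">
    <int:poller fixed-delay="500" />
</int-jms:inbound-channel-adapter>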
I'm using Spring Boot and ActiveMQ. I want to send and receive messages from a topic. This is working well. My code looks like this:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = {
        JmsSpike.TestListener1.class,
        JmsSpike.TestListener2.class,
        JmsSpike.Config.class
})
@TestPropertySource(properties = {
        "spring.activemq.broker-url: tcp://localhost:61616",
        "spring.activemq.password: admin",
        "spring.activemq.user: admin",
        "spring.jms.pub-sub-domain: true", // queue vs. topic
})
@EnableJms
@EnableAutoConfiguration
public class JmsSpike {

    @Autowired
    private JmsTemplate jmsTemplate;

    @Test
    public void sendMessage() throws Exception {
        sendMessageInThread();
        Thread.sleep(10000);
    }

    private void sendMessageInThread() {
        new Thread() {
            public void run() {
                jmsTemplate.convertAndSend("asx2ras", "I'm a test");
            }
        }.start();
    }

    @TestComponent
    protected static class TestListener1 {
        @JmsListener(destination = "asx2ras")
        public void receiveMessage(String message) {
            System.out.println("****************** 1 *******************");
            System.out.println("Hey 1! I got a message: " + message);
            System.out.println("****************** 1 *******************");
        }
    }

    @TestComponent
    protected static class TestListener2 {
        @JmsListener(destination = "asx2ras")
        public void receiveMessage(String message) {
            throw new RuntimeException("Nope");
        }
    }

    @Configuration
    protected static class Config {

        @Bean
        public RedeliveryPolicy redeliveryPolicy() {
            RedeliveryPolicy topicPolicy = new RedeliveryPolicy();
            topicPolicy.setMaximumRedeliveries(1);
            return topicPolicy;
        }

        @Bean
        public ConnectionFactory connectionFactory(@Value("${spring.activemq.user}") final String username,
                                                   @Value("${spring.activemq.password}") final String password,
                                                   @Value("${spring.activemq.broker-url}") final String brokerUrl) {
            ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(username, password, brokerUrl);
            cf.setRedeliveryPolicy(redeliveryPolicy());
            return cf;
        }
    }
}
I can send a message
I receive the message with both listeners
One listener will always work and just print some console message
The other listener will always throw an exception
I set the retry to "1", so the failing listener will be called one more time after the exception was thrown. However, after the retry, the message is not delivered to an error queue (or error topic). How can I send the message to an error queue so that I can call the failing listener again later?
Note that I only want to call the failing listener again, not all listeners on the topic. Is that possible?
EDIT
Here's my activemq.xml (just the broker tag):
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
According to the official documentation, you can override the broker configurations on the client side:
The broker transmits the default delivery policy that he prefers to a
client connection in his BrokerInfo command packet. But the client can
override the policy settings by using the
ActiveMQConnection.getRedeliveryPolicy() method:
RedeliveryPolicy policy = connection.getRedeliveryPolicy();
policy.setInitialRedeliveryDelay(500);
policy.setBackOffMultiplier(2);
policy.setUseExponentialBackOff(true);
policy.setMaximumRedeliveries(2);
So, the way you have configured the redelivery policy seems OK.
The only issue I see is that when you create a new RedeliveryPolicy instance and set only a single field (topicPolicy.setMaximumRedeliveries(1)), all the other primitive fields are left at their default values. You should probably set the maximum redeliveries on the connection factory's existing redelivery policy instead:
RedeliveryPolicy policy = cf.getRedeliveryPolicy();
policy.setMaximumRedeliveries(1);
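Applied to the Config class above, the connectionFactory bean might then look roughly like this (a sketch only; it reuses the same property placeholders as the test):
@Bean
public ConnectionFactory connectionFactory(@Value("${spring.activemq.user}") final String username,
                                           @Value("${spring.activemq.password}") final String password,
                                           @Value("${spring.activemq.broker-url}") final String brokerUrl) {
    ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(username, password, brokerUrl);
    // Adjust the factory's existing policy instead of replacing it,
    // so the other redelivery fields keep their defaults.
    cf.getRedeliveryPolicy().setMaximumRedeliveries(1);
    return cf;
}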
Edit
Also, make sure that the container behind @JmsListener is not using CLIENT_ACKNOWLEDGE. According to this thread, messages will not get redelivered when CLIENT_ACKNOWLEDGE is used.
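If you want to pin the acknowledge mode down explicitly, a minimal sketch of a listener container factory (assuming Spring Boot's setup, where the bean name jmsListenerContainerFactory is the default one used by @JmsListener) could be:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setPubSubDomain(true); // matches spring.jms.pub-sub-domain: true
    // Transacted sessions (or AUTO_ACKNOWLEDGE) let a thrown exception trigger redelivery;
    // with CLIENT_ACKNOWLEDGE the message may never be redelivered.
    factory.setSessionTransacted(true);
    return factory;
}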
I have a requirement to push messages to a single queue but receive them in more than one place (a Java class).
For that I used a topic exchange and bound it to a queue that receives messages based on different patterns. Now when I try to receive them at the other end, the listener container sends the messages to both listeners rather than splitting them based on the configured patterns.
I have attached the configuration and the code below for a quick look.
<?xml version="1.0" encoding="UTF-8" ?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:rabbit="http://www.springframework.org/schema/rabbit"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/rabbit http://www.springframework.org/schema/rabbit/spring-rabbit.xsd">
<rabbit:connection-factory id="connectionFactory" host="localhost" username="guest" password="guest" />
<rabbit:admin connection-factory="connectionFactory" />
<rabbit:template id="pgTemplate" connection-factory="connectionFactory" exchange="PG-EXCHANGE"/>
<rabbit:queue id="pgQueue" />
<rabbit:topic-exchange id="pgExchange" name="PG-EXCHANGE">
<rabbit:bindings>
<rabbit:binding queue="pgQueue" pattern="pg"></rabbit:binding>
<rabbit:binding queue="pgQueue" pattern="txn"></rabbit:binding>
</rabbit:bindings>
</rabbit:topic-exchange>
<bean id="pgReceiver" class="com.pg.mq.service.impl.MessageReceiver"/>
<bean id="txnReceiver" class="com.pg.mq.service.impl.MessageReceiver"/>
<rabbit:listener-container id="ListenerContainer" connection-factory="connectionFactory">
<rabbit:listener ref="pgReceiver" queues="pgQueue" method="handleMessage"/>
<rabbit:listener ref="txnReceiver" queues="pgQueue" method="handleMessage"/>
</rabbit:listener-container>
</beans>
Message Sender
public String pushMessagePG(Object object) {
if(object != null && object instanceof Rabbit)
pgTemplate.convertAndSend("pg", object); // send
return null;
}
Message Receivers
PG RECEIVER
public void handleMessage(Rabbit message) {
System.out.println("inside Message Receiver");
System.out.println("Listener received message----->" + message);
}
TXN RECEIVER
public void handleMessage(Rabbit message) {
System.out.println("inside TXN Message Receiver");
System.out.println("Listener received message----->" + message);
}
Calling Code
Sender sender = (Sender) context.getBean("messageSender");
for(int i = 0; i < 30; i++) sender.pushMessagePG(new Rabbit(5));
Thread.sleep(2000);
for(int i = 0; i < 30; i++) sender.pushMessageTXN(new Rabbit(2));
Sorry, but you have misunderstood the AMQP specification a bit.
Messages are sent to the exchange with a routingKey.
Queues are bound to the exchange by a routingKey, or by a pattern in the case of a topic exchange.
Only a single consumer receives any given message from a queue.
There is no distribution mechanism in AMQP comparable to the Topic concept in JMS.
A topic exchange places messages into all queues whose binding patterns match the routingKey.
This article describes that in the best way.
Since you bind the same queue for both patterns, all messages are placed in the same queue.
On the other side, consumers listen on queues and only on queues; they know nothing about the routingKey.
Since you have two listeners on the same queue, it is no surprise that both of them handle messages from that queue, independently of the routingKey.
You really should use different queues for different patterns, as sketched below. Or consider using Spring Integration with its message routing power; you can use the standard AmqpHeaders.RECEIVED_ROUTING_KEY (Spring Integration) header for that case.
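A rough sketch of the two-queue variant, reusing the beans from the question (the txnQueue id is just an illustrative name):
<rabbit:queue id="pgQueue" />
<rabbit:queue id="txnQueue" />
<rabbit:topic-exchange id="pgExchange" name="PG-EXCHANGE">
    <rabbit:bindings>
        <rabbit:binding queue="pgQueue" pattern="pg" />
        <rabbit:binding queue="txnQueue" pattern="txn" />
    </rabbit:bindings>
</rabbit:topic-exchange>
<rabbit:listener-container id="ListenerContainer" connection-factory="connectionFactory">
    <rabbit:listener ref="pgReceiver" queues="pgQueue" method="handleMessage" />
    <rabbit:listener ref="txnReceiver" queues="txnQueue" method="handleMessage" />
</rabbit:listener-container>
This way messages routed with key pg land only in pgQueue and messages routed with key txn land only in txnQueue, so each receiver sees only its own messages.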
I am currently trying to create a JMS client for a JMS server, both using HornetQ. I did not code the server and I don't know much about how it works; I only know how to connect to it: no username, no password, and the address is jnp://x.y.z.t:1099.
I am trying to connect to the server without using JNDI and I am having some trouble. In fact I have found this example: http://anonsvn.jboss.org/repos/hornetq/trunk/examples/jms/instantiate-connection-factory/
and I don't know what I need to change in the XML files and in the code to make it work.
I had this code, with JNDI:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* A simple polymorphic JMS producer which can work with Queues or Topics which
* uses JNDI to lookup the JMS connection factory and destination
*
*
*/
public final class SimpleProducer {
private static final Logger LOG = LoggerFactory.getLogger(SimpleProducer.class);
private SimpleProducer() {
}
/**
* @param args the destination name to send to and optionally, the number of
* messages to send
*/
public static void main(String[] args) {
Context jndiContext = null;
ConnectionFactory connectionFactory = null;
Connection connection = null;
Session session = null;
Destination destination = null;
MessageProducer producer = null;
String destinationName = null;
final int numMsgs;
if ((args.length < 1) || (args.length > 2)) {
LOG.info("Usage: java SimpleProducer <destination-name> [<number-of-messages>]");
System.exit(1);
}
destinationName = args[0];
LOG.info("Destination name is " + destinationName);
if (args.length == 2) {
numMsgs = (new Integer(args[1])).intValue();
} else {
numMsgs = 1;
}
/*
* Create a JNDI API InitialContext object
*/
try {
jndiContext = new InitialContext();
} catch (NamingException e) {
LOG.info("Could not create JNDI API context: " + e.toString());
System.exit(1);
}
/*
* Look up connection factory and destination.
*/
try {
connectionFactory = (ConnectionFactory)jndiContext.lookup("ConnectionFactory");
destination = (Destination)jndiContext.lookup(destinationName);
} catch (NamingException e) {
LOG.info("JNDI API lookup failed: " + e);
System.exit(1);
}
/*
* Create connection. Create session from connection; false means
* session is not transacted. Create sender and text message. Send
* messages, varying text slightly. Send end-of-messages message.
* Finally, close connection.
*/
try {
connection = connectionFactory.createConnection();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
producer = session.createProducer(destination);
TextMessage message = session.createTextMessage();
for (int i = 0; i < numMsgs; i++) {
message.setText("This is message " + (i + 1));
LOG.info("Sending message: " + message.getText());
producer.send(message);
}
/*
* Send a non-text control message indicating end of messages.
*/
producer.send(session.createMessage());
} catch (JMSException e) {
LOG.info("Exception occurred: " + e);
} finally {
if (connection != null) {
try {
connection.close();
} catch (JMSException e) {
}
}
}
}
}
with this jndi.properties file:
java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
java.naming.provider.url = jnp://x.y.z.t:1099
and everything worked fine. But now I need to do the same thing without JNDI.
The example I gave above (http://anonsvn.jboss.org/repos/hornetq/trunk/examples/jms/instantiate-connection-factory/) should work, but I don't know what to change in the config to make it work. I have never used a JMS client this way, so I'm completely lost!
These are the config files I'm talking about:
http://anonsvn.jboss.org/repos/hornetq/trunk/examples/jms/instantiate-connection-factory/server0/. I couldn't find on the net what these files correspond to; I'm confused.
Also, the Java code is here:
http://anonsvn.jboss.org/repos/hornetq/trunk/examples/jms/instantiate-connection-factory/src/org/hornetq/jms/example/InstantiateConnectionFactoryExample.java
Thank you in advance
----- EDIT
This is the latest version of my code:
import java.util.HashMap;
import java.util.Map;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.jms.HornetQJMSClient;
import org.hornetq.api.jms.JMSFactoryType;
import org.hornetq.common.example.HornetQExample;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;
import org.hornetq.core.remoting.impl.netty.TransportConstants;
import org.hornetq.jms.client.HornetQConnectionFactory;
public class Snippet extends HornetQExample
{
public static void main(final String[] args)
{
new Snippet().run(args);
}
@Override
public boolean runExample() throws Exception
{
Connection connection = null;
try
{
// Step 1. Directly instantiate the JMS Queue object.
Queue queue = HornetQJMSClient.createQueue("exampleQueue");
// Step 2. Instantiate the TransportConfiguration object which contains the knowledge of what transport to use,
// The server port etc.
Map<String, Object> connectionParams = new HashMap<String, Object>();
connectionParams.put(TransportConstants.PORT_PROP_NAME, 5446);
//My server's port:
//connectionParams.put(TransportConstants.PORT_PROP_NAME, 1099);
TransportConfiguration transportConfiguration = new TransportConfiguration(NettyConnectorFactory.class.getName(),
connectionParams);
// Step 3 Directly instantiate the JMS ConnectionFactory object using that TransportConfiguration
HornetQConnectionFactory cf = HornetQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transportConfiguration);
// Step 4.Create a JMS Connection
connection = cf.createConnection();
// Step 5. Create a JMS Session
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
// Step 6. Create a JMS Message Producer
MessageProducer producer = session.createProducer(queue);
// Step 7. Create a Text Message
TextMessage message = session.createTextMessage("This is a text message");
System.out.println("Sent message: " + message.getText());
// Step 8. Send the Message
producer.send(message);
// Step 9. Create a JMS Message Consumer
MessageConsumer messageConsumer = session.createConsumer(queue);
// Step 10. Start the Connection
connection.start();
// Step 11. Receive the message
TextMessage messageReceived = (TextMessage)messageConsumer.receive(5000);
System.out.println("Received message: " + messageReceived.getText());
return true;
}
finally
{
if (connection != null)
{
connection.close();
}
}
}
}
This is my hornetq-beans.xml (I disabled JNDI):
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">
<bean name="Naming" class="org.jnp.server.NamingBeanImpl"/>
<!-- JNDI server. Disable this if you don't want JNDI -->
<!-- <bean name="JNDIServer" class="org.jnp.server.Main">
<property name="namingInfo">
<inject bean="Naming"/>
</property>
<property name="port">1099</property>
<!-- <property name="bindAddress">localhost</property>
<property name="bindAddress">jnp://X.Y.Z.T</property>
<property name="rmiPort">1098</property>
<!-- <property name="rmiBindAddress">localhost</property>
<property name="bindAddress">jnp://X.Y.Z.T</property>
</bean>-->
<!-- MBean server -->
<bean name="MBeanServer" class="javax.management.MBeanServer">
<constructor factoryClass="java.lang.management.ManagementFactory"
factoryMethod="getPlatformMBeanServer"/>
</bean>
<!-- The core configuration -->
<bean name="Configuration" class="org.hornetq.core.config.impl.FileConfiguration"/>
<!-- The security manager -->
<bean name="HornetQSecurityManager" class="org.hornetq.spi.core.security.HornetQSecurityManagerImpl">
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The core server -->
<bean name="HornetQServer" class="org.hornetq.core.server.impl.HornetQServerImpl">
<constructor>
<parameter>
<inject bean="Configuration"/>
</parameter>
<parameter>
<inject bean="MBeanServer"/>
</parameter>
<parameter>
<inject bean="HornetQSecurityManager"/>
</parameter>
</constructor>
<start ignored="true"/>
<stop ignored="true"/>
</bean>
<!-- The JMS server -->
<bean name="JMSServerManager" class="org.hornetq.jms.server.impl.JMSServerManagerImpl">
<constructor>
<parameter>
<inject bean="HornetQServer"/>
</parameter>
</constructor>
</bean>
</deployment>
and my hornetq-jms.xml (same as in the example)
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">
<!--the queue used by the example-->
<queue name="exampleQueue">
<entry name="/queue/exampleQueue"/>
</queue>
</configuration>
hornetq-users.xml (there is no need for users to connect to the JMS server, so I commented it out):
<configuration xmlns="urn:hornetq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-users.xsd">
<!-- the default user. this is used where username is null
<defaultuser name="guest" password="guest">
<role name="guest"/>
</defaultuser>-->
</configuration>
My hornetq-configuration.xml: I am not sure whether I have to put jnp:// in the connectors and acceptors or not. Actually I don't really understand much of this...
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
<!-- Connectors -->
<connectors>
<connector name="netty-connector">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="jnp://X.Y.Z.T"/>
<param key="port" value="5445"/>
</connector>
</connectors>
<!-- Acceptors -->
<acceptors>
<acceptor name="netty-acceptor">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="jnp://X.Y.Z.T"/>
<param key="port" value="5445"/>
</acceptor>
</acceptors>
<!-- Other config -->
<security-settings>
<!--security for example queue-->
<security-setting match="jms.queue.exampleQueue">
<permission type="createDurableQueue" roles="guest"/>
<permission type="deleteDurableQueue" roles="guest"/>
<permission type="createNonDurableQueue" roles="guest"/>
<permission type="deleteNonDurableQueue" roles="guest"/>
<permission type="consume" roles="guest"/>
<permission type="send" roles="guest"/>
</security-setting>
</security-settings>
</configuration>
What I get with this code is:
HornetQException[errorCode=2 message=Cannot connect to server(s). Tried with all available servers.]
at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:619)
at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:601)
I am using version 2.2.2, by the way ;)
If you want to use HornetQ's native client, look at these documents:
http://docs.jboss.org/hornetq/2.2.2.Final/user-manual/en/html_single/#d0e1611
This will likely also help, with direct connections:
http://docs.jboss.org/hornetq/2.2.2.Final/user-manual/en/html_single/#d0e2387
You'll want to use their local core client examples.
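As a very rough sketch of the direct-connection approach (my assumptions: the broker's Netty acceptor really listens on X.Y.Z.T:5445, and the host values in the connector/acceptor config should be a plain hostname or IP rather than a jnp:// URL, since jnp:// only applies to the JNDI server):
// Hypothetical host value; use the broker machine's real IP or hostname.
Map<String, Object> connectionParams = new HashMap<String, Object>();
connectionParams.put(TransportConstants.HOST_PROP_NAME, "X.Y.Z.T");
connectionParams.put(TransportConstants.PORT_PROP_NAME, 5445); // the Netty acceptor port, not the JNDI port 1099

TransportConfiguration transportConfiguration =
        new TransportConfiguration(NettyConnectorFactory.class.getName(), connectionParams);

// Directly instantiate the JMS ConnectionFactory, no JNDI lookup involved.
HornetQConnectionFactory cf =
        HornetQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transportConfiguration);
Connection connection = cf.createConnection();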