I am using ActiveMQ Artemis 2.19.1. I created producer and consumer apps using Spring Boot. I need multiple instances of the consumer to receive all the messages (multicast). I configured a Last Value Queue like this (broker.xml):
<address-settings>
<address-setting match="quote.#">
<max-size-bytes>1000000000</max-size-bytes> <!-- 1GB -->
<address-full-policy>BLOCK</address-full-policy>
<default-last-value-key>symbol</default-last-value-key>
<default-last-value-queue>true</default-last-value-queue>
<default-non-destructive>true</default-non-destructive>
</address-setting>
...
</address-settings>
Sending looks like this and appears to work correctly; "symbol" is the LVQ key.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Service;

@Service
public class DispatcherService {

    @Autowired
    JmsTemplate jmsTemplate;

    // Destination name matching the "quote.#" address-setting; set from configuration in the real app
    private String jmsQueue;

    public void sendMessageA(String message) {
        jmsTemplate.convertAndSend(jmsQueue, message, m -> {
            m.setStringProperty("symbol", "ABC");
            return m;
        });
    }
}
If the Spring Boot application.properties has:
spring.jms.pub-sub-domain=true
...then all clients receive all messages when published (good). However, the most recent message is not delivered to new clients when they start and subscribe to the topic.
If instead using:
spring.jms.pub-sub-domain=false
I can see the last message remains in the Last Value Queue (good) and connecting consumers get the last message. However, as messages are published they're distributed round-robin (anycast), not all messages to all consumers.
How can I make sure clients connecting to a LVQ receive the most recent message then all future messages, not just a round-robin distribution of future messages?
EDIT:
Doing this works. Just leave spring.jms.pub-sub-domain=true and set retroactive-message-count greater than the number of symbols that may be encountered, otherwise some will not be retained:
<address-setting match="quotes">
<retroactive-message-count>100000</retroactive-message-count>
</address-setting>
<address-setting match="*.*.*.quotes.*.retro">
<default-last-value-key>symbol</default-last-value-key>
</address-setting>
It sounds to me like everything is working as designed. I believe your expectations are being thwarted because you're using pub/sub (i.e. JMS topics).
Let me provide a bit of background. When a JMS client creates a subscription on a topic the broker responds by creating a multicast queue on the address with the same name. The queue is named according to the kind of subscription it is. If it is a non-durable subscription then the queue is named with a UUID. If it is a durable subscription then the queue is named according to the subscription name provided by the client and the client ID (if available). When a message is sent to the address it is put in all the multicast queues bound to that address.
Therefore, when a new non-durable subscription is created a new queue for that subscription is also created which means that the subscriber will receive none of the messages sent to the topic prior to the creation of the subscription. This is the expected behavior for JMS topics (i.e. normal pub/sub semantics). Also, since the queue for a non-durable subscription is only available while the subscriber is connected that means there's no way to enforce LVQ semantics since any message which arrives in the queue will be immediately dispatched to the consumer. In short, LVQ with JMS topics doesn't make a lot of sense.
The behavior changes when you use a JMS queue because the queue is always there to receive messages. Consumers can come and go as they please while the broker enforces LVQ semantics.
One possible solution would be to create a special "initialization" queue where consumers could initially connect to get the latest information and after that they could subscribe to the JMS topic to get the pub/sub semantics you need. You could use a divert to make this transparent for the applications sending the messages so they can continue to just send to the JMS topic. Here's sample configuration:
<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core">
...
<diverts>
<divert name="myDivert">
<address>myTopic</address>
<forwarding-address>initQueue</forwarding-address>
<exclusive>false</exclusive>
</divert>
</diverts>
...
<addresses>
<address name="myTopic">
<multicast/>
</address>
<address name="initQueue">
<anycast>
<queue name="initQueue" last-value-key="symbol" non-destructive="true" />
</anycast>
</address>
...
</addresses>
</core>
</configuration>
Using this configuration, every message sent to the JMS topic myTopic will be transparently sent to initQueue as well. This queue will keep only the most up-to-date message for each symbol since it uses last-value semantics. Also, those up-to-date messages will stay in the queue for any subsequent consumer since the queue is non-destructive.
The only difficulty I anticipate here is with Spring, which may not give you the flexibility to create the initial queue consumer and then create a topic subscriber. If you used the JMS API directly this would be a relatively simple matter.
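For illustration, here is a rough sketch of that "use the JMS API directly" approach with the JMS 2.0 API; the broker URL and the handle() method are placeholders, and the queue/topic names come from the configuration above:

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class QuoteSubscriber {

    public void start() {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        JMSContext context = cf.createContext();

        // 1. Catch up: read the retained last-value messages from the init queue.
        //    The queue is non-destructive, so this doesn't remove anything for other consumers.
        try (JMSConsumer initConsumer = context.createConsumer(context.createQueue("initQueue"))) {
            Message latest;
            while ((latest = initConsumer.receive(500)) != null) {
                handle(latest);
            }
        }

        // 2. Go live: subscribe to the topic for everything published from now on.
        context.createConsumer(context.createTopic("myTopic")).setMessageListener(this::handle);
    }

    private void handle(Message message) {
        // application-specific processing goes here
    }
}

Note there is a small window between draining initQueue and attaching the topic consumer during which an update could be missed until the next message for that symbol arrives.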
Another potential solution would be to use retroactive addresses. The main thing to do here would be to ensure the internal ring queues are LVQs. You can do that with the default-last-value-key address-setting. See the documentation for details on the match to use.
Related
I am trying to listen for messages through a Spring Boot application using an IBM MQ topic subscription.
Available info (Provided by MQ Admin):
Topic name
Host
Port
QueueManager
BrokerDurableSubscriptionQueue
I am trying to set BrokerDurableSubscriptionQueue property in MQConnectionFactory.
I can find mqConnectionFactory.setBrokerSubQueue(queueName), which I guess can be used for a non-durable subscription.
But I cannot find a similar property for a durable subscription.
However, I can see that the MQTopic class has a setBrokerDurSubQueue property, but I am not sure how I can make use of an MQTopic object in my case.
I am using the code below:
MQConnectionFactory:
@Bean
public MQTopicConnectionFactory topicConnectionFactory() {
    MQTopicConnectionFactory mqTopicConnectionFactory = new MQTopicConnectionFactory();
    mqTopicConnectionFactory.setHostName(); // MQ host name
    mqTopicConnectionFactory.setPort(); // MQ port
    mqTopicConnectionFactory.setQueueManager(); // MQ queue manager
    mqTopicConnectionFactory.setChannel(); // MQ channel name
    mqTopicConnectionFactory.setTransportType(1); // 1 = client transport
    mqTopicConnectionFactory.setSSLCipherSuite(); // TLS cipher suite name
    return mqTopicConnectionFactory;
}
@Bean
public JmsListenerContainerFactory<?> topicListenerFactory(MQTopicConnectionFactory mqTopicConnectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, mqTopicConnectionFactory);
    factory.setPubSubDomain(true);
    factory.setSubscriptionDurable(true);
    return factory;
}
Listener:
@JmsListener(
    destination = "someTopic",
    subscription = "someTopic",
    containerFactory = "topicListenerFactory"
)
public void receiveMessage(String msg) {
    repository.save(msg);
}
Background:
When you provide a specific queue for IBM MQ to use when you subscribe to a topic it is called an unmanaged subscription, because MQ is not managing the underlying queue since you have provided it.
When a queue is not provided it is called a managed subscription; in this case MQ creates a queue for you to hold the published messages.
If it is a non-durable subscription the queue created is a temporary dynamic queue with a name like:
SYSTEM.MANAGED.NDURABLE.<8 hex characters>
If it is a durable subscription the queue created is a permanent dynamic queue with a name like:
SYSTEM.MANAGED.DURABLE.<8 hex characters>
What you have uncovered is that the IBM MQ classes for JMS API only support managed subscriptions.
Suggestions:
I can suggest two options if you want to use the IBM MQ classes for JMS API to receive messages published to a topic on a specific queue:
Have an MQ admin set up an administrative subscription on the queue manager. This can be done a few different ways; the example below uses an MQSC command.
DEFINE SUB('XYZ') TOPICSTR('SOME/TOPIC') DEST(SOME.QUEUE)
Create a utility app using the IBM MQ classes for Java that can open a queue and create a durable subscription with a provided queue. The only purpose of this app would be to subscribe and unsubscribe a provided queue; it would not be used to consume any of the published messages.
For both options above you would have the IBM MQ classes for JMS API application open the queue to consume the published messages; for all intents and purposes it would not know or need to know that the messages were published to a topic. The messages will still contain JMS headers showing the topic string where they were published, so you can inspect this if required. You could also subscribe multiple topics to a single queue if you like.
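As a sketch (assuming option 1's administrative subscription delivers into SOME.QUEUE, and reusing the repository and connection factory from your code), the consuming application could then read it as a plain queue with a point-to-point container factory:

@Bean
public JmsListenerContainerFactory<?> queueListenerFactory(MQConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(false); // point-to-point: read the queue that the subscription feeds
    return factory;
}

@JmsListener(destination = "SOME.QUEUE", containerFactory = "queueListenerFactory")
public void receivePublishedMessage(String msg) {
    // the message was published to a topic, but this listener only ever sees the queue
    repository.save(msg);
}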
I am consuming messages from JMS ActiveMQ using the following code:
<jms:message-driven-channel-adapter
id="helloJMSAdapater" destination="helloJMSQueue" connection-factory="jmsConnectionfactory"
channel="helloChannel" extract-payload="true" />
<integration:channel id="helloChannel" />
My requirement is to consume from here and post it to Kafka outbound adapter. Using the below config:
<int-kafka:outbound-channel-adapter
id="kafkaOutboundChannelAdapter"
kafka-producer-context-ref="kafkaProducerContext"
channel="inputToKafka">
</int-kafka:outbound-channel-adapter>
Here is what I want to achieve:
My queue is a durable topic and I don't want to acknowledge the records unless they are successfully published to Kafka. In short, I want transactional behaviour from consuming the message from JMS to publishing it to Kafka.
I noticed that my messages are immediately dequeued, and if processing encounters an exception, I am unable to reprocess them. I don't want that to happen.
Also, when Kafka encounters an issue, I want the message to be returned to some method so that I can persist the failed message and, as said before, not acknowledge it.
I am really struggling to get it to work. Can someone please help me out?
You can have a transaction-manager on the <jms:message-driven-channel-adapter> to start a transaction.
When the <int-kafka:outbound-channel-adapter> throws an exception it causes the transaction to be rolled back and therefore the message will be requeued.
If you are interested in persisting errors, there is an error-channel option on the <jms:message-driven-channel-adapter>, but you still have to re-throw the exception to let the transaction roll back.
To make all of that work you should be sure that there is only a single thread from the beginning to the end: no <queue> or executor channel in the flow.
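For example, a minimal sketch of the transaction manager side (Java config is assumed here; an equivalent <bean> definition works the same way, and the adapter would reference it via transaction-manager="jmsTransactionManager"):

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.JmsTransactionManager;

@Configuration
public class JmsTxConfig {

    // Local JMS transaction manager: the message-driven adapter starts the transaction,
    // and an exception thrown by the Kafka adapter downstream rolls it back so the broker redelivers.
    @Bean
    public JmsTransactionManager jmsTransactionManager(ConnectionFactory jmsConnectionfactory) {
        return new JmsTransactionManager(jmsConnectionfactory);
    }
}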
Also, it isn't clear why you are still using such an old Apache Kafka version...
I have a message in one queue and need to send it to another (destination) queue; both are ActiveMQ brokers. When the destination is down the message remains in the source queue. I need to check at regular intervals whether the destination is up or not; if it is up I need to send the message to the destination. I'm having difficulty checking the destination's availability.
Please help me. Thanks.
I think in general, this kind of problem is best solved using transactions.
I'm assuming you are working with two different ActiveMQ brokers, which leads to the chance that the destination queue is not available.
In the simplest case, you could accomplish your goal this way:
Start JMS Transaction
Receive message from queue A on broker 1
Do any required logic and/or transformation
Publish message to queue B on broker 2
If successful, commit your JMS transaction
If not, rollback your JMS transaction
Example:
<flow name="simpleExample">
<jms:inbound-endpoint queue="queueA" connector-ref="broker1">
<jms:transaction action="ALWAYS_BEGIN"/>
</jms:inbound-endpoint>
<flow-ref name="doLogic" />
<jms:outbound-endpoint queue="queueB" connector-ref="broker2">
<jms:transaction action="ALWAYS_JOIN" />
</jms:outbound-endpoint>
</flow>
When the rollback occurs, this method will immediately retry. If you want to control how long to wait before trying again, configure the redelivery policy on the ActiveMQ connector for Broker 1.
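For example, a minimal sketch of setting that policy on the ActiveMQ connection factory used by the Broker 1 connector (the broker URL and the delay/retry values are just illustrative):

import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class Broker1ConnectionFactoryBuilder {

    public static ConnectionFactory build() {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://broker1:61616");

        RedeliveryPolicy policy = new RedeliveryPolicy();
        policy.setInitialRedeliveryDelay(5000); // wait 5 seconds before the first retry
        policy.setRedeliveryDelay(5000);        // and 5 seconds between subsequent retries
        policy.setMaximumRedeliveries(10);      // after 10 failed attempts the message goes to the DLQ
        factory.setRedeliveryPolicy(policy);
        return factory;
    }
}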
My consumer, e.g. the service activator that is consuming messages coming from the ActiveMQ fromChannel, should be restarted when an exception occurs or ActiveMQ fails. How can I do that for the following Spring Integration context?
<!-- RECEIVER. message driven adapter -> jmsInChannel -> activator. -->
<si:channel id="fromChannel"/>
<int-jms:message-driven-channel-adapter id="messageDrivenAdapter"
channel="fromChannel" destination="forward" connection-factory="connectionFactory"
max-concurrent-consumers="2" auto-startup="true" acknowledge="auto" extract-payload="false" />
<si:service-activator id ="activator"
input-channel="fromChannel"
ref="messageService"
method="process"/>
<bean id="messageService" class="com.ucware.ucpo.forward.jms.MessageService"/>
My first idea was to use a retry advice and add it to the service, but I am not sure if this is the right solution for unhandled exceptions. I would also like the receiver to restart if the ActiveMQ server is down.
The listener container within the message-driven-channel-adapter will automatically keep trying to reconnect when it loses connectivity to the broker.
If you set acknowledge="transacted" the message will be rolled back on an exception and the broker will resubmit it.
A stateful retry advice would allow you to give up and take some other action after some number of retries (but you can also configure that into ActiveMQ itself where it will send the message to a DLQ after some number of delivery attempts).
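If you do go the stateful retry advice route, a minimal sketch (the bean name and the SpEL key are assumptions) could look like the following; you would then reference the bean from a <request-handler-advice-chain> on the service activator:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.handler.advice.RequestHandlerRetryAdvice;
import org.springframework.integration.handler.advice.SpelExpressionRetryStateGenerator;

@Configuration
public class RetryAdviceConfig {

    @Bean
    public RequestHandlerRetryAdvice retryAdvice() {
        RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
        // Stateful retry keyed on the JMS message id: the exception is re-thrown on each attempt,
        // the transaction rolls back, and the broker redelivers the same message.
        advice.setRetryStateGenerator(
                new SpelExpressionRetryStateGenerator("headers['jms_messageId']"));
        return advice;
    }
}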
Reading your post I instantly thought of this video, which gives good insight into how to monitor and control an SI application through itself.
Additionally, you should have a look at the ApplicationEvent documentation of SI.
Gluing that all together, you could monitor the JMS message adapter with JMX and stop and restart it by sending an ApplicationEvent when issues occur. Regarding catching exceptions, it depends on which exceptions you actually want to handle. I'd create an errorChannel that receives exceptions thrown by components, plus a new service that restarts those components after receiving errors.
This follows your idea of leveraging Spring Retry's capabilities in SI.
We are working on a POC to use Spring Integration and RabbitMQ.
We have two modules, a producer module and a consumer module, which run in different JVMs.
The producer module listens on a folder (the input folder); as soon as a new file arrives, it creates a message, pushes it to the incoming.q.in queue, and also moves the file to the process folder.
The consumer module then picks up the messages from the incoming.q.in queue, processes the files, and moves them to the complete folder.
Both the producer and consumer code is working fine, but after some idle time the consumer module gets disconnected from RabbitMQ. We see messages in the incoming.q.in queue but the consumer is not processing them.
When I logged into the RabbitMQ admin/management tool, the "incoming.q.in" consumer list was empty and the message was "... no consumers ...".
The consumer code
<int-amqp:inbound-channel-adapter channel="inBoundfile" queue-names="incoming.q.in" connection-factory="connectionFactory"
error-channel="error.in">
</int-amqp:inbound-channel-adapter>
<int:header-enricher input-channel="inBoundfile" output-channel="serviceInbound">
<int:header name="FILEID" expression="payload.fileID" />
</int:header-enricher>
<int:service-activator ref="routerService" method="processFile" input-channel="serviceInbound" output-channel="fileHandler.router.in" />
....
I appreciate your help.
Turn on DEBUG logging on the consumer side; you'll see lots of logging and reconnection attempts if/when a connection is lost.