Using Spring Integration's own JMS outbound channel adapter, connected to WebSphere MQ v7.1:
<!-- plugging xyz channel into a JMS xyzQueue -->
<channel id="xyzChannel"/>
<jms:outbound-channel-adapter channel="xyzChannel"
destination="xyzQueue"/>
<!-- listens for all matched packets and forwards them to xyzChannel -->
<beans:bean id="xyzSender" class="com.custom.XyzSender">
<beans:constructor-arg name="messageChannel" ref="xyzChannel"/>
</beans:bean>
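The sender's implementation is not shown; a hypothetical sketch of such a class (against Spring Integration 4+, where the messaging classes live in spring-messaging) might be:
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.MessageBuilder;

public class XyzSender {

    private final MessageChannel messageChannel;

    public XyzSender(MessageChannel messageChannel) {
        this.messageChannel = messageChannel;
    }

    public void send(byte[] packet) {
        // on a DirectChannel this send runs on the caller's thread, so a
        // downstream exception would normally propagate back to this caller
        messageChannel.send(MessageBuilder.withPayload(packet).build());
    }
}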
If the MQ broker (its channel) goes down for a split second, this exception is logged:
Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2537;AMQ9558: The remote channel 'XYZ_CHANNEL' is not currently available ('MQRC_CHANNEL_NOT_AVAILABLE').
According to the MQ docs, these are the possible reasons for this error:
The channel is currently in stopped state.
The channel has been stopped by a channel exit.
The queue manager has reached its maximum allowable limit for this channel from this client.
The queue manager has reached its maximum allowable limit for this channel.
The queue manager has reached its maximum allowable limit for all channels.
However, Spring Integration seems to swallow the exception, and a split second after the MQ channel is available again, other messages get processed as if nothing happened. Of course this results in a message drop, which is not an acceptable scenario.
What would be a way to handle this exception? There does not seem to be an error-channel attribute on jms:outbound-channel-adapter.
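For what it's worth, one way to get hold of send failures on the adapter itself is a request-handler-advice-chain around its handler; a minimal sketch, assuming a Spring Integration version (2.2+) with ExpressionEvaluatingRequestHandlerAdvice, where the jmsFailures channel is illustrative:
<jms:outbound-channel-adapter channel="xyzChannel" destination="xyzQueue">
    <request-handler-advice-chain>
        <beans:bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
            <!-- on a failed send, evaluate this expression and route the result
                 to failureChannel; the exception is still rethrown to the caller -->
            <beans:property name="onFailureExpression" value="payload"/>
            <beans:property name="failureChannel" ref="jmsFailures"/>
        </beans:bean>
    </request-handler-advice-chain>
</jms:outbound-channel-adapter>
<channel id="jmsFailures"/>
A handler subscribed to jmsFailures could then log, park, or re-queue the failed payload.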
I am using ActiveMQ Artemis 2.19.1. I created producer and consumer apps using Spring Boot. I need multiple instances of the consumer to receive all the messages (multicast). I configured a Last Value Queue like this (broker.xml):
<address-settings>
<address-setting match="quote.#">
<max-size-bytes>1000000000</max-size-bytes> <!-- 1GB -->
<address-full-policy>BLOCK</address-full-policy>
<default-last-value-key>symbol</default-last-value-key>
<default-last-value-queue>true</default-last-value-queue>
<default-non-destructive>true</default-non-destructive>
</address-setting>
...
</address-settings>
Sending looks like this and appears to work correctly; "symbol" is the LVQ key.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Service;

@Service
public class DispatcherService {

    @Autowired
    JmsTemplate jmsTemplate;

    // destination name is illustrative; it must match the quote.# address-setting
    private final String jmsQueue = "quote.ABC";

    public void sendMessageA(String message) {
        jmsTemplate.convertAndSend(jmsQueue, message, m -> {
            m.setStringProperty("symbol", "ABC"); // the LVQ key
            return m;
        });
    }
}
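The consumer side is not shown in the question; a hypothetical listener might look like this (Spring Boot's auto-configured listener container honors spring.jms.pub-sub-domain, and the destination name is illustrative):
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Service;

@Service
public class QuoteListener {

    // treated as a queue or a topic depending on spring.jms.pub-sub-domain
    @JmsListener(destination = "quote.ABC")
    public void onQuote(String message) {
        System.out.println("received: " + message);
    }
}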
If the Spring Boot application.properties has:
spring.jms.pub-sub-domain=true
...then all clients receive all messages when published (good). However, the most recent message is not delivered to new clients when they start and subscribe to the topic.
If instead using:
spring.jms.pub-sub-domain=false
I can see that the last message remains in the Last Value Queue (good) and connecting consumers get the last message. However, as messages are published they are distributed round-robin (anycast), not all messages to all consumers.
How can I make sure clients connecting to an LVQ receive the most recent message and then all future messages, not just a round-robin distribution of future messages?
EDIT:
Doing this works. Just leave spring.jms.pub-sub-domain=true and set retroactive-message-count greater than the number of symbols that may be encountered; otherwise some will not be retained:
<address-setting match="quotes">
<retroactive-message-count>100000</retroactive-message-count>
</address-setting>
<address-setting match="*.*.*.quotes.*.retro">
<default-last-value-key>symbol</default-last-value-key>
</address-setting>
It sounds to me like everything is working as designed. I believe your expectations are being thwarted because you're using pub/sub (i.e. JMS topics).
Let me provide a bit of background. When a JMS client creates a subscription on a topic the broker responds by creating a multicast queue on the address with the same name. The queue is named according to the kind of subscription it is. If it is a non-durable subscription then the queue is named with a UUID. If it is a durable subscription then the queue is named according to the subscription name provided by the client and the client ID (if available). When a message is sent to the address it is put in all the multicast queues bound to that address.
Therefore, when a new non-durable subscription is created, a new queue for that subscription is also created, which means that the subscriber will receive none of the messages sent to the topic prior to the creation of the subscription. This is the expected behavior for JMS topics (i.e. normal pub/sub semantics). Also, since the queue for a non-durable subscription is only available while the subscriber is connected, there's no way to enforce LVQ semantics, since any message which arrives in the queue will be immediately dispatched to the consumer. In short, LVQ with JMS topics doesn't make a lot of sense.
The behavior changes when you use a JMS queue because the queue is always there to receive messages. Consumers can come and go as they please while the broker enforces LVQ semantics.
One possible solution would be to create a special "initialization" queue where consumers could initially connect to get the latest information, and after that they could subscribe to the JMS topic to get the pub/sub semantics you need. You could use a divert to make this transparent for the applications sending the messages so they can continue to just send to the JMS topic. Here's a sample configuration:
<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core">
...
<diverts>
<divert name="myDivert">
<address>myTopic</address>
<forwarding-address>initQueue</forwarding-address>
<exclusive>false</exclusive>
</divert>
</diverts>
...
<addresses>
<address name="myTopic">
<multicast/>
</address>
<address name="initQueue">
<anycast>
<queue name="initQueue" last-value-key="symbol" non-destructive="true" />
</anycast>
</address>
...
</addresses>
</core>
</configuration>
Using this configuration, every message sent to the JMS topic myTopic will be transparently sent to initQueue as well. This queue will keep only the most up-to-date messages since it uses last-value semantics. Also, those up-to-date messages will stay in the queue for any subsequent consumer since the queue is non-destructive.
The only difficulty I anticipate here is with Spring, which may not provide you with the flexibility to create the initial queue consumer and then create the topic subscriber. If you used the JMS API directly this would be a relatively simple matter.
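For illustration, that two-step startup with the plain JMS API could look roughly like this (a sketch against the configuration above; process(Message) is a hypothetical handler and error handling is omitted):
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

public class QuoteSubscriber {

    public void subscribe(ConnectionFactory cf) throws Exception {
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // 1. Drain the non-destructive initQueue to pick up the latest message per symbol.
        MessageConsumer init = session.createConsumer(session.createQueue("initQueue"));
        Message m;
        while ((m = init.receive(1000)) != null) {
            process(m);
        }
        init.close();

        // 2. Then subscribe to the topic for live updates (normal pub/sub semantics).
        MessageConsumer live = session.createConsumer(session.createTopic("myTopic"));
        live.setMessageListener(this::process);
    }

    private void process(Message message) {
        // hypothetical handler
    }
}
Note there is a small window between the two steps during which an update could slip by; whether that matters depends on your update rate.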
Another potential solution would be to use retroactive addresses. The main thing to do here would be to ensure the internal ring queues were LVQs. You can do that with the default-last-value-key address-setting. See the documentation for details on the match to use.
I just want to ask whether the following SI configuration is OK from your point of view...
Let's have the following publish-subscribe channel with some subscribers...
<int:publish-subscribe-channel id="channelName" ignore-failures="false"/>
and feed it from two JMS message-driven adapters:
<jms:message-driven-channel-adapter channel="channelName"
destination="JMSQueue1"
connection-factory="JMSQueue1CF1"
concurrent-consumers="1"
max-concurrent-consumers="10"
error-channel="errorChannel"
acknowledge="transacted"
task-executor="mySimpleTaskExecutor1"/>
<jms:message-driven-channel-adapter channel="channelName"
destination="JMSQueue2"
connection-factory="JMSQueue2CF2"
concurrent-consumers="1"
max-concurrent-consumers="10"
error-channel="errorChannel"
acknowledge="transacted"
task-executor="mySimpleTaskExecutor2"/>
If both of these JMS inbound channel adapters have the same output channel ("channelName"), are they going to interfere with each other's processing somehow?
My guess is that every message from both queues is going to be consumed on a different thread, so processing of a message from JMSQueue1 won't be waiting on a message from JMSQueue2.
True or not true?
There are no issues with having multiple producers on the same channel; the threads won't "interfere" with each other.
It's exactly the same as having concurrency in the message-driven adapter (which you have).
I have a service which receives XML messages via an HTTP inbound adapter and then transforms them into text that becomes the content of an email that gets sent out.
I now need to first insert these messages into a JMS queue, send the acknowledgement back as a 200 OK after the message is inserted into the queue, and then carry on with the rest of the processing.
<int-http:inbound-channel-adapter channel="inputChannel"
id="httpInbound"
auto-startup="true"
request-payload-type="java.lang.String"
path="/message"
supported-methods="POST"
error-channel="logger" >
<int-http:request-mapping consumes="application/xml" />
</int-http:inbound-channel-adapter>
<int:chain id="chain" input-channel="inputChannel" >
<int:service-activator ref="mailTransformerBean" method="transform" />
</int:chain>
The service-activator takes care of the processing to convert the XML into an email.
Before that, I need to incorporate a JMS queue into which the received messages will be inserted, with the acknowledgement sent back afterwards. This is so as to retain the messages and retry in case of a failure of the service.
I would like to set this up as a transaction with the JMS queue as an endpoint.
How do I approach this?
If you are seeking something like in-process persistent storage, take a look at the SubscribableJmsChannel:
The channel in the above example will behave much like a normal <channel/> element from the main Spring Integration namespace. It can be referenced by both "input-channel" and "output-channel" attributes of any endpoint. The difference is that this channel is backed by a JMS Queue instance named "exampleQueue".
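In your flow that could be as simple as backing inputChannel with a JMS queue, e.g. (a sketch; the queue name and connection-factory reference are illustrative):
<int-jms:channel id="inputChannel" queue-name="inboundMessages" connection-factory="connectionFactory"/>
With that in place the HTTP adapter's send completes, and the 200 OK goes back, once the message has been handed to the broker; the chain then consumes it from the queue, so it can be redelivered after a failure, subject to the channel's acknowledge/transaction settings.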
My consumer (e.g. the service activator consuming messages coming from ActiveMQ via fromChannel) should be restarted when an exception occurs or ActiveMQ fails. How can this be done for the following Spring Integration context?
<!-- RECEIVER. message driven adapter -> jmsInChannel -> activator. -->
<si:channel id="fromChannel"/>
<int-jms:message-driven-channel-adapter id="messageDrivenAdapter"
channel="fromChannel" destination="forward" connection-factory="connectionFactory"
max-concurrent-consumers="2" auto-startup="true" acknowledge="auto" extract-payload="false" />
<si:service-activator id ="activator"
input-channel="fromChannel"
ref="messageService"
method="process"/>
<bean id="messageService" class="com.ucware.ucpo.forward.jms.MessageService"/>
My first idea was to use a retry advice and add it to the service, but I am not sure whether this is the right solution for unhandled exceptions. I would also like the receiver to restart if the ActiveMQ server is down.
The listener container within the message-driven-channel-adapter will automatically keep trying to reconnect when it loses connectivity to the broker.
If you set acknowledge="transacted" the message will be rolled back on an exception and the broker will resubmit it.
A stateful retry advice would allow you to give up and take some other action after some number of retries (but you can also configure that into ActiveMQ itself where it will send the message to a DLQ after some number of delivery attempts).
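A sketch of such an advice on your service activator, assuming the standard Spring Retry-based advice classes (the deadLetterChannel is illustrative):
<si:service-activator id="activator" input-channel="fromChannel" ref="messageService" method="process">
    <si:request-handler-advice-chain>
        <bean class="org.springframework.integration.handler.advice.RequestHandlerRetryAdvice">
            <!-- stateful: the exception is rethrown on each attempt so the JMS
                 transaction rolls back and the broker redelivers the message -->
            <property name="retryStateGenerator">
                <bean class="org.springframework.integration.handler.advice.SpelExpressionRetryStateGenerator">
                    <constructor-arg value="headers['jms_messageId']"/>
                </bean>
            </property>
            <!-- once retries are exhausted, send an ErrorMessage here instead of rolling back again -->
            <property name="recoveryCallback">
                <bean class="org.springframework.integration.handler.advice.ErrorMessageSendingRecoverer">
                    <constructor-arg ref="deadLetterChannel"/>
                </bean>
            </property>
        </bean>
    </si:request-handler-advice-chain>
</si:service-activator>
<si:channel id="deadLetterChannel"/>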
Reading your post I instantly thought of this video, which gives a good insight into how to monitor and control an SI application through itself.
Additionally, you should have a look at the ApplicationEvent documentation of SI.
Gluing that all together, you could monitor the JMS message adapter with JMX and stop and restart it by sending an ApplicationEvent on issues. Regarding catching exceptions, it depends on which exceptions you actually want to handle. I'd create an errorChannel that receives exceptions thrown by components, and a new service that restarts those components after receiving the errors.
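One way to wire up such a restart service is with a control bus; a sketch (channel and bean names are illustrative):
<si:channel id="controlChannel"/>
<si:control-bus input-channel="controlChannel"/>
<si:service-activator input-channel="errorChannel" ref="restartService" method="onError"/>
The hypothetical restartService would react to an error by sending messages with the payloads "@messageDrivenAdapter.stop()" and then "@messageDrivenAdapter.start()" to controlChannel, bouncing the adapter by its bean id.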
Alternatively, following your idea, you could leverage Spring Retry's capabilities in SI.
My setup is Spring 3 JMS, MVC + WebSphere MQ + WebSphere 7.
<!-- this is the Message Driven POJO (MDP) -->
<bean id="messageListener" class="com.SomeListener" />
<!-- and this is the message listener container -->
<bean id="jmsContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="xxxCF" />
<property name="destination" ref="someQueue" />
<property name="messageListener" ref="messageListener" />
</bean>
When I start up the server, the listener seems to start correctly, since it receives the messages on the queue as I put them there.
However, once I run any simple controller/action that doesn't even have anything to do with JMS, it gives me the messages below over and over...
DefaultMessag W org.springframework.jms.listener.DefaultMessageListenerContainer handleListenerSetupFailure Setup of JMS message listener invoker failed for destination 'queue:///ABCDEF.EFF.OUT?persistence=-1' - trying to recover. Cause: MQJMS2008: failed to open MQ queue ''.; nested exception is com.ibm.mq.MQException: MQJE001: Completion Code '2', Reason '2042'.
DefaultMessag I org.springframework.jms.listener.DefaultMessageListenerContainer refreshConnectionUntilSuccessful Successfully refreshed JMS Connection
ConnectionEve W J2CA0206W: A connection error occurred. To help determine the problem, enable the Diagnose Connection Usage option on the Connection Factory or Data Source.
ConnectionEve A J2CA0056I: The Connection Manager received a fatal connection error from the Resource Adapter for resource JMS$XXXQCF$JMSManagedConnection#2. The exception is: javax.jms.JMSException: MQJMS2008: failed to open MQ queue ''.
ConnectionEve W J2CA0206W: A connection error occurred. To help determine the problem, enable the Diagnose Connection Usage option on the Connection Factory or Data Source.
ConnectionEve A J2CA0056I: The Connection Manager received a fatal connection error from the Resource Adapter for resource jms/XXXQCF. The exception is: javax.jms.JMSException: MQJMS2008: failed to open MQ queue ''.
The original listener seems to be still running correctly... but I think the controller is somehow triggering another connection?
Does anyone know what I should check for or what might cause this issue?
thanks
The 2042 means "object in use". Since there is no concept of exclusive use of queues for message producers, one of your consumers is locking the queue.
This behavior is controlled by the queue definition's DEFSOPT attribute. This is set at the queue manager itself, not in the managed object definitions or your factory options. From the command line, while signed on as mqm (or the platform equivalent if the QMgr is on Windows, iSeries, z/OS, etc.), start runmqsc and issue the following commands to verify and then fix the problem. In my example, the QMgr is PLUTO and the example queue is SYSTEM.DEFAULT.LOCAL.QUEUE.
/home/mqm: runmqsc PLUTO
5724-H72 (C) Copyright IBM Corp. 1994, 2009. ALL RIGHTS RESERVED.
Starting MQSC for queue manager PLUTO.
dis q(system.default.local.queue) defsopt
1 : dis q(system.default.local.queue) defsopt
AMQ8409: Display Queue details.
QUEUE(SYSTEM.DEFAULT.LOCAL.QUEUE) TYPE(QLOCAL)
DEFSOPT(EXCL)
alter ql(system.default.local.queue) defsopt(shared)
2 : alter ql(system.default.local.queue) defsopt(shared)
AMQ8008: WebSphere MQ queue changed.
dis q(system.default.local.queue) defsopt
3 : dis q(system.default.local.queue) defsopt
AMQ8409: Display Queue details.
QUEUE(SYSTEM.DEFAULT.LOCAL.QUEUE) TYPE(QLOCAL)
DEFSOPT(SHARED)
If you display the queue and find that it is already set for DEFSOPT(SHARED) then something must be specifying exclusive use of the queue through the API. That typically means a C or base Java program since these non-JMS APIs have access to low-level WMQ functionality. Those can be a little trickier to diagnose and I usually use a trace or the SupportPac MA0W exit to display the API calls and options used. If this is the case, I'd want to know more about what is meant by "simple controller/action" as noted in the original post.
Finally, if the queue that you are accessing is a remote queue then it will resolve to a transmit queue. The channel will always set a transmit queue to GET(INHIBITED) and acquire an exclusive lock on it. This is consistent with WMQ functionality in that an application can only GET messages from a local queue.
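Incidentally, if you need to see exactly which connections are holding a queue open, runmqsc can display the open handles, for example (using the same example queue):
dis qstatus(system.default.local.queue) type(handle) all
Each handle is listed with details such as the application tag, channel name, and connection name, which usually identifies the process holding the lock.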