How to poll an AMQP queue using Spring Integration

I have the use case that I need to wait 2 hours before consuming messages from an AMQP (we use Rabbit) queue.
EDIT: To clarify my use case... I need each individual message to wait 2 hours before being read. E.g. Message 1 arrives at 10:00 am and Message 2 arrives at 10:15 am. I need Message 1 to be read at 12:00 pm and Message 2 to be read at 12:15 pm.
We are using Spring Integration 3.x.
The int-amqp:inbound-channel-adapter is message driven and doesn't have a polling option from what I can find.
A couple things I've thought of:
Set auto-startup to false and manually start the inbound channel adapter using a quartz job.
Create my own custom SimpleMessageListenerContainer that is based on polling (not sure how easy this would be)
Configure a "delay queue" in rabbitmq using this method: How to create a delayed queue in RabbitMQ?
EDIT: add 4th option: Use delayer to delay each message for 2 hours: http://docs.spring.io/spring-integration/docs/3.0.2.RELEASE/reference/html/messaging-endpoints-chapter.html#delayer
Any suggestions?

We don't currently have a polling inbound adapter. #1 is easy. For #2, the simplest would be to use a RabbitTemplate and invoke receive() from an inbound-channel-adapter in a POJO.
I would go with #1; you don't need quartz, you can use a simple Spring scheduled task and a control bus to start the adapter.
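For illustration, a rough sketch of #1 under assumed names (amqpInbound, fromRabbit, controlChannel, the noon cron are all made up), using a cron-triggered SpEL inbound adapter as the "scheduled task" that sends the start command to a control bus:
<int-amqp:inbound-channel-adapter id="amqpInbound"
        queue-names="myQueue" channel="fromRabbit"
        connection-factory="connectionFactory" auto-startup="false"/>
<int:control-bus input-channel="controlChannel"/>
<!-- every day at noon, send '@amqpInbound.start()' to the control bus to start the adapter -->
<int:inbound-channel-adapter channel="controlChannel" expression="'@amqpInbound.start()'">
    <int:poller cron="0 0 12 * * *"/>
</int:inbound-channel-adapter>
You would need a similar command (or another scheduled trigger) to stop the adapter again once the window is over.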

Another trick is to use a PollableAmqpChannel:
<int-amqp:channel id="myQueueName" message-driven="false"/>
and provide the <poller> for the subscriber to that channel.
There is no reason to send messages to that channel (you only poll messages from the Rabbit queue with it) and, granted, it looks like an anti-pattern, but it is a hook that avoids any workarounds involving direct RabbitTemplate usage via SpEL.
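For example (the service bean myService and its handle method are assumed, not from the question), the polling subscriber could look like this:
<!-- each poll receives from the Rabbit queue backing the 'myQueueName' channel -->
<int:service-activator input-channel="myQueueName" ref="myService" method="handle">
    <int:poller fixed-delay="5000" max-messages-per-poll="10"/>
</int:service-activator>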
UPDATE
<delayer> can help you, but it depends on your requirements. If you want to keep messages on the RabbitMQ queue until they are due, use the workaround above. But if you just don't want to process a message until some time has elapsed, you can simply 'delay' it for that time.
Don't forget to add a persistent message-store to avoid losing messages if the application fails unexpectedly during that period.
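A minimal sketch of that option, assuming channel names fromRabbit/toService and a JDBC-backed store (all names here are illustrative):
<!-- 7200000 ms = 2 hours -->
<int:delayer id="twoHourDelayer" input-channel="fromRabbit" output-channel="toService"
        default-delay="7200000" message-store="messageStore"/>
<!-- persistent store so in-flight delayed messages survive a restart -->
<bean id="messageStore" class="org.springframework.integration.jdbc.JdbcMessageStore">
    <constructor-arg ref="dataSource"/>
</bean>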

FYI, here is how I solved the issue (using solution #3):
<rabbit:queue name="delayQueue" durable="true">
<rabbit:queue-arguments>
<entry key="x-message-ttl">
<value type="java.lang.Long">7200000</value>
</entry>
<entry key="x-dead-letter-exchange" value="finalDestinationTopic"/>
<entry key="x-dead-letter-routing-key" value="finalDestinationQueue"/>
</rabbit:queue-arguments>
</rabbit:queue>
<rabbit:topic-exchange name="finalDestinationTopic">
<rabbit:bindings>
<rabbit:binding queue="finalDestinationQueue" pattern="finalDestinationQueue"/>
</rabbit:bindings>
</rabbit:topic-exchange>
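For completeness, a sketch of the queue the messages dead-letter into after the 2-hour TTL, plus the adapter that actually consumes them (the channel and connection-factory names here are assumed):
<rabbit:queue name="finalDestinationQueue" durable="true"/>
<!-- consume only from the final destination; messages arrive there once the TTL expires -->
<int-amqp:inbound-channel-adapter queue-names="finalDestinationQueue"
        channel="processChannel" connection-factory="connectionFactory"/>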

Related

Spring Integration - JMS to Kafka message transfer - End to End Transaction

I am consuming messages from JMS ActiveMQ using the following code:
<jms:message-driven-channel-adapter
id="helloJMSAdapater" destination="helloJMSQueue" connection-factory="jmsConnectionfactory"
channel="helloChannel" extract-payload="true" />
<integration:channel id="helloChannel" />
My requirement is to consume from here and post it to Kafka outbound adapter. Using the below config:
<int-kafka:outbound-channel-adapter
id="kafkaOutboundChannelAdapter"
kafka-producer-context-ref="kafkaProducerContext"
channel="inputToKafka">
</int-kafka:outbound-channel-adapter>
Here is what I want to achieve:
My queue is a durable topic, and I don't want to acknowledge the records unless they are successfully published to Kafka. In short, I want transactional behaviour from consuming the message from JMS to publishing it to Kafka.
I noticed that my messages are immediately dequeued, and if processing encounters an exception, I am unable to reprocess them. I don't want that to happen.
Also, when Kafka encounters an issue, I want the message to be returned to some method so that I can persist the failed message and, as said before, not acknowledge it.
I am really struggling to get it to work. Can someone please help me out?
You really can add a transaction-manager to the <jms:message-driven-channel-adapter> to start a transaction.
When the <int-kafka:outbound-channel-adapter> throws an exception, it causes the transaction to be rolled back and therefore the message will be requeued.
If you are interested in persisting the errors, there is an error-channel option on the <jms:message-driven-channel-adapter>, but you still have to re-throw the exception to let the transaction roll back.
To make all of that work, you should be sure that there is only a single thread from the beginning to the end: no <queue> or executor channel in the flow.
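A hedged sketch of that wiring, reusing the ids from the question; the jmsTransactionManager bean and the jmsErrorChannel name are assumptions, and any error flow you attach must re-throw for the rollback to happen:
<bean id="jmsTransactionManager" class="org.springframework.jms.connection.JmsTransactionManager">
    <property name="connectionFactory" ref="jmsConnectionfactory"/>
</bean>
<!-- same adapter as above, now transactional; keep helloChannel and everything up to
     inputToKafka as direct channels so the Kafka send runs inside the JMS transaction -->
<jms:message-driven-channel-adapter
    id="helloJMSAdapater" destination="helloJMSQueue" connection-factory="jmsConnectionfactory"
    channel="helloChannel" extract-payload="true"
    transaction-manager="jmsTransactionManager" error-channel="jmsErrorChannel"/>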
Also, it isn't clear why you are still using such an old Apache Kafka version...

How to check whether Active MQ is up in continuous intervals in mule esb?

I have a message in one queue and need to send it to another (destination) queue; both are ActiveMQ. When the destination is down, the message stays in the source queue. I need to check at regular intervals whether the destination is up or not; if it is up, I need to send the message to the destination. I'm having difficulty checking the destination's availability.
Please help me. Thanks.
I think in general, this kind of problem is best solved using transactions.
I'm assuming you are working with two different ActiveMQ brokers, which leads to the chance that the destination queue is not available.
In the simplest case, you could accomplish your goal this way:
Start JMS Transaction
Receive message from queue A on broker 1
Do any required logic and/or transformation
Publish message to queue B on broker 2
If successful, commit your JMS transaction
If not, rollback your JMS transaction
Example:
<flow name="simpleExample">
<jms:inbound-endpoint queue="queueA" connector-ref="broker1">
<jms:transaction action="ALWAYS_BEGIN"/>
</jms:inbound-endpoint>
<flow-ref name="doLogic" />
<jms:outbound-endpoint queue="queueB" connector-ref="broker2">
<jms:transaction action="ALWAYS_JOIN" />
</jms:outbound-endpoint>
</flow>
When the rollback occurs, this method will immediately retry. If you want to control how long to wait before trying again, configure the redelivery policy on the ActiveMQ connector for Broker 1.
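For example (attribute values are illustrative, and the exact syntax may vary across Mule 3 / ActiveMQ versions), the Broker 1 connector could throttle redeliveries via ActiveMQ's redelivery-policy connection options on the broker URL:
<jms:activemq-connector name="broker1"
    brokerURL="tcp://broker1-host:61616?jms.redeliveryPolicy.initialRedeliveryDelay=60000&amp;jms.redeliveryPolicy.maximumRedeliveries=10"/>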

Spring Mqtt - Publish messages to multiple topics programmatically

How can I publish messages with different topics programmatically?
<mqtt:outbound-channel-adapter id="mqttOut"
auto-startup="true"
client-id="foo"
url="tcp://localhost:1883"
client-factory="clientFactory"
default-qos="0"
default-retained="false"
default-topic="bar"
async="true"
async-events="true" />
I tried Spring integration MQTT publish & subscribe to multiple topics, but was not able to configure it.
I also tried MqttPahoMessageHandlerAdapter, which has a publish() method, but it is protected.
Going with org.eclipse.paho.client.mqttv3.MqttAsyncClient and org.eclipse.paho.client.mqttv3.MqttCallback is very easy, but I would like to stick with Spring all the way.
I would appreciate it if somebody could point me in the right direction.
Declare a <publish-subscribe-channel id="toMqtt" />; set it as the channel attribute on each outbound channel adapter; the message will be sent to each adapter.
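A sketch of that approach, reusing the adapter attributes from the question (the second adapter's id, client-id and topic are made up):
<int:publish-subscribe-channel id="toMqtt"/>
<!-- every message sent to 'toMqtt' goes to both adapters, i.e. both topics -->
<mqtt:outbound-channel-adapter id="mqttOut" channel="toMqtt"
    client-id="foo" url="tcp://localhost:1883"
    client-factory="clientFactory" default-topic="bar"/>
<mqtt:outbound-channel-adapter id="mqttOut2" channel="toMqtt"
    client-id="foo2" url="tcp://localhost:1883"
    client-factory="clientFactory" default-topic="bar2"/>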
You can do that with Spring Integration anyway! With many EIP component implementations and the power of Spring on board (injection, SpEL, etc.), plus a bit of imagination, we can meet almost any end-application requirement without any Java code.
So, <mqtt:outbound-channel-adapter> allows the topic to be determined at runtime: instead of relying on default-topic, you supply the MqttHeaders.TOPIC message header.
If you have a requirement to send the same message to several topics, you just build a copy of the message for each topic. The <splitter> can help here:
<int:splitter input-channel="enricheMessage" output-channel="sendMessage" apply-sequence="false">
<int-groovy:script>
['topic1', 'topic2', 'topic3'].collect {
org.springframework.integration.support.MessageBuilder.withPayload(payload)
.copyHeaders(headers)
.setHeader(org.springframework.integration.mqtt.support.MqttHeaders.TOPIC, it)
.build()
}
</int-groovy:script>
</int:splitter>
sendMessage can be an ExecutorChannel to achieve parallel publishing.
UPDATE
You can achieve the same iteration and message enrichment logic with similar Java code using ref and method on <splitter>.
Of course, we could do it even with SpEL, but it would look a bit complex with a collection projection.

Spring integration service-interface gateway reply channel as shared Pub/Sub

This is similar to Intermittent BridgeHandler & PublishSubscribeChannel call when gateways' reply channel is pub/sub but the scenario is different in that the reply-channel is not getting "lost". The question is what is the best resolution for my scenario.
I am using Spring integration to launch Spring batch jobs. I have a number of input routes e.g. file polling and http requests. These all route to a batch-int Job Launching Gateway. The referenced Job Launcher has a task executor so job launches are asynchronous. This gateway replies on a specified channel.
<int:gateway service-interface="c.c.c.etl.gateway.JobSubmissionService" id="jobSubmissionService" default-request-channel="jobLauchInputChannel" default-reply-channel="jobLaunchReplyChannel">
</int:gateway>
<int:bridge id="filePollerBridge" input-channel="filePollerOutputChannel" output-channel="jobLauchInputChannel" />
<batch-int:job-launching-gateway request-channel="jobLauchInputChannel" reply-channel="jobLaunchReplyChannel" job-launcher="jobLauncher">
</batch-int:job-launching-gateway>
<int:publish-subscribe-channel id="jobLaunchReplyChannel" />
<int:bridge id="jobLaunchReplyChannelBridge" input-channel="jobLaunchReplyChannel" output-channel="loggingChannel">
</int:bridge>
This specified channel 'jobLaunchReplyChannel' is pub/sub and has a logger listening to it. This channel is also used as the reply channel for a service-interface gateway.
The issue I am having is that when jobs are requested via sources other than the gateway (e.g. the poller), the bridge that is added by the gateway throws an exception because no reply channel is set on the replies.
I have resolved this by adding a header to messages sent via the gateway and filtering out messages only with this header to a new 'gatewayReplyChannel'.
<int:gateway service-interface="c.c.c.etl.gateway.JobSubmissionService" id="jobSubmissionService" default-request-channel="httpJobRequestInputChannel" default-reply-channel="jobSubmissionServiceReplyChannel">
<int:default-header name="isJobSubmissionServiceMessage" value="true" />
</int:gateway>
<int:channel id="jobSubmissionServiceReplyChannel"></int:channel>
<int:filter id="jobSubmissionServiceReplyChannelFilter" input-channel="jobLaunchReplyChannel" expression="headers.get('isJobSubmissionServiceMessage') == null ? false : headers.get('isJobSubmissionServiceMessage')" output-channel="jobSubmissionServiceReplyChannel"
throw-exception-on-rejection="false" />
Is there a better way to do this?
Eh... that's an interesting issue. The main cause is that MessagingGatewaySupport creates a replyMessageCorrelator endpoint for an internal BridgeHandler.
And we really do get that strange behaviour when we send a message to the reply-channel directly: that BridgeHandler tries to send the message to the replyChannel from the headers.
We really can't do anything to prevent that logic, and we can't protect that explicit reply-channel from direct messages.
I think your solution is correct. Another way to overcome it is to add a replyChannel header at the start of the other flow (file polling in your case), using something like this:
<header-enricher>
<reply-channel ref="nullChannel"/>
</header-enricher>
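In your case that enricher could simply replace the existing bridge from the file poller, for example (channel names taken from your configuration):
<int:header-enricher input-channel="filePollerOutputChannel" output-channel="jobLauchInputChannel">
    <!-- replies from the job-launching gateway for poller-initiated jobs go nowhere -->
    <int:reply-channel ref="nullChannel"/>
</int:header-enricher>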
Feel free to raise a JIRA issue on the matter and we'll take a look at what we can do. At least we can document these specifics.

How to restart the message consumer in Spring Integration?

My consumer, e.g. the service activator that is consuming messages coming from ActiveMQ via fromChannel, should be restarted when an exception occurs or ActiveMQ fails. How can I do that for the following Spring Integration context?
<!-- RECEIVER. message driven adapter -> jmsInChannel -> activator. -->
<si:channel id="fromChannel"/>
<int-jms:message-driven-channel-adapter id="messageDrivenAdapter"
channel="fromChannel" destination="forward" connection-factory="connectionFactory"
max-concurrent-consumers="2" auto-startup="true" acknowledge="auto" extract-payload="false" />
<si:service-activator id ="activator"
input-channel="fromChannel"
ref="messageService"
method="process"/>
<bean id="messageService" class="com.ucware.ucpo.forward.jms.MessageService"/>
My first idea was to use a retry advice and add it to the service, but I am not sure if this is the right solution for unhandled exceptions. I would also like the receiver to restart if the ActiveMQ server is down.
The listener container within the message-driven-channel-adapter will automatically keep trying to reconnect when it loses connectivity to the broker.
If you set acknowledge="transacted", the message will be rolled back on an exception and the broker will redeliver it.
A stateful retry advice would allow you to give up and take some other action after some number of retries (but you can also configure that into ActiveMQ itself where it will send the message to a DLQ after some number of delivery attempts).
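For example, the adapter from your configuration with transacted acknowledgement, so that an exception thrown from messageService triggers redelivery (everything else unchanged):
<int-jms:message-driven-channel-adapter id="messageDrivenAdapter"
    channel="fromChannel" destination="forward" connection-factory="connectionFactory"
    max-concurrent-consumers="2" auto-startup="true"
    acknowledge="transacted" extract-payload="false"/>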
Reading your post, I instantly thought of this video, which gives good insight into how to monitor and control an SI application through itself.
Additionally, you should have a look at the ApplicationEvent documentation of SI.
Gluing that all together, you could monitor the JMS message adapter with JMX and stop and restart it by sending an ApplicationEvent when issues arise. Regarding catching exceptions, it depends on which exceptions you actually want to handle. I'd create an errorChannel that receives exceptions thrown by components and a service that restarts those components after receiving errors.
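One possible (hypothetical) shape of that restart service, swapping in a control bus instead of hand-written lifecycle code; the error flow simply emits a start command for the adapter:
<si:control-bus input-channel="controlChannel"/>
<!-- on an error, send a control-bus command that (re)starts the message-driven adapter -->
<si:service-activator input-channel="errorChannel" output-channel="controlChannel"
    expression="'@messageDrivenAdapter.start()'"/>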
Following your idea, you could also leverage Spring Retry's capabilities in SI.
