How to use multiple Queues in a single gateway - spring

I have multiple queues and I want to use a single gateway. I have a default channel. Is there a simple way to define multiple source queues?
In this case, "simple" means simplicity in terms of runtime complexity rather than configuration.

Looks like you need RecipientListRouter:
<int:recipient-list-router input-channel="routingChannel">
    <int:recipient channel="queue1"/>
    <int:recipient channel="queue2"/>
</int:recipient-list-router>
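For completeness, a sketch of how a gateway and queue-backed channels could be wired around that router; the gateway interface and the channel capacities here are assumptions, not something from the question:

<!-- Gateway whose default request channel feeds the router -->
<int:gateway id="myGateway"
    service-interface="com.example.MyGateway"
    default-request-channel="routingChannel"/>

<!-- Recipient channels backed by queues -->
<int:channel id="queue1">
    <int:queue capacity="100"/>
</int:channel>
<int:channel id="queue2">
    <int:queue capacity="100"/>
</int:channel>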

Related

Spring AMQP and concurrency

I have a “listener-container” defined like this:
<listener-container concurrency="1" connection-factory="connectionFactory" prefetch="10"
message-converter="jsonMessageConverter"
error-handler="clientErrorHandler"
mismatched-queues-fatal="true"
xmlns="http://www.springframework.org/schema/rabbit">
<listener ref="clientHandler" method="handleMessage" queue-names="#{marketDataBroadcastQueue.name}" />
</listener-container>
I want to process the messages in sequential order, so I need to set concurrency to 1.
But the bean “clientHandler” has more than one “handleMessage” method (with different Java classes as parameters). I can see in the application logs that messages are not processed one by one; several messages are processed in parallel. Can this be due to having multiple methods with the same name that process those messages?
Thanks!
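For reference, the queue referenced by the SpEL expression #{marketDataBroadcastQueue.name} is presumably declared elsewhere in the context, roughly like this (a sketch; an anonymous, broker-named queue is an assumption based on the name lookup):

<!-- Broker-named (anonymous) queue; its generated name is read via SpEL above -->
<queue id="marketDataBroadcastQueue"
    xmlns="http://www.springframework.org/schema/rabbit"/>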

Claim Check Out when using multiple threads in Spring Integration

I have a huge XML payload that comes as the input to my Spring Integration flow, so I am using a claim-check-in transformer instead of a header enricher to retain the payload. I am using an in-memory message store.
Later on in my SI flow, I have a splitter that splits the payload across multiple threads, and each thread invokes a different channel based on one of the payload attributes. I am using a router to achieve this. Each flow (each thread) uses a claim-check-out transformer to retrieve the initial payload and then uses it to build the required response. Each thread produces a response and I don't have to aggregate them, so multiple responses come out of my flow and are then dropped onto a queue.
I cannot remove the message during the check-out because other threads will also try to check out the same message. What is the best way to remove the message from the message store?
Sample configuration
<int:chain input-channel="myInputChannel"
        output-channel="myOutputchannel">
    <int:claim-check-in />
    <int:header-enricher>
        <int:header name="myClaimCheckID" expression="payload"/>
    </int:header-enricher>
</int:chain>
All the other components in the flow are invoked before the splitter:
<int:splitter input-channel="mySplitterChannel" output-channel="myRouterChannel" expression="mySplitExpression">
</int:splitter>
<int:router input-channel="myRouterChannel" expression="routerExpression"
        resolution-required="true">
    <int:mapping value="A" channel="aChannel" />
    <int:mapping value="B" channel="bChannel" />
    <int:mapping value="C" channel="cChannel" />
</int:router>
Each channel has a claim-check-out transformer for the initial payload. So how do I make sure the message is removed after all the threads have finished processing?
When you know you are done with the message you can simply invoke the message store's removeMessage() method. You could use a service activator with
... expression="@store.removeMessage(headers['myClaimCheckID'])" ...
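A minimal sketch of such a cleanup step, assuming the store is declared as a bean and referenced by the claim-check transformers (the bean and channel names here are placeholders):

<!-- In-memory store shared by the claim-check-in/out transformers -->
<bean id="store" class="org.springframework.integration.store.SimpleMessageStore"/>

<!-- Invoked once a flow is completely done with the original payload;
     the removed message is discarded via nullChannel -->
<int:service-activator input-channel="cleanupChannel"
    output-channel="nullChannel"
    expression="@store.removeMessage(headers['myClaimCheckID'])"/>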
However, if you are using an in-memory message store there is really no point in using the claim check pattern.
If you simply promote the payload to a header, it will use no more memory than putting it in a store.
Even if it ends up in multiple messages on multiple threads, it makes no difference since they'll all be pointing to the same object on the heap.
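In that case the chain from the question could drop the claim-check-in entirely and just keep the payload itself in a header (the header name here is arbitrary):

<int:chain input-channel="myInputChannel"
        output-channel="myOutputchannel">
    <!-- stash the original payload; downstream threads all see the same object -->
    <int:header-enricher>
        <int:header name="originalPayload" expression="payload"/>
    </int:header-enricher>
</int:chain>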

Does Spring Integration provide something like a metric-channel-interceptor that can be logged within a flow

I'm looking for an easy way on some of my flows to be able to log when some "event" occurs.
In my simple case an "event" might be whenever any message flows down a channel, or whenever a certain number of messages have flowed down a channel; when that happens, I'd like to print some info to a log file.
I know there currently is a logging-channel-adapter but in the case just described I'd need to be able to tailor my own log message and I'd also need to have some sort of counter or metrics keeping track of things (so the expression on the adapter wouldn't suffice since that grants access to the payload but not info about the channel or flow).
I'm aware that Spring Integration already exposes a lot of metrics to JMX via ManagedResources and MetricType and ManagedMetrics.
I've also watched Russell's "Managing and Monitoring Spring Integration Applications" YouTube video several times: https://www.youtube.com/watch?v=TetfR7ULnA8
and I realize that Spring Integration component metrics can be polled via jmx-attribute-polling-channel-adapter
There are certainly many ways to get what I'm after.
A few examples:
A ServiceActivator that has a counter in it and also a reference to a logger
Hook into the advice-chain of a poller
Poll JMX via jmx-attribute-polling-channel-adapter
It might be useful however to offer a few components that users could put into the middle of a flow that could provide some basic functionality to easily satisfy the use-case I described.
Sample flow might look like:
inbound-channel-adapter -> metric-logging-channel-interceptor -> componentY -> outbound-channel-adapter
At a very high level, such a component might look like a hybrid of the logging-channel-adapter and a ChannelInterceptor, with a few additional fields:
<int:metric-logging-channel-interceptor
    id=""
    order=""
    phase=""
    auto-startup=""
    ref=""
    method=""
    channel=""
    outchannel=""
    log-trigger-expression="(SendCount % 10) = 0"
    level=""
    logger-name=""
    log-full-message=""
    expression="" />
Internally, the class implementing that would need to keep a few basic stats; I think the ones exposed on MessageChannel would be a good start (e.g. SendCount, MaxSendDuration, etc.).
The log-trigger-expression and expression attributes would need access to the internal counters as well.
Please let me know if there is something that already does what I'm describing or if I'm overcomplicating this. If it does not currently exist though I think that being able to quickly drop a component into a flow without having to write a custom ServiceActivator just for logging purposes provides benefit.
Interesting question. You can already do something similar with a selective wire-tap...
<si:publish-subscribe-channel id="seconds">
    <si:interceptors>
        <si:wire-tap channel="thresholdLogger" selector="selector" />
    </si:interceptors>
</si:publish-subscribe-channel>

<bean id="selector" class="org.springframework.integration.filter.ExpressionEvaluatingSelector">
    <constructor-arg
        value="@mbeanServer.getAttribute('org.springframework.integration:type=MessageChannel,name=seconds', 'SendCount') > 5" />
</bean>

<si:logging-channel-adapter id="thresholdLogger" />
There are a couple of things going on here...
The stats are actually held in the MBean for the channel, not the channel itself, so the expression has to get the value via the MBean server (see the export sketch after these notes).
Right now, the wire-tap doesn't support selector-expression, just selector, so I had to use a reference to an expression-evaluating selector. It would be a useful improvement to support selector-expression directly.
Even though the selector in this example acts on the stats for the tapped channel, it can actually reference any MBean.
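For the expression above to resolve, the MBean server and the Spring Integration MBeans have to be exported; roughly like this (a sketch, assuming the context and int-jmx namespaces are declared):

<!-- Expose an MBeanServer as the bean 'mbeanServer' -->
<context:mbean-server id="mbeanServer"/>

<!-- Export Spring Integration components (channels, etc.) as MBeans -->
<int-jmx:mbean-export server="mbeanServer"
    default-domain="org.springframework.integration"/>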
I can see some potential improvements here.
Support selector-expression.
Maintain the stats in the channel itself instead of the MBean so we can just use #channelName.sendCount > 5.
Feel free to open JIRA 'improvement' issue(s).
Hope that helps.

Any concept of a global variable in streams in spring xd?

Scenario: A stream definition in spring xd has the following structure:
jms | filter | transform | hdfs
In the filter module, I fire a query to a database to verify if the current message is applicable for further processing.
When the condition is met, the message passes on to the transform module.
In the transform module, I would like to have access to the query results from the filter module.
Currently, I end up having to fire the query once more inside the transform module to access the same result set.
Is there any form of global variable that can apply during the lifetime of a message passing from source to sink across different modules? This could help reduce the latency of reading from the database.
If this isn't possible, what would be a recommended alternative?
You typically would use a transformer for this, or a header-enricher, to set a message header with the query result; use that header in the filter, and the header will be available for downstream modules, including your transformer.
<int:chain input-channel="input" output-channel="output">
<int:header-enricher..../>
<int:filter ... />
</int:chain>
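A slightly more concrete sketch of that processor configuration; the lookup bean and the header name are assumptions for illustration only:

<int:chain input-channel="input" output-channel="output">
    <!-- run the query once and stash the result in a header -->
    <int:header-enricher>
        <int:header name="queryResult" expression="@lookupService.lookup(payload)"/>
    </int:header-enricher>
    <!-- filter on the stored result; downstream modules can read the same header -->
    <int:filter expression="headers['queryResult'] != null"/>
</int:chain>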
This (passing arbitrary headers) currently only works (out of the box) with the rabbit (and local) transport, or if direct binding is enabled.
When using the redis transport, you have to configure the bus to add your header to those it passes.

Spring Integration - Concurrent Service Activators

I have a queue channel, and a service activator with a poller which reads from that queue. I'd like to have configuration to say "I want 50 threads to poll that queue, and each time you poll and get a message back, on this thread, invoke the service the service-activator points to."
The service has no @Async annotations, but it is stateless and safe to run concurrently.
Will the below do that? Are there other preferred ways of achieving this?
<int:channel id="titles">
<int:queue/>
</int:channel>
<int:service-activator output-channel="resolvedIds" ref="searchService" method="searchOnTitle" input-channel="titles">
<int:poller fixed-delay="100" time-unit="MILLISECONDS" task-executor="taskExecutor"></int:poller>
</int:service-activator>
<task:executor id="taskExecutor" pool-size="50" keep-alive="120" />
Yes, I think it does what you want. Once you introduce a QueueChannel the interaction becomes async; you don't need @Async. If you don't explicitly set up a poller, it will use the default poller.
What you have outlined is the best way to achieve it. You might also consider putting a limit on the queue size, so that a lag in keeping up with the producer doesn't lead to an out-of-memory problem. If a capacity is specified, the send calls on the channel will block once it is full, acting as a throttle.
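For example, capping the titles queue from the question at some bound (the capacity value here is arbitrary) makes senders block once it fills up:

<int:channel id="titles">
    <!-- send() blocks when 1000 messages are waiting, throttling the producer -->
    <int:queue capacity="1000"/>
</int:channel>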
The configuration you have will work as you expect. The only issue is that once you start creating executors and pollers for each endpoint, it becomes difficult to figure out the optimal configuration for the entire application. It is fine to do this kind of optimization for a few specific steps, but not for all the endpoints (nothing in your question suggests that you are doing that; I just thought I would raise it anyway).
