Spring Integration aggregator time expiry issue

The code below accepts 2 messages before proceeding to the outbound channel.
<bean id="timeout"
class="org.springframework.integration.aggregator.TimeoutCountSequenceSizeReleaseStrategy">
<constructor-arg name="threshold" value="2" />
<constructor-arg name="timeout" value="7000" />
</bean>
<int:aggregator ref="updateCreate" input-channel="filteredAIPOutput"
method="handleMessage" release-strategy="releaseStrategyBean" release-strategy-method="timeout">
</int:aggregator>
My use case is to collate all the messages for 10 minutes and then send them to the outbound channel, not release based on the count of messages as shown above.
To implement this time-based functionality, I used the code below:
<int:aggregator ref="updateCreate" input-channel="filteredAIPOutput"
method="handleMessage"
output-channel="outputappendFilenameinHeader" >
</int:aggregator>
<bean id="updateCreate" class="helper.UpdateCreateHelper"/>
I passed 10 messages, and the PojoDateStrategyHelper canRelease method was invoked 10 times.
I tried to implement PojoDateStrategyHelper with time-difference logic, and it works as expected: after 10 minutes the UpdateCreateHelper class is called, but it received only 1 message (the last message). The remaining 9 messages are not seen anywhere. Am I doing anything wrong here? The messages are not being collated.
I suspect there should be something built into SI which can achieve this: if I pass 10 minutes as a parameter, once the 10 minutes expire it should pass all the messages on to the outbound channel.
This is my UpdateCreateHelper.java code :
public Message<?> handleMessage(List<Message<?>> flights){
LOGGER.debug("orderItems list ::"+flights.size()); // this is always printing 1
MessageBuilder<?> messageWithHeader = MessageBuilder.withPayload(flights.get(0).getPayload().toString());
messageWithHeader.setHeader("ftp_filename", "");
return messageWithHeader.build();
}
@CorrelationStrategy
public String correlateBy(@Header("id") String id) {
    return id;
}

@ReleaseStrategy
public boolean canRelease(List<Message<?>> flights) {
    LOGGER.debug("inside canRelease ::" + flights.size()); // This is called for each and every message
    return compareTime(date.getTime(), new Date().getTime());
}
I am new to SI (v3.x). I searched a lot for a time-bound aggregator but couldn't find any useful source; please suggest.
Thanks!

Turn on DEBUG logging to see why you only see one message.
I suspect there should be something built into SI which can achieve this, ...
Prior to version 4.0 (and, by default, after), the aggregator is a completely passive component; the release strategy is only consulted when a new message arrives.
4.0 added group timeout capabilities whereby partial groups can be released (or discarded) after a timeout.
However, with any version, you can configure a MessageGroupStoreReaper to release partially complete groups after some timeout. See the documentation.
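To make the reaper option concrete, here is a hedged Java-config sketch (the bean names, the SimpleMessageStore, and the one-minute schedule are my assumptions, not from the question); the aggregator would also have to reference the same message store (message-store="messageStore") and set send-partial-result-on-expiry="true" so expired partial groups are released downstream rather than discarded.
// Sketch only: expire aggregator groups that are older than 10 minutes.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.store.MessageGroupStore;
import org.springframework.integration.store.MessageGroupStoreReaper;
import org.springframework.integration.store.SimpleMessageStore;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

@Configuration
@EnableScheduling
public class AggregatorExpiryConfig {

    @Bean
    public MessageGroupStore messageStore() {
        // the <int:aggregator> must use this same store (message-store="messageStore")
        return new SimpleMessageStore();
    }

    @Bean
    public MessageGroupStoreReaper reaper(MessageGroupStore messageStore) {
        MessageGroupStoreReaper reaper = new MessageGroupStoreReaper(messageStore);
        reaper.setTimeout(600000); // groups older than 10 minutes are expired
        return reaper;
    }

    @Autowired
    private MessageGroupStoreReaper messageGroupStoreReaper;

    // run the reaper periodically; each run expires (and releases) groups older than the timeout
    @Scheduled(fixedDelay = 60000)
    public void expireOldGroups() {
        messageGroupStoreReaper.run();
    }
}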

private String correlationId = date.toString();
@CorrelationStrategy
public String correlateBy(Message<?> message) {
    // Return the correlation ID which is the timestamp the current window started (all messages should have the same correlation id)
    return "same";
}
Earlier I was returning the header id, which is different from message to message. I hope this solution can help someone; I wasted almost 2 days by ignoring such a small concept.
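If a single constant key ever turns out to be too coarse, a hypothetical variation (not part of the fix above) is to derive the key from the start of the current 10-minute window, so every window forms its own group:
// Hypothetical alternative correlation strategy: one group per 10-minute window.
@CorrelationStrategy
public String correlateBy(Message<?> message) {
    long windowMillis = 10 * 60 * 1000L;
    long windowStart = (System.currentTimeMillis() / windowMillis) * windowMillis;
    return String.valueOf(windowStart);
}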

Related

Spring RabbitMQ Retry policy only on a specific listener

I would like to have a retry policy only on a specific listener that listens to a specific queue (a DLQ in this specific case).
@RabbitListener(queues = "my_queue_dlq", concurrency = "5")
public void listenDLQ(Message dlqMessage) {
    // here implement logic for resend message to original queue (my_queue) for n times with a certain interval, and after n times... push to the Parking Lot Queue
}
But if I am not mistaken, when I specify a retry policy (for example in application.yaml), all @RabbitListeners use it.
I suppose the only way would be to create a dedicated container factory, but it would be identical to the default one with only one extra retry policy... and that doesn't seem like the best thing to do.
Something like that :
@RabbitListener(containerFactory = "container-factory-with-retrypolicy", queues = "myDLQ", concurrency = "5")
public void listenDLQ(Message dlqMessage) {
    // here implement logic for resend message to original queues for n times with a certain interval
}
Do you see alternatives ?
Thank you in advance.
The ListenerContainer instances are registered in the RabbitListenerEndpointRegistry. You can obtain the desired one by its @RabbitListener(id) value. There you can get access to the MessageListener (casting it to AbstractAdaptableMessageListener) and set its retryTemplate property.
Or you can implement a ContainerCustomizer<AbstractMessageListenerContainer>, check its getListenerId() and do the same manipulation against its getMessageListener().
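For completeness, here is a rough sketch of the dedicated-container-factory alternative that the question itself mentions (bean names and retry settings are illustrative only); only listeners that reference this factory get the retry advice:
// Sketch: a separate factory whose containers retry and then reject without requeue.
@Bean
public SimpleRabbitListenerContainerFactory retryContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setAdviceChain(RetryInterceptorBuilder.stateless()
            .maxAttempts(5)
            .backOffOptions(1000, 2.0, 10000) // initial interval, multiplier, max interval
            .recoverer(new RejectAndDontRequeueRecoverer())
            .build());
    return factory;
}

// Only this listener uses the retrying factory; the other @RabbitListeners keep the default one.
@RabbitListener(containerFactory = "retryContainerFactory", queues = "my_queue_dlq", concurrency = "5")
public void listenDLQ(Message dlqMessage) {
    // resend to the original queue, or let the recoverer reject it after maxAttempts
}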

Pick next message after previous fully processed

I'm stuck with this kind of problem. I use Kafka as transport between services. I tried to draw a sequence diagram.
First of all, the planning service gets the main task and handles it, then passes it to a few services. My main problem is: I mustn't pick another main task until, for example, the second service sends its result to Kafka and the planning service has processed that result.
My main listener has this structure:
@KafkaListener(
        containerFactory = "genFactory",
        topics = "${main}")
public void listenStartGeneratorTopic(GeneratorMessage message, Acknowledgment acknowledgment) {
    // do some logic
    // THEN send message to first service, and then in that listener new task sends to second
    sendTaskToQueue(task);
    acknowledgment.acknowledge();
    log.info("All done in method");
}
As I understand it, I need to acknowledge() only after all my logic with the result from the second service is done. So I tried to add a boolean flag in a CompletableFuture, setting it to true when my planning service gets the response from the second service, and doing a blocking get() in the main listener to continue afterwards.
private CompletableFuture<Boolean> isMessageProcessed = new CompletableFuture<>();

@KafkaListener(topics = "${report}")
public void listenReport(ReportMessage reportMessage) {
    isMessageProcessed = CompletableFuture.completedFuture(true);
}

@KafkaListener(
        containerFactory = "genFactory",
        topics = "${main}")
public void listenStartGeneratorTopic(GeneratorMessage message, Acknowledgment acknowledgment) {
    // do some logic
    // THEN send message to first service, and then in that listener new task sends to second
    sendTaskToQueue(task);
    isMessageProcessed.join();
    log.info("message is ready for commit");
    acknowledgment.acknowledge();
}
That looks strange enough, and the idea didn't bring me a result.
So, can you give me advice on what I can do in this situation?
Why not use 6 topics? I believe this is a better separation of duties and might allow you to scale better.
I guess I would check out KStreams as well in your case...
My idea goes like this:
PLANNING SERVICE reads from topic1.start, does work, sends to topic2
FIRST SERVICE reads from topic2, does work, sends to topic3
PLANNING SERVICE (another instance) reads from topic3, does work, writes to topic4
SECOND SERVICE reads from topic4, does work, sends to topic5
PLANNING SERVICE (another instance) reads from topic5 and writes to topic6.done
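A very rough sketch of that chain (topic names, payload types and the kafkaTemplate/plan/enrich helpers are assumptions, not code from the question); no listener blocks waiting for a downstream reply, because each stage just consumes its own topic and forwards the result:
// Sketch of the chained-topic idea in the planning service.
@KafkaListener(topics = "topic1.start", containerFactory = "genFactory")
public void planningStage(GeneratorMessage message) {
    Task task = plan(message);           // do the planning work
    kafkaTemplate.send("topic2", task);  // hand off to FIRST SERVICE and return immediately
}

@KafkaListener(topics = "topic3", containerFactory = "genFactory")
public void afterFirstService(Task task) {
    kafkaTemplate.send("topic4", enrich(task)); // hand off to SECOND SERVICE
}

@KafkaListener(topics = "topic5", containerFactory = "genFactory")
public void afterSecondService(Task task) {
    kafkaTemplate.send("topic6.done", task);    // publish the final result
}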

How to avoid WARN logs for a conditional @StreamListener, if the condition is not met?

With a condition on the @StreamListener annotation, if the condition is not met, DispatchingStreamListenerMessageHandler logs a WARN message with the text:
Cannot find a @StreamListener matching for message with id: [some_id]
For example, imagine we have 3 microservices:
AnimalService - producer application, that is going to produce Dog and Cat messages.
DogService - consumer application, to consume only Dog messages.
CatService - consumer application, to consume only Cat messages.
The AnimalService application sends a message and includes a type header parameter:
public void handleEvent(Animal animal) {
MessageBuilder<Animal> messageBuilder = MessageBuilder.withPayload(animal)
.setHeader("type", animal.getType());
bindings.itemEventOutput().send(messageBuilder.build());
}
Both DogService and CatService are going to consume these messages. Of course, DogService wants to consume only "Dog" messages and CatService only "Cat" messages.
DogService will consume like this:
@StreamListener(target = "animal_events", condition = "headers['type']=='DOG'")
public void handleDogEvents(Message<String> message) {
    // important dog related logic
}
CatService will consume like this:
@StreamListener(target = "animal_events", condition = "headers['type']=='CAT'")
public void handleCatEvents(Message<String> message) {
    // important cat related logic
}
Because DogService does not handle Cat-related messages and vice versa, each service will have a WARN message like this in its logs:
Cannot find a @StreamListener matching for message with id: [some_id]
I found two solutions for how to avoid this, but they are probably not the best ones:
create in DogService another @StreamListener that will capture Cat events and do nothing there, just log a debug message
change the log level for the org.springframework.cloud.stream.binding package to ERROR, but this could lead to missing some important WARN messages in the logs
I'm using spring-cloud-stream 3.0.3.
Is there any other, better option (a configuration property)? Or is there no option other than refactoring my services? Thanks.
The annotation programming model for s-c-stream is all but deprecated. For the past year we've been promoting the functional programming model, which provides a more robust routing mechanism.
I also published a post with more details here. Hope that helps.
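As an illustration of that functional style (class and bean names are made up, and this is only a sketch, not the exact replacement for the listeners above), DogService could expose a plain Consumer bean bound to the shared destination and simply ignore non-Dog messages, so no WARN is logged; spring-cloud-function also offers a routing function driven by an expression if you prefer central routing.
// Sketch: functional consumer for DogService; the binding is configured in properties,
// e.g. spring.cloud.stream.bindings.dogEvents-in-0.destination=animal_events
import java.util.function.Consumer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

@Configuration
public class DogEventsConfiguration {

    @Bean
    public Consumer<Message<String>> dogEvents() {
        return message -> {
            if ("DOG".equals(message.getHeaders().get("type"))) {
                // important dog related logic
            }
            // anything else is silently ignored, with no WARN log
        };
    }
}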
If you use logback as your log framework, you can create a custom log filter, say ContentLogFilter extends Filter, and override its decide method. In that method you need only one line of code: return FilterReply.DENY when the log message contains the target text, otherwise FilterReply.NEUTRAL so the remaining filters are still evaluated.
public class ContentLogFilter extends Filter<ILoggingEvent> {
    @Override
    public FilterReply decide(ILoggingEvent event) {
        return event.getMessage().contains("Cannot find a @StreamListener matching for message")
                ? FilterReply.DENY : FilterReply.NEUTRAL;
    }
}
Then you need to add your custom log filter to each appender in the logback.xml config file:
<filter class="com.filter.ContentLogFilter" />
Please note that your custom log filter should be placed in front of the other filters.
This is my first answer on the platform; I just met this question and spent two days dealing with it, so I hope the answer can help more developers. That's all, thanks!

Consuming from Camel queue every x minutes

I am attempting to implement a way to time my consumer to receive messages from a queue every 30 minutes or so.
For context, I have 20 messages in my error queue until x minutes have passed; then my route consumes all messages on the queue and proceeds to 'sleep' until another 30 minutes has passed.
I'm not sure of the best way to implement this. I've tried Spring @Scheduled, the Camel timer, etc., and none of it is doing what I'm hoping for. I've been trying to get this to work with a route policy, but no dice on the correct functionality; it just seems to consume from the queue immediately.
Is a route policy the correct path, or is there something else to use?
The route that reads from the queue will always read any message as quickly as it can.
One thing you could do is start / stop or suspend the route that consumes the messages, so have this sort of set up:
route 1: error_q_reader, which goes from('jms').
route 2: a timed route that fires every 20 mins
route 2 can use a control bus component to start the route.
from('timer?20mins') // or whatever the timer syntax is...
.to("controlbus:route?routeId=route1&action=start")
The tricky part here is knowing when to stop the route. Do you leave it running for 5 mins? Do you want to stop it once the messages are all consumed? There's probably a way to run another route that can check the queue depth (say every 1 min or so) and, if it's 0, shut down route 1. You might get it to work, but I can assure you this will get messy as you try to deal with a number of async operations.
You could also try something more exotic, like a custom QueueBrowseStrategy which can fire an event to shutdown route 1 when there are no messages on the queue.
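A rough sketch of the timer + control bus shape (queue name, route ids and the 30-minute period are assumptions, not from the question):
// route 1: reads the error queue, but does not start on its own
from("jms:queue:error.queue")
    .routeId("errorQueueReader")
    .autoStartup(false)
    .to("direct:processError");

// route 2: every 30 minutes, start route 1 via the control bus
from("timer:drainErrorQueue?period=1800000")
    .to("controlbus:route?routeId=errorQueueReader&action=start");
Stopping route 1 again is the part you would still have to solve, as described above.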
I built a custom bean to drain a queue and close, but it's not a very elegant solution, and I'd love to find a better one.
public class TriggeredPollingConsumer {

    private ConsumerTemplate consumer;
    private Endpoint consumerEndpoint;
    private String endpointUri;
    private ProducerTemplate producer;
    private static final Logger logger = Logger.getLogger( TriggeredPollingConsumer.class );

    public TriggeredPollingConsumer() {}

    public TriggeredPollingConsumer( ConsumerTemplate consumer, String endpoint, ProducerTemplate producer ) {
        this.consumer = consumer;
        this.endpointUri = endpoint;
        this.producer = producer;
    }

    public void setConsumer( ConsumerTemplate consumer ) {
        this.consumer = consumer;
    }

    public void setProducer( ProducerTemplate producer ) {
        this.producer = producer;
    }

    public void setConsumerEndpoint( Endpoint endpoint ) {
        consumerEndpoint = endpoint;
    }

    public void pollConsumer() throws Exception {
        long count = 0;
        try {
            if ( consumerEndpoint == null ) consumerEndpoint = consumer.getCamelContext().getEndpoint( endpointUri );
            logger.debug( "Consuming: " + consumerEndpoint.getEndpointUri() );
            consumer.start();
            producer.start();
            while ( true ) {
                logger.trace( "Awaiting message: " + ++count );
                Exchange exchange = consumer.receive( consumerEndpoint, 60000 );
                if ( exchange == null ) break;
                logger.trace( "Processing message: " + count );
                producer.send( exchange );
                consumer.doneUoW( exchange );
                logger.trace( "Processed message: " + count );
            }
            producer.stop();
            consumer.stop();
            logger.debug( "Consumed " + (count - 1) + " message" + ( count == 2 ? "." : "s." ) );
        } catch ( Throwable t ) {
            logger.error( "Something went wrong!", t );
            throw t;
        }
    }
}
You configure the bean, and then call the bean method from your timer, and configure a direct route to process the entries from the queue.
from("timer:...")
.beanRef("consumerBean", "pollConsumer");
from("direct:myRoute")
.to(...);
It will then read everything in the queue and stop as soon as no entries arrive within a minute. You might want to reduce the minute, but I found that a second meant that if JMS was a bit slow, it would time out halfway through draining the queue.
I've also been looking at the sjms-batch component and how it might be used with a pollEnrich pattern, but so far I haven't been able to get that to work.
I have solved this by running my application as a CronJob (in a microservices approach) and, to give it the power to shut itself down gracefully, setting the property camel.springboot.duration-max-idle-seconds. Thus, your JMS consumer route stays simple.
Another approach would be to declare a route to control the "lifecycle" (start, sleep and resume) of your JMS consumer route.
I would strongly suggest that you use the first approach.
If you use ActiveMQ you can leverage its scheduler feature.
You can delay the delivery of a message on the broker by simply setting the JMS property AMQ_SCHEDULED_DELAY to the number of milliseconds of the delay. Very easy in the Camel route:
.setHeader("AMQ_SCHEDULED_DELAY", constant(60000))
It is not exactly what you are looking for because it does not drain a queue every 30 minutes, but instead delays every individual message for 30 minutes.
Notice that you have to enable schedulerSupport in your broker configuration; otherwise the delay properties are ignored.
<broker brokerName="localhost" dataDirectory="${activemq.data}" schedulerSupport="true">
...
</broker>
You should consider Aggregation EIP
from(URI_WAITING_QUEUE)
.aggregate(new GroupedExchangeAggregationStrategy())
.constant(true)
.completionInterval(TIMEOUT)
.to(URI_PROCESSING_BATCH_OF_EXCEPTIONS);
This example describes the following rules: all objects arriving in URI_WAITING_QUEUE will be grouped into a List. constant(true) is the grouping condition (i.e. no condition, everything belongs to the same group). And every TIMEOUT period (in millis) all grouped objects are passed to the URI_PROCESSING_BATCH_OF_EXCEPTIONS queue.
So the URI_PROCESSING_BATCH_OF_EXCEPTIONS queue will deal with a List of objects to process. You can apply the Split EIP to split them and process them one by one, as sketched below.
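For instance, a minimal sketch of that follow-up split (endpoint URIs are assumed):
from(URI_PROCESSING_BATCH_OF_EXCEPTIONS)
    .split(body())
    .to("direct:processSingleException");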

Reading and removing Exchange from SEDA queue in Camel

About the SEDA component in Camel: does anybody know if a router removes the Exchange object from the queue when routing it? My router is working properly, but I'm afraid it keeps the Exchange objects in the queue, so my queue will keep growing...
This is my router:
public class MyRouter extends RouteBuilder {
    @Override
    public void configure() {
        from("seda:input")
            .choice()
                .when(someValue)
                    .to("bean:someBean?method=whatever")
                .when(anotherValue)
                    .to("bean:anotherBean?method=whatever");
    }
}
If not, does anybody know how to remove the Exchange object from the queue once it has been routed or processed? (I am routing the messages to some beans in my application, and they are working correctly; the only problem is the queue.)
Another question: what happens if my input Exchange does not match any of the choice conditions? Is it kept in the queue as well?
Thanks a lot in advance.
Edited: after reading Claus' answer, I have added the end() method to the router. But my problem persists, at least when testing the SEDA queue and the router together. I put some messages in the queue, mocking the endpoints (which are receiving the messages), but the queue gets full every time I execute the test. Maybe I am missing something. This is my test:
@Test
public void test() throws Exception {
    setAdviceConditions(); // This method sets the advices for mocking the endpoints
    Message message = createMessage("text", "text", "text"); // Body for the Exchange
    for (int i = 0; i < 10; i++) {
        template.sendBody("seda:aaa?size=10", message);
    }
    template.sendBody("seda:aaa?size=10", message); // java.lang.IllegalStateException: Queue full
}
Thanks!!
Edited again: after checking my router, I realised the problem; I was writing to a different endpoint than the one the router was reading from (facepalm).
Thank you Claus for your answer.
1)
Yes, when an Exchange is routed from a SEDA queue it is removed immediately. The code uses poll() to take the top message from the SEDA queue.
SEDA is in-memory based, so yes, the Exchanges are stored on the SEDA queue in memory. You can configure a queue size so the queue can only hold X messages. See the SEDA docs at: http://camel.apache.org/seda
There are also JMX operations where you can purge the queue (i.e. empty the queue), which you can use from a management console.
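As a small illustration of the size option (values and URIs are assumed, and the blockWhenFull producer option is added from memory, so check the SEDA docs):
// cap the SEDA queue at 1000 exchanges; the producer blocks instead of failing when it is full
from("direct:incoming")
    .to("seda:input?size=1000&blockWhenFull=true");

from("seda:input?size=1000")
    .to("bean:someBean?method=whatever");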
2)
When the choice has no predicates that match, then nothing happens. You can have an otherwise() to do some logic in these cases if you want.
Also note that you can continue the route after the choice, e.g.:
@Override
public void configure() {
    from("seda:input")
        .choice()
            .when(someValue)
                .to("bean:someBean?method=whatever")
            .when(anotherValue)
                .to("bean:anotherBean?method=whatever")
        .end()
        .to("bean:allGoesHere");
}
E.g. in the example above, we have end() to indicate where the choice ends, so after that all the messages go there (also the ones that didn't match any predicates).
