Spring Integration (Manual Acking)

I want to create a simple IntegrationFlow with Spring Integration, and I am having difficulties.
I want to create an integration flow that takes messages from a RabbitMQ queue and posts them to a REST endpoint. I want to ack manually depending on the result of the POST I will make.
A typical run of the integration flow would look like this:
I receive a message in the queue.
Spring detects it, takes the message and posts it to the REST endpoint.
The endpoint responds with a 200 code.
Spring Integration acks the message.
If the endpoint responds with an error code I want to be able to nack or retry.
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_JSON);
RestTemplate restTemplate = new RestTemplate();
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
container.setQueueNames(BOUTIQUE_QUEUE_NAME);

/* Get the message from RabbitMQ */
return IntegrationFlows.from(Amqp.inboundAdapter(container))
        .handle(msg -> {
            String msgString = new String((byte[]) msg.getPayload(), StandardCharsets.UTF_8);
            HttpEntity<String> requestBody = new HttpEntity<>(msgString, headers);
            restTemplate.postForObject(ENDPOINT_LOCAL_URL, requestBody, String.class);
            System.out.println(msgString);
        })
        .get();

You don't need to use manual acknowledge mode for this use case; if the REST call returns normally, the container will ack the message; if an exception is thrown, the container will nack the message and it will be redelivered.
If you use manual acks, the Channel and deliveryTag are available in the AmqpHeaders.CHANNEL and AmqpHeaders.DELIVERY_TAG message headers, and you can call basicAck or basicReject on the channel (you will have to add an error channel to the inbound adapter to handle errors).
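For illustration, here is a minimal sketch of the manual-ack variant of the flow above; it reuses the question's restTemplate, headers, container and ENDPOINT_LOCAL_URL, and the bean name and the requeue-on-error choice are illustrative, not prescriptive:

@Bean
public IntegrationFlow manualAckFlow() {
    return IntegrationFlows.from(Amqp.inboundAdapter(container))
            .handle(msg -> {
                // both headers are populated by the inbound adapter in MANUAL mode
                Channel channel = msg.getHeaders().get(AmqpHeaders.CHANNEL, Channel.class);
                Long deliveryTag = msg.getHeaders().get(AmqpHeaders.DELIVERY_TAG, Long.class);
                String body = new String((byte[]) msg.getPayload(), StandardCharsets.UTF_8);
                try {
                    try {
                        restTemplate.postForObject(ENDPOINT_LOCAL_URL,
                                new HttpEntity<>(body, headers), String.class);
                        channel.basicAck(deliveryTag, false);        // 2xx: acknowledge
                    } catch (RestClientException e) {
                        channel.basicNack(deliveryTag, false, true); // error: nack and requeue
                    }
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            })
            .get();
}

Here Channel is com.rabbitmq.client.Channel, and RestClientException is the root of the RestTemplate exception hierarchy.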

Related

Spring Webflux: how to manually write headers and body?

I'm using Spring WebFlux for a project that is intended to work as a pub/sub service: HTTP clients connect to it and wait for events (like push or SSE).
I need to manually write headers and body to the response without using ServerResponse.
I need to do it manually because I'm implementing an SSE server and I must send custom headers into the response before any event actually arrives.
I'm trying to do it this way:
@Bean
RouterFunction<ServerResponse> getStuff() {
    return route(GET("/stuff"),
            request -> {
                final ServerWebExchange exchange = request.exchange();
                final byte[] bytes = "data".getBytes(StandardCharsets.UTF_8);
                final DataBuffer buffer = exchange.getResponse().bufferFactory().wrap(bytes);
                return exchange.getResponse().writeWith(Flux.just(buffer));
            }
    );
}
But it does not work, because writeWith() returns Mono<Void> while getStuff() must return a RouterFunction. Can anybody help me find a way around this?
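One possible way around it (a sketch only, not an answer from this thread): keep the handler returning ServerResponse, set the custom headers on its builder, and prepend a comment-only SSE event so the response, including its headers, is committed before the first real event arrives. The events source and the X-Custom-Header name below are placeholders:

@Bean
RouterFunction<ServerResponse> getStuff() {
    // stand-in for the application's real event source
    Flux<String> events = Flux.just("data");
    // a comment-only event forces the response (and so the headers) to be
    // flushed before any real event is published
    Flux<ServerSentEvent<String>> sse = Flux.concat(
            Flux.just(ServerSentEvent.<String>builder().comment("connected").build()),
            events.map(data -> ServerSentEvent.builder(data).build()));
    return route(GET("/stuff"),
            request -> ServerResponse.ok()
                    .header("X-Custom-Header", "value") // hypothetical custom header
                    .body(BodyInserters.fromServerSentEvents(sse)));
}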

spring amqp (rabbitmq) and sending to DLQ when exception occurs

I am using org.springframework.boot:spring-boot-starter-amqp:2.6.6.
According to the documentation, I set up a @RabbitListener - I use SimpleRabbitListenerContainerFactory and the configuration looks like this:
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ObjectMapper om) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    factory.setConcurrentConsumers(rabbitProperties.getUpdater().getConcurrentConsumers());
    factory.setMaxConcurrentConsumers(rabbitProperties.getUpdater().getMaxConcurrentConsumers());
    factory.setMessageConverter(new Jackson2JsonMessageConverter(om));
    factory.setAutoStartup(rabbitProperties.getUpdater().getAutoStartup());
    factory.setDefaultRequeueRejected(false);
    return factory;
}
The logic of the service is to receive messages from RabbitMQ, contact an external service via its REST API (using RestTemplate), and put some information into the database based on the response (using Spring Data JPA). I implemented this successfully, but during testing I ran into a problem: if any exception is thrown up the stack while a message is being processed, the message is not sent to the configured DLQ but simply hangs in the broker as unacked. Can you please tell me how to tell Spring AMQP that, on any error, the message should be redirected to the DLQ?
The listener itself looks something like this:
@RabbitListener(
        queues = {"${rabbit.updater.consuming.queue.name}"},
        containerFactory = "rabbitListenerContainerFactory"
)
@Override
public void listen(
        @Valid @Payload MessageDTO message,
        Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) Long deliveryTag
) {
    log.debug(DebugMessagesConstants.RECEIVED_MESSAGE_FROM_QUEUE, message, deliveryTag);
    messageUpdater.process(message);
    channel.basicAck(deliveryTag, false);
    log.debug(DebugMessagesConstants.PROCESSED_MESSAGE_FROM_QUEUE, message, deliveryTag);
}
In the RabbitMQ management UI the message shows up as unacked, and it hangs there until the consuming application stops.
See error handling documentation: https://docs.spring.io/spring-amqp/docs/current/reference/html/#annotation-error-handling.
So, just don't use AcknowledgeMode.MANUAL, and rely on the dead letter exchange configuration for those messages which are rejected in case of an error.
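A sketch of that first suggestion: keep the container in its default (AUTO) acknowledge mode, leave defaultRequeueRejected set to false as in the question, and declare the queue with dead-letter settings so rejected messages are routed to the DLQ (the exchange and routing-key names here are placeholders):

@Bean
public Queue updaterQueue() {
    return QueueBuilder.durable("updater.queue")
            .deadLetterExchange("updater.dlx")   // placeholder DLX name
            .deadLetterRoutingKey("updater.dlq") // placeholder routing key
            .build();
}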
Or catch the exception from messageUpdater.process(message) and call channel.basicNack(deliveryTag, false, false).
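A sketch of that second suggestion, based on the listener from the question (it assumes the queue has a dead letter exchange configured, since requeue = false is what routes the rejected message there):

@RabbitListener(
        queues = {"${rabbit.updater.consuming.queue.name}"},
        containerFactory = "rabbitListenerContainerFactory"
)
public void listen(
        @Valid @Payload MessageDTO message,
        Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) long deliveryTag
) throws IOException {
    try {
        messageUpdater.process(message);
        channel.basicAck(deliveryTag, false);
    } catch (Exception e) {
        // requeue = false: the broker dead-letters the message instead of requeuing it
        channel.basicNack(deliveryTag, false, false);
    }
}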

Camel url listener

I want to make a route that will be triggered by a client request.
For example, I have a route http://localhost:8080/get where I have some JSON object.
I want to create a route so that when I send a request to http://localhost:8080/get, the data is sent to ActiveMQ, like an event listener; I want to send to ActiveMQ only when there is a request to that URL.
I have read that I can't use http or http4 in from(), and that makes a problem for me. I have tried going from a timer to the URL and then to ActiveMQ, but that is not what I really need.
This is what I have tried.
@GetMapping(value = "/shit")
public String getIt(@RequestParam(value = "url") String url,
                    @RequestParam(value = "activemq") String activeMq) throws Exception {
    CamelContext camelContext = new DefaultCamelContext();
    RouteBuilder builder = new RouteBuilder() {
        public void configure() {
            from(url).convertBodyTo(String.class)
                    .process(exchange -> {
                        String body = exchange.getIn()
                                .getBody()
                                .toString();
                        System.out.println("The body is: " + body);
                    })
                    .pollEnrich()
                    .simple("activemq://" + activeMq);
        }
    };
    builder.addRoutesToCamelContext(camelContext);
    camelContext.start();
    return "";
}
And I want the route to be active until I stop it.
Yeah, camel-http4 is for producing only; it is not usable as a consumer because it is based on the Apache HTTP client.
But you don't need special things like a timer or an enricher. You can just use another Camel HTTP component that can act as a server, for example camel-jetty.
After a long discussion I finally realized that you would like to "branch off" requests within your other, already existing applications, i.e. you would like to send incoming requests, in addition to processing them, to ActiveMQ.
Unfortunately, you cannot do this from outside your applications, because you cannot see the incoming requests of other applications without modifying those applications.
However, if you can modify your other applications so that they send their payloads to your new Camel application, the route would be quite simple:
from("jetty:http://localhost:[port]/yourApp")
.to("activemq:queue:myQueueName")
This route acts as a web server for /yourApp and sends the message body to a queue on the configured ActiveMQ broker.
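Wired into a Spring application, a minimal sketch could look like this (the port and the queue name are placeholders):

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class RequestToQueueRoute extends RouteBuilder {

    @Override
    public void configure() {
        // camel-jetty acts as the HTTP server; each request body posted to
        // /yourApp is forwarded to the ActiveMQ queue
        from("jetty:http://0.0.0.0:8081/yourApp")
                .convertBodyTo(String.class)
                .to("activemq:queue:myQueueName");
    }
}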

How to retry a failed call to a WebFlux.outboundGateway in Spring Integration

I have a Spring Integration flow defined in the Java DSL syntax. One of my handlers is a WebFlux.outboundGateway. When the remote URI is not accessible, an exception is thrown and sent to the "errorChannel". I'm trying to have the flow retry, but so far with no success (the call is never retried). Here is what my configuration looks like:
@Bean
public IntegrationFlow retriableFlow() {
    return IntegrationFlows
            .from(...)
            .handle(
                    WebFlux.outboundGateway(m ->
                            UriComponentsBuilder.fromUriString(remoteGateway + "/foo/bar")
                                    .build()
                                    .toUri(), webClient)
                            .httpMethod(HttpMethod.POST)
                            .expectedResponseType(String.class)
                            .replyPayloadToFlux(true),
                    e -> e.advice(retryAdvice())
            )
            // [ ... ]
            .get();
}

@Bean
public Advice retryAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(1000);
    backOffPolicy.setMaxInterval(20000);
    retryTemplate.setBackOffPolicy(backOffPolicy);
    advice.setRetryTemplate(retryTemplate);
    return advice;
}
Should I be using something different than the RequestHandlerRetryAdvice? If so, what should it be?
WebFlux is, by definition, async, which means the Mono (reply) is satisfied asynchronously, when the request completes or fails, not on the calling thread. Hence the advice won't help, because the "send" part of the request is always successful.
You would have to perform retries via a flow on the error channel (assigned somewhere near the start of the flow), with, perhaps, a header indicating how many times you have retried. The ErrorMessage has the properties failedMessage and cause; you can resend the failedMessage.
You could turn off async so the calling thread blocks, but that really defeats the whole purpose of using WebFlux.
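For illustration, a sketch of the error-channel approach described above; the retryCount header, the limit of three attempts and "requests" as the original flow's input channel are all assumptions:

@Bean
public IntegrationFlow retryOnErrorFlow() {
    return IntegrationFlows.from("errorChannel")
            .handle((payload, headers) -> {
                // the ErrorMessage payload is a MessagingException carrying the failed message
                Message<?> failed = ((MessagingException) payload).getFailedMessage();
                Integer count = failed.getHeaders().get("retryCount", Integer.class);
                int attempts = (count == null) ? 0 : count;
                if (attempts >= 3) {
                    return null; // no reply: give up (or route to a parking-lot flow instead)
                }
                return MessageBuilder.fromMessage(failed)
                        .setHeader("retryCount", attempts + 1)
                        .build();
            })
            .channel("requests") // re-submit the failed message to the original flow
            .get();
}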

Kafka Spring Integration: Headers not coming for kafka consumer

I am using Spring Integration Kafka for publishing and consuming messages. I see the payload is properly passed from producer to consumer, but the header information is getting overridden somewhere.
@ServiceActivator(inputChannel = "fromKafka")
public void processMessage(Message<?> message) throws InterruptedException,
        ExecutionException {
    try {
        System.out.println("Headers :" + message.getHeaders().toString());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I get the following headers:
Headers :{timestamp=1440013920609, id=f8c645f7-677b-ec32-dad0-a7b79082ef81}
I am constructing the message at the producer end like this:
Message<FeelDBMessage> message = MessageBuilder
        .withPayload(samplePayloadObj)
        .setHeader(KafkaHeaders.MESSAGE_KEY, "key")
        .setHeader(KafkaHeaders.TOPIC, "sampleTopic")
        .build();
// publish the message
publisher.publishMessage(message);
and below is the header info at the producer:
headers={timestamp=1440013914085, id=c4159c1c-2c67-634b-ef8d-3fb026b1172e, kafka_messageKey=key, kafka_topic=sampleTopic}
Any idea why the Headers are overridden by a different value?
That is simply because, by default, the framework uses the immutable GenericMessage.
Any manipulation of an existing message (e.g. MessageBuilder.withPayload) will produce a new GenericMessage instance.
On the other side, Kafka doesn't support any headers abstraction like JMS or AMQP do. That's why the KafkaProducerMessageHandler just does this when it publishes a message to Kafka:
this.kafkaProducerContext.send(topic, partitionId, messageKey, message.getPayload());
As you can see, it doesn't send headers at all. So the other side (the consumer) deals only with the message from the topic as the payload, plus some system options as headers, like topic, partition and messageKey.
In two words: we don't transfer headers over Kafka, because it doesn't support them.
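Since nothing but the payload crosses the wire in the versions this answer refers to, a common workaround is to carry the needed metadata inside the payload itself; a hypothetical envelope around the question's FeelDBMessage:

import java.util.Map;

public class KafkaEnvelope {

    private final Map<String, String> meta; // application-level "headers"
    private final FeelDBMessage body;       // the original payload

    public KafkaEnvelope(Map<String, String> meta, FeelDBMessage body) {
        this.meta = meta;
        this.body = body;
    }

    public Map<String, String> getMeta() {
        return this.meta;
    }

    public FeelDBMessage getBody() {
        return this.body;
    }
}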
