Spring Integration - Concurrent access to SFTP outbound gateway GET w/ STREAM and accessing the response from Queue Channel

Context
Per the Spring docs (https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#using-the-get-command), the GET command on the SFTP outbound gateway with the STREAM option returns an input stream corresponding to the file named in the message sent to the input channel.
We could configure an integration flow similar to the recommendation at
https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#configuring-with-the-java-dsl-3
@Bean
public QueueChannelSpec remoteFileOutputChannel() {
    return MessageChannels.queue();
}

@Bean
public IntegrationFlow sftpGetFlow() {
    return IntegrationFlows.from("sftpGetInputChannel")
            .handle(Sftp.outboundGateway(sftpSessionFactory(),
                        AbstractRemoteFileOutboundGateway.Command.GET, "payload")
                    .options(AbstractRemoteFileOutboundGateway.Option.STREAM))
            .channel("remoteFileOutputChannel")
            .get();
}
On the caller side, I plan to obtain the input stream in a way similar to the edits in this question: No Messages When Obtaining Input Stream from SFTP Outbound Gateway.
public InputStream openFileStream(final int retryCount, final String filename, final String directory)
        throws Exception {
    InputStream is = null;
    for (int i = 1; i <= retryCount; ++i) {
        if (sftpGetInputChannel.send(MessageBuilder.withPayload(directory + "/" + filename).build(), ftpTimeout)) {
            is = getInputStream();
            if (is != null) {
                break;
            } else {
                logger.info("Failed to obtain input stream so attempting retry " + i + " of " + retryCount);
                Thread.sleep(ftpTimeout);
            }
        }
    }
    return is;
}
private InputStream getInputStream() {
    Message<?> msgs = stream.receive(ftpTimeout);
    if (msgs == null) {
        return null;
    }
    return (InputStream) msgs.getPayload();
}
I would like to pass the input stream to the item reader that is part of a Spring Batch job. The job would read from the input stream and close the stream/session upon completion.
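Per my reading of the streaming documentation, the session backing the stream is exposed in the closeableResource header of the reply, so the job's completion step would release it with something along these lines (a rough sketch; "message" here is the reply Message<?> that carried the InputStream payload):
    // Rough sketch: release the SFTP session once the InputStream has been fully consumed.
    Closeable closeable = new IntegrationMessageHeaderAccessor(message).getCloseableResource();
    if (closeable != null) {
        closeable.close();
    }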
Question
The response from the SFTP outbound gateway is sent to a queue channel. If there are concurrent GET requests to the gateway from multiple jobs/clients, how does the consumer pick the appropriate input stream from the blocking queue in the queue channel? The only solution I could think of:
Mark getInputStream as synchronized. This would ensure that only one consumer can send commands to the outbound gateway. Since all we are doing is returning a reference to the input stream, it is not a huge performance bottleneck. We could also set the capacity of the queue channel as an additional measure.
This is not an ideal solution because it is very much possible for other devs to bypass the synchronized method here and interact with the outbound gateway. We run the risk of fetching an incorrect stream.
The underlying SFTP client implementation used by Spring doesn't impose any such restrictions so I am seeking a Spring integration solution that can overcome this problem.
Does the GET with STREAM return any headers with the input file name from the payload that the client could use to make sure the stream corresponds to the requested file? This would require peeking into and inspecting the queue before popping a message off it. Not ideal, I think.
Is there a way to pass the response queue channel name as a parameter from the caller?
Appreciate any insights.

Yes, simply set the replyChannel header with a new QueueChannel for each request and terminate the flow with the gateway; if there is no output channel, the outbound gateway sends the reply to the header channel.
That is similar to how inbound gateways work.
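A minimal sketch of that approach, reusing the names from the question and assuming the flow is changed to end at the gateway (i.e. the .channel("remoteFileOutputChannel") line is removed):
    // Sketch: a private reply channel per request, so concurrent callers can
    // never pick up each other's streams from a shared queue channel.
    public InputStream openFileStream(final String directory, final String filename) {
        QueueChannel replyChannel = new QueueChannel();
        Message<String> request = MessageBuilder.withPayload(directory + "/" + filename)
                .setReplyChannel(replyChannel) // the gateway replies here when it has no output channel
                .build();
        if (sftpGetInputChannel.send(request, ftpTimeout)) {
            Message<?> reply = replyChannel.receive(ftpTimeout);
            return reply == null ? null : (InputStream) reply.getPayload();
        }
        return null;
    }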

Related

Is there a way to send messages to a topic only when received from an external system?

I have a listener for UDP packets; after receiving and processing that data, I want to stream it to a topic (currently Kafka).
I have managed to run a sample program of Spring Cloud Stream Kafka Binder producer.
@Bean
public Supplier<PacketDataPojo> data() {
    return () -> {
        PacketDataPojo pdp = new PacketDataPojo(UUID.randomUUID().toString());
        log.info("Current data {}", pdp);
        return pdp;
    };
}
application.properties
spring.cloud.function.definition=data
spring.cloud.stream.bindings.data-out-0.destination=data-stream
Now, since the Supplier generates data at a scheduled interval, how can I make it stream data only after packet processing is completed?
Thanks
I believe the StreamBridge will do the trick for you - https://docs.spring.io/spring-cloud-stream/docs/3.1.5/reference/html/spring-cloud-stream.html#_sending_arbitrary_data_to_an_output_e_g_foreign_event_driven_sources
So you may not need a Supplier for your case.
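A minimal sketch of that approach (the class and method names are assumptions; with StreamBridge, the output binding is created on the fly and defaults to the "data-stream" destination used here):
    // Sketch: publish an event only when a UDP packet has finished processing,
    // instead of relying on a polled Supplier.
    @Component
    public class PacketPublisher {

        private final StreamBridge streamBridge;

        public PacketPublisher(StreamBridge streamBridge) {
            this.streamBridge = streamBridge;
        }

        // Call this from the UDP listener once processing is complete.
        public void publish(PacketDataPojo pdp) {
            streamBridge.send("data-stream", pdp);
        }
    }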

How to consume message from RabbitMQ dead letter queue one by one

The requirement is to process messages from a dead letter queue via an exposed REST service API (Spring Boot).
Once the REST service is called, one message should be consumed from the DL queue and published to the main queue again for processing.
@RabbitListener(queues = "QUEUE_NAME") consumes the message immediately, which is not what the scenario requires; the message should only be consumed when the REST service API is called.
Any suggestion or solution?
I do not think @RabbitListener will help here.
However, you could implement this behaviour manually.
Spring Boot automatically creates a RabbitMQ connection factory, so you could use it. When the HTTP call is made, just read a single message from the queue manually; you can use basic.get to synchronously fetch just one message:
@Autowired
private ConnectionFactory factory; // com.rabbitmq.client.ConnectionFactory, as used by the calls below

void readSingleMessage() throws Exception {
    Connection connection = null;
    Channel channel = null;
    try {
        connection = factory.newConnection();
        channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, true, false, false, null);
        GetResponse response = channel.basicGet(QUEUE_NAME, true);
        if (response != null) {
            // Do something with the message, e.g. response.getBody()
        }
    } finally {
        // Check for null before closing
        if (channel != null) {
            channel.close();
        }
        if (connection != null) {
            connection.close();
        }
    }
}
If you are using Spring, you can avoid all the boilerplate in the other answer by using RabbitTemplate.receive(...).
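For example, a minimal sketch (the queue names, the timeout, and the wired-in RabbitTemplate are assumptions):
    // Sketch: pull a single message; null is returned if nothing arrives within the timeout.
    Message message = rabbitTemplate.receive("DLQ_NAME", 5000);
    if (message != null) {
        // Republish to the main queue for reprocessing
        rabbitTemplate.send("MAIN_QUEUE_NAME", message);
    }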
EDIT
To manually ack/reject the message, use the execute method instead.
template.execute(channel -> {
    GetResponse got = channel.basicGet("foo", false);
    // ...
    channel.basicAck(got.getEnvelope().getDeliveryTag(), false);
    return null;
});
It's a bit lower level, but again, most of the boilerplate is taken care of for you.

Camel JMS Asynchronous Request Reply

I am trying to implement a Camel route that reads a request message from a remote system's queue (System.A.out). The route looks at the message body and dynamically routes it to another system's in queue (System.B.in). The route is then complete and waits for the next message on its from queue (currently it blocks and waits for a response on a temp queue).
System.B reads its in queue (System.B.in; it is not always a Camel route), processes the message, and drops a response on its out queue (System.B.out). System.B uses the JMSMessageID from the request message as the JMSCorrelationID on its response; that is all it keeps from the request.
A Camel route (similar to the System.A.out route, but listening on System.B.out) picks up the response message and, using the JMSCorrelationID (the request would not have had a JMSCorrelationID and thus would be routed by message body), finds the request's JMSReplyTo queue (System.A.in) and drops the response on System.A's in queue for System.A to process.
I am using Spring Boot and Camel 2.18.3; the message broker is IBM MQ version 8.
My route looks like this:
@Override
public void configure() throws Exception {
    //@formatter:off
    Predicate validRoute = header("route-valid").isEqualTo(true);
    Predicate inValidRoute = header("route-valid").isEqualTo(false);
    Predicate splitRoute = header("route-split").isEqualTo(true);
    Predicate singleRoute = header("route-split").isEqualTo(false);
    Predicate validSplitRoute = PredicateBuilder.and(validRoute, splitRoute);
    Predicate validSingelRoute = PredicateBuilder.and(validRoute, singleRoute);

    from(endpoint(incomingURI)).routeId(routeId)
        .process(exchange -> {
            exchange.getIn().setHeader("route-source", format("%s-%s", incomingURI, routeId));
        })
        .to(endpoint(format("bean:evaluateIncomingMessageService?method=routeMessage(*, %s)", replyToURI)))
        .choice()
            .when(validSingelRoute)
                .log(DEBUG, "Creating a Single route")
                .to(endpoint("bean:messageCoalitionService?method=saveInstruction(*)"))
                .setExchangePattern(ExchangePattern.InOut)
                .toD("${header.route-recipients}")
            .when(inValidRoute)
                .log(DEBUG, "a.b.test", format("Incoming message [%s] failed evaluation: %s", incomingURI, body()))
                .to(endpoint(deadLetterURI))
                .routeId(format("%s-%s", incomingURI, routeId))
            .when(validSplitRoute)
                .log(DEBUG, "Creating a Split route")
                .to(endpoint("bean:messageCoalitionService?method=saveInstructions(*)"))
                .setExchangePattern(ExchangePattern.InOut)
                .multicast()
                .toD("${header.route-recipients}").endChoice()
            .otherwise()
                .log(DEBUG, "a.b.test", format("Incoming message [%s] failed evaluation: %s", incomingURI, body()))
                .to(endpoint(deadLetterURI))
                .routeId(format("%s-%s", incomingURI, routeId));
}
The Spring bean evaluateIncomingMessageService decides whether the message is a request (no correlation ID) or a response, and sets routing headers for the request. I hoped Camel would automatically route responses to the request's JMSReplyTo queue; if not, how can one do this?
replyToURI is configured in the Camel route builder; if the route listens on System.A.out, its replyToURI will always be System.A.in.
evaluateIncomingMessageService.routeMessage looks like this:
public void routeMessage(final Exchange exchange, final String replyToURI) {
    String correlationId = exchange.getIn().getHeader("JMSCorrelationID", String.class);
    if (correlationId != null) {
        log.debug("Processing Message Response with JMSCorrelationID [{}]", correlationId);
        exchange.getIn().setHeader("JMSReplyTo", replyToURI);
    } else {
        // Request messages have NO correlationId
        log.debug("Processing Message Request with MessageID [{}] and JMSMessageID: [{}]",
                exchange.getIn().getMessageId(),
                exchange.getIn().getHeader("JMSMessageID") != null
                        ? exchange.getIn().getHeader("JMSMessageID").toString()
                        : exchange.getIn().getMessageId());
        String message = exchange.getIn().getBody(String.class);
        Set<ContentBasedRoute> validRoutes = contentBasedRouting
                .stream()
                .filter(routeEntity -> Pattern.compile(routeEntity.getRegularExpression(), DOTALL)
                        .matcher(message).matches())
                .collect(Collectors.toSet());
        if (validRoutes.isEmpty()) {
            log.warn("No valid routes found for message: [{}] ", message);
            exchange.getIn().setHeader("route-valid", false);
        } else {
            HashMap<String, ContentBasedRoute> uniqueRoutes = new HashMap<>();
            validRoutes.stream().forEach(route -> uniqueRoutes.putIfAbsent(route.getDestination(), route));
            exchange.getIn().setHeader("route-valid", true);
            exchange.getIn().setHeader("route-count", uniqueRoutes.size());
            exchange.getIn().setHeader("JMSReplyTo", replyToURI);
            //if (exchange.getIn().getHeader("JMSMessageID") == null) {
            //    exchange.getIn().setHeader("JMSMessageID", exchange.getIn().getMessageId());
            //}
            if (uniqueRoutes.size() > 1) {
                log.debug("Building a split route");
                StringBuilder routes = new StringBuilder();
                StringBuilder routeIds = new StringBuilder();
                StringBuilder routeRegex = new StringBuilder();
                uniqueRoutes.keySet().stream().forEach(i -> routes.append(i).append(","));
                uniqueRoutes.values().stream().forEach(j -> routeIds.append(j.getRouteId()).append(","));
                uniqueRoutes.values().stream().forEach(k -> routeRegex.append(k.getRegularExpression()).append(","));
                routes.deleteCharAt(routes.length() - 1);
                routeIds.deleteCharAt(routeIds.length() - 1);
                routeRegex.deleteCharAt(routeRegex.length() - 1);
                exchange.getIn().setHeader("route-split", true);
                exchange.getIn().setHeader("route-uuid", routeIds.toString());
                exchange.getIn().setHeader("route-regex", routeRegex.toString());
                exchange.getIn().setHeader("route-recipients", routes.toString());
            } else {
                exchange.getIn().setHeader("route-split", false);
                exchange.getIn().setHeader("route-uuid", uniqueRoutes.values().iterator().next().getRouteId());
                exchange.getIn().setHeader("route-regex", uniqueRoutes.values().iterator().next().getRegularExpression());
                exchange.getIn().setHeader("route-recipients", uniqueRoutes.values().iterator().next().getDestination());
            }
        }
    }
}
The bean messageCoalitionService simply saves the message body and headers so the messages can be reproduced and the system audited.
I am not sure whether I have gone about this incorrectly. Should I be using the Camel async API, or do I need pipes to implement this? This pattern looks close to what I need: http://camel.apache.org/async.html (Asynchronous Request Reply). Any help would be greatly appreciated.
In the end I implemented the above using Spring Integration. I was not able to find a way to retrieve the message ID of the sent message once the Camel route had passed it on, which meant I had no way of tracking the correlation ID when a response came back. Using Camel's InOut pattern caused Camel to block and wait for a response, which is also not what I wanted.
Thanks to lutalex for this solution:
http://forum.spring.io/forum/other-spring-related/remoting/30397-jmsmessageid-after-message-is-sent?p=745127#post745127
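For reference, capturing the JMSMessageID after a send with JmsTemplate boils down to something like the sketch below (names are placeholders; this is one way to do it, not necessarily the exact code from the linked thread):
    // Sketch: the JMS provider populates JMSMessageID on the Message object during send,
    // so holding a reference to it lets you read the id afterwards for correlation.
    public String sendAndCaptureMessageId(JmsTemplate jmsTemplate, String body) throws JMSException {
        AtomicReference<javax.jms.Message> sent = new AtomicReference<>();
        jmsTemplate.send("System.B.in", session -> {
            TextMessage message = session.createTextMessage(body);
            sent.set(message);
            return message;
        });
        return sent.get().getJMSMessageID();
    }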

Receive method in JMS waiting for messages

I want a method that browses all the messages from a message queue and can send them to another queue using JmsTemplate with WebSphere queues (NOT MQ). I have tried using receive, and it is able to retrieve all the messages from the queue, but it then keeps waiting for another message, and the messages are being lost. It must be in a transaction.
The Code I have Tried:
String message = (String) jmsTemplate.receiveAndConvert();
System.out.print(message);
while ((message = (String) jmsTemplate.receiveAndConvert()) != null) {
    messages.add(message);
}
return messages;
The JmsTemplate should be used only for synchronous reads or for sending messages. For asynchronous reads, use one of the listener implementations. Read here.
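A minimal sketch of a listener-based consumer (queue names and the forwarding logic are assumptions):
    // Sketch: messages are pushed to this method as they arrive, so there is no
    // blocking receive loop; use a transacted listener container if the
    // read-and-forward must be atomic.
    @Component
    public class SourceQueueListener {

        private final JmsTemplate jmsTemplate;

        public SourceQueueListener(JmsTemplate jmsTemplate) {
            this.jmsTemplate = jmsTemplate;
        }

        @JmsListener(destination = "SOURCE.QUEUE")
        public void onMessage(String message) {
            // Forward each received message to the target queue
            jmsTemplate.convertAndSend("TARGET.QUEUE", message);
        }
    }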

Request-response pattern using Spring amqp library

I have an HTTP API for posting messages to a RabbitMQ broker, and I need to implement the request-response pattern in order to receive the responses from the server. So I am something like a bridge between the clients and the server. I push the messages to the broker with a specific routing key, and there is a consumer for those messages which publishes messages back as responses; my API must consume the response for every request. So the diagram is something like this:
So what I do is the following: for every HTTP session I create a temporary responseQueue (which is bound to the default exchange, with the queue name as its routing key); after that I set the replyTo header of the message to the name of the response queue (where I will wait for the response) and also set the template's replyQueue to that queue. Here is my code:
public void sendMessage(AbstractEvent objectToSend, final String routingKey) {
    final Queue responseQueue = rabbitAdmin.declareQueue();
    byte[] messageAsBytes = null;
    try {
        messageAsBytes = new ObjectMapper().writeValueAsBytes(objectToSend);
    } catch (JsonProcessingException e) {
        e.printStackTrace();
    }
    MessageProperties properties = new MessageProperties();
    properties.setHeader("ContentType", MessageBodyFormat.JSON);
    properties.setReplyTo(responseQueue.getName());
    requestTemplate.setReplyQueue(responseQueue);
    Message message = new Message(messageAsBytes, properties);
    Message receivedMessage = (Message) requestTemplate.convertSendAndReceive(routingKey, message);
}
So what is the problem: the message is sent, it is then consumed by the consumer, and its response is correctly sent to the right queue, but for some reason it is not picked up in the convertSendAndReceive method, and after the set timeout my receivedMessage is null. So I tried several things. I started to inspect the Spring code (by the way, it's a real nightmare to do that) and saw that if I don't declare the response queue, it creates a temporary one for me and the replyTo header is set to the name of that queue (the same as what I do). The result was the same: the receivedMessage was still null. After that I decided to use another template which uses the default exchange, because the responseQueue is bound to that exchange:
requestTemplate.send(routingKey, message);
Message receivedMessage = receivingTemplate.receive(responseQueue.getName());
The result was the same: the receivedMessage is still null.
The versions of spring-amqp and spring-rabbit are 1.2.1 and 1.2.0 respectively. I am sure that I am missing something, but I don't know what it is, so if someone can help me I would be extremely grateful.
1> It's strange that RabbitTemplate uses doSendAndReceiveWithFixed if you provide the requestTemplate.setReplyQueue(responseQueue). Looks like it is false in your explanation.
2> To make it work with a fixed reply queue, you should configure a reply listener container:
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
container.setConnectionFactory(rabbitConnectionFactory);
container.setQueues(responseQueue);
container.setMessageListener(requestTemplate);
3> But the most important part here is correlation. RabbitTemplate.sendAndReceive populates the correlationId message property, but the consumer side has to deal with it too: it's not enough just to send the reply to the responseQueue; the reply message should have the same correlationId property. See here: how to send response from consumer to producer to the particular request using Spring AMQP?
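A rough sketch of what the consumer side has to do (names are placeholders; note that in the 1.x line of Spring AMQP the correlationId property is a byte[], whereas in current versions it is a String):
    // Sketch: copy the request's correlationId onto the reply and send the reply
    // to the replyTo queue via the default exchange.
    public void handleRequest(Message request, RabbitTemplate template) {
        MessageProperties replyProperties = new MessageProperties();
        replyProperties.setCorrelationId(request.getMessageProperties().getCorrelationId());
        byte[] replyBody = buildReplyBody(request); // buildReplyBody is a hypothetical helper
        template.send("", request.getMessageProperties().getReplyTo(), new Message(replyBody, replyProperties));
    }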
BTW, there is no reason to populate the Message manually: you can simply supply a Jackson2JsonMessageConverter to the RabbitTemplate, and it will convert your objectToSend to JSON bytes automatically, with the appropriate headers.
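For instance, a minimal sketch of that converter setup (template wiring assumed):
    // Sketch: with the JSON converter set on the template, the object can be sent
    // directly and convertSendAndReceive takes care of serialization and headers.
    requestTemplate.setMessageConverter(new Jackson2JsonMessageConverter());
    Object reply = requestTemplate.convertSendAndReceive(routingKey, objectToSend);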
