Is there a way to send messages to a topic only when received from an external system? - spring-boot

I have a listener which listens for UDP packets; after receiving and processing that data, I want to stream it to a topic (currently Kafka).
I have managed to run a sample program of Spring Cloud Stream Kafka Binder producer.
@Bean
public Supplier<PacketDataPojo> data() {
    return () -> {
        PacketDataPojo pdp = new PacketDataPojo(UUID.randomUUID().toString());
        log.info("Current data {}", pdp);
        return pdp;
    };
}
application.properties
spring.cloud.function.definition=data
spring.cloud.stream.bindings.data-out-0.destination=data-stream
Now, as this generates data at a scheduled interval, how can I make the Supplier stream data only after packet processing is completed?
Thanks

I believe StreamBridge will do the trick for you: https://docs.spring.io/spring-cloud-stream/docs/3.1.5/reference/html/spring-cloud-stream.html#_sending_arbitrary_data_to_an_output_e_g_foreign_event_driven_sources
So you may not need a Supplier for your case.
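For instance, a minimal sketch (the PacketForwarder class and forward() method are hypothetical names; StreamBridge and its send() method come from Spring Cloud Stream):
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.stereotype.Component;

// Hypothetical component illustrating the idea: call it from your UDP
// listener once packet processing is complete, instead of using a Supplier.
@Component
public class PacketForwarder {

    private final StreamBridge streamBridge;

    public PacketForwarder(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    public void forward(PacketDataPojo pdp) {
        // "data-stream" is the destination from application.properties;
        // StreamBridge resolves the output binding on demand.
        streamBridge.send("data-stream", pdp);
    }
}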

Related

Spring AMQP AsyncRabbitTemplate Doesn't Send Message In Delay Time

I'm trying to send delayed messages on RabbitMQ with Spring AMQP.
I'm defining MessageProperties like this:
MessageProperties delayedMessageProperties = new MessageProperties();
delayedMessageProperties.setDelay(45000);
I'm defining the message which should be sent with a delay like this:
org.springframework.amqp.core.Message amqpDelayedMessage =
        org.springframework.amqp.core.MessageBuilder
                .withBody(objectMapper.writeValueAsString(reversalMessage).getBytes())
                .andProperties(delayedMessageProperties)
                .build();
And then, if I send this message with RabbitTemplate, there is no problem: the message is delivered after the defined delay.
rabbitTemplate.convertSendAndReceiveAsType("delay-exchange", delayQueue, amqpDelayedMessage,
        new ParameterizedTypeReference<org.springframework.amqp.core.Message>() {
        });
But I need to send this message asynchronously, so that other messages in the system are not blocked and I get more performance. However, if I use AsyncRabbitTemplate, the message is delivered immediately; there is no delay.
asyncRabbitTemplate.convertSendAndReceiveAsType("delay-exchange", delayQueue, amqpDelayedMessage,
        new ParameterizedTypeReference<org.springframework.amqp.core.Message>() {
        });
How can I obtain the delay with AsyncRabbitTemplate?
This is probably a bug; please open an issue on GitHub.
The convertSendAndReceive() methods are not intended to send and receive raw Message objects.
In the case of the RabbitTemplate the conversion is skipped if the object is already a Message; there are some cases where this skip is not performed with the async template; please edit the question to show your template configuration.
However, since you are dealing with Message directly, don't use the convert... methods at all; simply use:
public RabbitMessageFuture sendAndReceive(String exchange, String routingKey, Message message) {
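For example, a minimal sketch reusing the amqpDelayedMessage built in the question (in Spring AMQP 3.x, RabbitMessageFuture is a CompletableFuture; in 2.x it is a ListenableFuture, where you would use addCallback instead):
// Send the raw Message so no conversion happens and the delay property
// set via MessageProperties.setDelay(45000) is preserved.
RabbitMessageFuture future =
        asyncRabbitTemplate.sendAndReceive("delay-exchange", delayQueue, amqpDelayedMessage);

// CompletableFuture-style callback (Spring AMQP 3.x).
future.whenComplete((reply, ex) -> {
    if (ex != null) {
        log.error("Request failed", ex);
    } else {
        log.info("Reply received: {}", reply);
    }
});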

Spring Integration - Concurrent access to SFTP outbound gateway GET w/ STREAM and accessing the response from Queue Channel

Context
Per the Spring docs https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#using-the-get-command, the GET command on the SFTP outbound gateway with the STREAM option returns an input stream corresponding to the file passed in on the input channel.
We could configure an integration flow similar to the recommendation at
https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#configuring-with-the-java-dsl-3
@Bean
public QueueChannelSpec remoteFileOutputChannel() {
    return MessageChannels.queue();
}

@Bean
public IntegrationFlow sftpGetFlow() {
    return IntegrationFlows.from("sftpGetInputChannel")
            .handle(Sftp.outboundGateway(sftpSessionFactory(),
                            AbstractRemoteFileOutboundGateway.Command.GET, "payload")
                    .options(AbstractRemoteFileOutboundGateway.Option.STREAM))
            .channel("remoteFileOutputChannel")
            .get();
}
I plan to obtain the input stream from the caller, similar to the response provided in the edits to the question here: No Messages When Obtaining Input Stream from SFTP Outbound Gateway
public InputStream openFileStream(final int retryCount, final String filename, final String directory)
        throws Exception {
    InputStream is = null;
    for (int i = 1; i <= retryCount; ++i) {
        if (sftpGetInputChannel.send(MessageBuilder.withPayload(directory + "/" + filename).build(),
                ftpTimeout)) {
            is = getInputStream();
            if (is != null) {
                break;
            } else {
                logger.info("Failed to obtain input stream so attempting retry " + i + " of " + retryCount);
                Thread.sleep(ftpTimeout);
            }
        }
    }
    return is;
}

private InputStream getInputStream() {
    Message<?> msgs = stream.receive(ftpTimeout);
    if (msgs == null) {
        return null;
    }
    return (InputStream) msgs.getPayload();
}
I would like to pass the input stream to the item reader that is part of a Spring Batch job. The job would read from the input stream and close the stream/session upon completion.
Question
The response from the SFTP outbound gateway is sent to a queue channel. If there are concurrent GET requests to the gateway from multiple jobs/clients, how does the consumer pick the appropriate input stream from the blocking queue in the queue channel? The solution I could think of:
Mark getInputStream as synchronized. This would ensure that only one consumer can send commands to the outbound gateway. Since all we are doing is returning a reference to the input stream, it is not a huge performance bottleneck. We could also set the capacity of the queue channel as an additional measure.
This is not an ideal solution because it is very much possible for other devs to bypass the synchronized method here and interact with the outbound gateway. We run the risk of fetching an incorrect stream.
The underlying SFTP client implementation used by Spring doesn't impose any such restrictions, so I am seeking a Spring Integration solution that can overcome this problem.
Does the GET with STREAM return any headers with the input file name from the payload that the client can use to make sure the stream corresponds to the requested file? This would require peeking into the queue and inspecting a message before popping it off. Not ideal, I think.
Is there a way to pass the response queue channel name as a parameter from the caller?
Appreciate any insights.
Yes, simply set the replyChannel header with a new QueueChannel for each request and terminate the flow with the gateway; if there is no output channel, the outbound gateway sends the reply to the header channel.
That is similar to how inbound gateways work.
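A minimal sketch of that approach, reusing the names from the question (and assuming sftpGetFlow no longer ends with .channel("remoteFileOutputChannel"), so the gateway falls back to the replyChannel header):
public InputStream openFileStream(final String filename, final String directory) {
    // A dedicated reply channel per request; concurrent callers can no
    // longer steal each other's replies from a shared queue.
    QueueChannel replyChannel = new QueueChannel();
    Message<String> request = MessageBuilder
            .withPayload(directory + "/" + filename)
            .setReplyChannel(replyChannel)
            .build();
    if (sftpGetInputChannel.send(request, ftpTimeout)) {
        Message<?> reply = replyChannel.receive(ftpTimeout);
        return reply == null ? null : (InputStream) reply.getPayload();
    }
    return null;
}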

Listener for NATS JetStream

Can someone help with how to configure a NATS JetStream subscription in Spring Boot asynchronously? For example, I am looking for an equivalent of an annotation like @KafkaListener for NATS JetStream.
I am able to pull messages using an endpoint; however, when I tried to receive messages using a push subscription, the dispatcher handler is not invoked. I need to know how to make the listener active and consume messages immediately once they are published to the subject.
Any insights/examples regarding this will be helpful, thanks in advance.
I don't know your JetStream retention policy, nor the way you want to subscribe, but I have sample code for a WorkQueuePolicy push subscription; I hope this helps you.
public static void subscribe(String streamName, String subjectKey,
        String queueName, IMessageHandler iMessageHandler) throws IOException,
        InterruptedException, JetStreamApiException {
    long s = System.currentTimeMillis();
    // 'options' is a pre-built io.nats.client.Options instance
    Connection nc = Nats.connect(options);
    long e = System.currentTimeMillis();
    logger.info("Nats Connect in " + (e - s) + " ms");

    JetStream js = nc.jetStream();
    Dispatcher disp = nc.createDispatcher();
    MessageHandler handler = (msg) -> {
        try {
            iMessageHandler.onMessageReceived(msg);
        } catch (Exception exc) {
            msg.nak();
        }
    };
    ConsumerConfiguration cc = ConsumerConfiguration.builder()
            .durable(queueName)
            .deliverGroup(queueName)
            .maxDeliver(3)
            .ackWait(Duration.ofMinutes(2))
            .build();
    PushSubscribeOptions so = PushSubscribeOptions.builder()
            .stream(streamName)
            .configuration(cc)
            .build();
    js.subscribe(subjectKey, disp, handler, false, so);
    System.out.println("NatsUtil: " + queueName + " subscribed");
}
IMessageHandler is my custom interface for handling received nats.io messages.
First, configure the NATS connection. Here you will specify all your connection details like server address(es), authentication options, connection-level callbacks etc.
Connection natsConnection = Nats.connect(
        new Options.Builder()
                .server("nats://localhost:4222")
                .connectionListener((connection, eventType) -> {})
                .errorListener(new ErrorListener() {})
                .build());
Then construct a JetStream instance:
JetStream jetStream = natsConnection.jetStream();
Now you can subscribe to subjects. Note that JetStream consumers can be durable or ephemeral, can work according to push or pull logic. Please refer to NATS documentation (https://docs.nats.io/nats-concepts/jetstream/consumers) to make the appropriate choice for your specific use case. The following example constructs a durable push consumer:
//Subscribe to a subject.
String subject = "my-subject";

//Queues are analogous to Kafka consumer groups, i.e. consumers belonging
//to the same queue (or, better to say, reading the same queue) will get
//only one instance of each message from the corresponding subject,
//and only one of those consumers will be chosen to process the message.
String queueName = "my-queue";

//Choosing a delivery policy is analogous to setting the current offset
//in a partition for a consumer or consumer group in Kafka.
DeliverPolicy deliverPolicy = DeliverPolicy.New;

PushSubscribeOptions subscribeOptions = ConsumerConfiguration.builder()
        .durable(queueName)
        .deliverGroup(queueName)
        .deliverPolicy(deliverPolicy)
        .buildPushSubscribeOptions();

Subscription subscription = jetStream.subscribe(
        subject,
        queueName,
        natsConnection.createDispatcher(),
        natsMessage -> {
            //This callback will be called for incoming messages
            //asynchronously. Every subscription configured this
            //way will be backed by its own thread, which will be
            //used to call this callback.
        },
        true, //true if you want received messages to be acknowledged
              //automatically; otherwise you will have to call
              //natsMessage.ack() manually in the above callback function
        subscribeOptions);
As for a declarative API (i.e. some form of @NatsListener annotation analogous to @KafkaListener from the Spring for Apache Kafka project), there is none available out of the box in Spring. If you feel you absolutely need it, you can write one yourself if you are familiar with Spring BeanPostProcessors or other extension mechanisms that can help do that; see the sketch after the links below. Alternatively, you can look at third-party libraries; it seems a bunch of people (including myself) felt a bit uncomfortable when switching from Kafka to NATS, so they tried to bring the usual way of doing things with them from the Kafka world. Some examples can be found on GitHub:
https://github.com/linux-china/nats-spring-boot-starter
https://github.com/dstrelec/nats
https://github.com/amalnev/declarative-nats-listeners
There may be others.
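For illustration only, here is a minimal sketch of the BeanPostProcessor approach. The @NatsListener annotation and NatsListenerPostProcessor class below are hypothetical, not part of any library, and for brevity they subscribe via core NATS rather than JetStream:
import java.lang.annotation.*;
import java.lang.reflect.Method;

import io.nats.client.Connection;
import io.nats.client.Dispatcher;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.stereotype.Component;

// Hypothetical marker annotation for subscriber methods.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface NatsListener {
    String subject();
}

@Component
class NatsListenerPostProcessor implements BeanPostProcessor {

    private final Connection natsConnection;

    NatsListenerPostProcessor(Connection natsConnection) {
        this.natsConnection = natsConnection;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        // Scan every bean for methods annotated with @NatsListener and
        // subscribe each one as an asynchronous message handler.
        for (Method method : bean.getClass().getDeclaredMethods()) {
            NatsListener listener = method.getAnnotation(NatsListener.class);
            if (listener != null) {
                Dispatcher dispatcher = natsConnection.createDispatcher(msg -> {
                    try {
                        method.invoke(bean, msg);
                    } catch (Exception e) {
                        throw new IllegalStateException(e);
                    }
                });
                dispatcher.subscribe(listener.subject());
            }
        }
        return bean;
    }
}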

Listen to another message only when I am done with my current message in Kafka

I am building a Spring Boot application using Spring Kafka where I am getting messages from a topic. I have to modify those messages and then produce them to another topic. I don't want to consume any other message until I have processed the current one. How can I achieve this?
@KafkaListener(
        topics = "${event.topic.name}",
        groupId = "${event.topic.group.id}",
        containerFactory = "eventKafkaListenerContainerFactory"
)
public void consume(Event event) {
    logger.info(String.format("Event created(from consumer)-> %s", event));
}
"event" is a json object which I am receiving as a message.
See https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#consumerconfigs_max.poll.records:
max.poll.records
The maximum number of records returned in a single call to poll().
Type: int
Default: 500
With Spring Boot you can configure it as this property:
spring.kafka.consumer.maxPollRecords
So, if you set it to 1, no more records will be polled from this consumer until you return from your @KafkaListener method.
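For example, in application.properties (this is the canonical kebab-case form of the same property; Spring Boot's relaxed binding also accepts the camelCase spelling above):
spring.kafka.consumer.max-poll-records=1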

How to aggregate messages from a QueueChannel using Spring Integration DSL?

I define a queue channel:
#Bean("mail-action-laundry-list-channel")
public MessageChannel mailRecipientActionMessageChannel() {
return new QueueChannel(20);
}
In the flow below I want to aggregate messages from the queue channel. I tried this:
@Bean
public IntegrationFlow mailRecipientActionLaundryListMessageFlow(
        @Qualifier("laundryListMessageHandler") MessageHandler laundryListMessageHandler) {
    return IntegrationFlows.from("mail-action-laundry-list-channel")
            .log("--> laundry list messages::")
            .aggregate(aggregatorSpec -> aggregatorSpec
                    .correlationExpression("#this.payload.email")
                    .releaseExpression("#this.size() == 5")
                    .messageStore(new SimpleMessageStore(100))
                    .groupTimeout(2000))
            .transform(laundryListMessageToItemProcessDtoTransformer())
            .handle(laundryListMessageHandler)
            .get();
}
But why does it only aggregate the first 5 messages from the channel and then stop aggregating subsequent ones?
You need to configure expireGroupsUponCompletion(true) on the aggregator:
When set to true (default false), completed groups are removed from the message store, allowing subsequent messages with the same correlation to form a new group. The default behavior is to send messages with the same correlation as a completed group to the discard-channel.
It looks like your subsequent messages from the queue have the same email property, so the aggregator can't form a new group for the same correlation key.
https://docs.spring.io/spring-integration/docs/5.0.3.RELEASE/reference/html/messaging-routing-chapter.html#aggregator-config
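Applied to the flow from the question, the fix is the single expireGroupsUponCompletion(true) line:
@Bean
public IntegrationFlow mailRecipientActionLaundryListMessageFlow(
        @Qualifier("laundryListMessageHandler") MessageHandler laundryListMessageHandler) {
    return IntegrationFlows.from("mail-action-laundry-list-channel")
            .log("--> laundry list messages::")
            .aggregate(aggregatorSpec -> aggregatorSpec
                    .correlationExpression("#this.payload.email")
                    .releaseExpression("#this.size() == 5")
                    // new: completed groups are removed from the store,
                    // so the same email can start a new group
                    .expireGroupsUponCompletion(true)
                    .messageStore(new SimpleMessageStore(100))
                    .groupTimeout(2000))
            .transform(laundryListMessageToItemProcessDtoTransformer())
            .handle(laundryListMessageHandler)
            .get();
}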
