Single RabbitMQ queue and multiple routing keys - Spring

We've got an application that will be using RabbitMQ. The design is to use a single exchange and a single queue with multiple routing keys for multiple teams, and they will communicate through this single queue.
I'm developing a Java application to listen to that queue using only the routing key assigned to my team.
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "queue", durable = "true"),
        exchange = @Exchange(value = "exchange", autoDelete = "false", type = "topic"),
        key = "abc_rk"))
public void consumeMessagesFromRabbitMQ(Request request) throws InterruptedException {
    System.out.println("Start:Request from RabbitMQ: " + request);
    Thread.sleep(10000L);
    System.out.println("End:Request from RabbitMQ: " + request);
}
Let's say the queue holds messages with 3 different routing keys, and my application only wants to listen to abc_rk. But when I run this code, it doesn't filter out the other messages; irrespective of what I've set in "key = ?", it pulls all the messages from the queue.
Note that I can't change the design and use a separate queue for each routing key.

RabbitMQ doesn't work that way (it has no concept of a message selector, unlike JMS).
In fact, consumers know nothing about routing keys, only producers do; the only reason you see one on @RabbitListener is to aid configuration.
To do what you want, you need to bind 3 different queues to the exchange with the respective routing keys.
"Note that I can't change the design and use a separate queue for each routing key."
You could add a MessagePostProcessor to the container (afterReceivePostProcessors) to discard the unwanted messages by returning null. That is the only mechanism the framework provides for filtering messages; see the sketch after the javadoc below.
/**
 * Set {@link MessagePostProcessor}s that will be applied after message reception, before
 * invoking the {@link MessageListener}. Often used to decompress data. Processors are invoked in order,
 * depending on {@code PriorityOrder}, {@code Order} and finally unordered.
 * @param afterReceivePostProcessors the post processor.
 * @since 1.4.2
 * @see #addAfterReceivePostProcessors(MessagePostProcessor...)
 */
public void setAfterReceivePostProcessors(MessagePostProcessor... afterReceivePostProcessors) {
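A minimal sketch of such a filter, assuming a recent Spring AMQP version where a post processor returning null discards the message (the factory bean below is illustrative):
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // Returning null discards anything that was not published with our routing key,
    // before the @RabbitListener method is invoked.
    factory.setAfterReceivePostProcessors(message ->
            "abc_rk".equals(message.getMessageProperties().getReceivedRoutingKey())
                    ? message : null);
    return factory;
}
Note that a message discarded this way has still been consumed (and acknowledged) off the shared queue, which is another reason separate queues are the better design.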
But the best solution is 3 queues.
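For reference, that recommended shape looks like this, with one queue per routing key (the queue name here is illustrative):
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "abc_queue", durable = "true"),
        exchange = @Exchange(value = "exchange", autoDelete = "false", type = "topic"),
        key = "abc_rk"))
public void consumeAbc(Request request) {
    // Only abc_rk messages are routed here; no application-side filtering is needed.
}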

Related

Listener for NATS JetStream

Can someone help with how to configure a NATS JetStream subscription in Spring Boot asynchronously? For example, I'm looking for an annotation equivalent to @KafkaListener, but for NATS JetStream.
I am able to pull messages using an endpoint, but when I tried to receive messages using a push subscription, the dispatcher handler is not invoked. I need to know how to make the listener active and consume messages immediately once they are published to the subject.
Any insights/examples regarding this would be helpful; thanks in advance.
I don't know your JetStream retention policy or how you want to subscribe, but I have sample code for a WorkQueuePolicy push subscription; I hope this helps.
public static void subscribe(String streamName, String subjectKey,
        String queueName, IMessageHandler iMessageHandler) throws IOException,
        InterruptedException, JetStreamApiException {
    long s = System.currentTimeMillis();
    Connection nc = Nats.connect(options); // 'options' is a connection Options instance built elsewhere
    long e = System.currentTimeMillis();
    logger.info("Nats Connect in " + (e - s) + " ms");
    JetStream js = nc.jetStream();
    Dispatcher disp = nc.createDispatcher();
    MessageHandler handler = (msg) -> {
        try {
            iMessageHandler.onMessageReceived(msg);
        } catch (Exception exc) {
            msg.nak(); // negative-ack so the message is redelivered
        }
    };
    ConsumerConfiguration cc = ConsumerConfiguration.builder()
            .durable(queueName)
            .deliverGroup(queueName)
            .maxDeliver(3)
            .ackWait(Duration.ofMinutes(2))
            .build();
    PushSubscribeOptions so = PushSubscribeOptions.builder()
            .stream(streamName)
            .configuration(cc)
            .build();
    js.subscribe(subjectKey, disp, handler, false, so);
    System.out.println("NatsUtil: " + queueName + " subscribed");
}
IMessageHandler is my custom interface to handle nats.io received messages.
First, configure the NATS connection. Here you will specify all your connection details like server address(es), authentication options, connection-level callbacks etc.
Connection natsConnection = Nats.connect(
        new Options.Builder()
                .server("nats://localhost:4222")
                .connectionListener((connection, eventType) -> {})
                .errorListener(new ErrorListener() {})
                .build());
Then construct a JetStream instance:
JetStream jetStream = natsConnection.jetStream();
Now you can subscribe to subjects. Note that JetStream consumers can be durable or ephemeral, can work according to push or pull logic. Please refer to NATS documentation (https://docs.nats.io/nats-concepts/jetstream/consumers) to make the appropriate choice for your specific use case. The following example constructs a durable push consumer:
// Subscribe to a subject.
String subject = "my-subject";
// Queues are analogous to Kafka consumer groups, i.e. consumers belonging
// to the same queue (or, better to say, reading the same queue) will get
// only one instance of each message from the corresponding subject,
// and only one of those consumers will be chosen to process the message.
String queueName = "my-queue";
// Choosing a delivery policy is analogous to setting the current offset
// in a partition for a consumer or consumer group in Kafka.
DeliverPolicy deliverPolicy = DeliverPolicy.New;
PushSubscribeOptions subscribeOptions = ConsumerConfiguration.builder()
        .durable(queueName)
        .deliverGroup(queueName)
        .deliverPolicy(deliverPolicy)
        .buildPushSubscribeOptions();
Subscription subscription = jetStream.subscribe(
        subject,
        queueName,
        natsConnection.createDispatcher(),
        natsMessage -> {
            // This callback will be called for incoming messages
            // asynchronously. Every subscription configured this
            // way will be backed by its own thread, which will be
            // used to call this callback.
        },
        true, // true if you want received messages to be acknowledged
              // automatically; otherwise you will have to call
              // natsMessage.ack() manually in the above callback function
        subscribeOptions);
As for the declarative API (i.e. some form of @NatsListener annotation analogous to @KafkaListener from the Spring for Apache Kafka project), none is available out of the box in Spring. If you feel you absolutely need it, you can write one yourself if you are familiar with Spring's BeanPostProcessors or other extension mechanisms, as sketched below. Alternatively, you can turn to 3rd-party libs; it looks like a bunch of people (including myself) felt a bit uncomfortable when switching from Kafka to NATS, so they tried to bring the usual way of doing things with them from the Kafka world. Some examples can be found on GitHub:
https://github.com/linux-china/nats-spring-boot-starter
https://github.com/dstrelec/nats
https://github.com/amalnev/declarative-nats-listeners
There may be others.
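As a rough illustration of the BeanPostProcessor approach mentioned above, here is a hypothetical sketch. The @NatsListener annotation is invented for this example, it creates a plain core NATS subscription (not a JetStream one), and error handling and lifecycle management are omitted:
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface NatsListener {
    String subject();
}

@Component
public class NatsListenerBeanPostProcessor implements BeanPostProcessor {

    private final Connection natsConnection;

    public NatsListenerBeanPostProcessor(Connection natsConnection) {
        this.natsConnection = natsConnection;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        for (Method method : bean.getClass().getMethods()) {
            NatsListener annotation = method.getAnnotation(NatsListener.class);
            if (annotation != null) {
                // One dispatcher (and thread) per annotated method.
                Dispatcher dispatcher = natsConnection.createDispatcher(message -> {
                    try {
                        method.invoke(bean, message);
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                });
                dispatcher.subscribe(annotation.subject());
            }
        }
        return bean;
    }
}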

Listen to another message only when I am done with my current message in Kafka

I am building a Spring Boot application using Spring for Apache Kafka where I am getting messages from a topic. I have to modify those messages and then produce them to another topic. I don't want to consume any other message until I have processed my current one. How can I achieve this?
@KafkaListener(
        topics = "${event.topic.name}",
        groupId = "${event.topic.group.id}",
        containerFactory = "eventKafkaListenerContainerFactory"
)
public void consume(Event event) {
    logger.info(String.format("Event created(from consumer)-> %s", event));
}
"event" is a json object which I am receiving as a message.
See https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#consumerconfigs_max.poll.records:
max.poll.records
The maximum number of records returned in a single call to poll().
Type: int
Default: 500
With Spring Boot you can configure it as this property:
spring.kafka.consumer.maxPollRecords
So, set it to 1 and no more records will be polled from this consumer until you return from your @KafkaListener method.
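If you build the consumer factory yourself, a sketch of the equivalent Java configuration (the bean name, group id and bootstrap address are illustrative):
@Bean
public ConsumerFactory<String, Event> eventConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "event-group");
    // Return at most one record from each poll() call.
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);
    return new DefaultKafkaConsumerFactory<>(props,
            new StringDeserializer(), new JsonDeserializer<>(Event.class));
}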

Spring Integration (SFTP) message source isn't getting more than 1 file per poll despite setting to unlimited

I have the following code to read XML files from an SFTP server as InputStream:
@Configuration
public class SftpConfig {
    ...
    @Bean
    @InboundChannelAdapter(channel = "stream", poller = @Poller(fixedDelay = "60000"))
    public MessageSource<InputStream> messageSource() {
        SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(template());
        messageSource.setRemoteDirectory(sftpProperties.getBaseDir());
        messageSource.setFilter(new SftpSimplePatternFileListFilter("*.xml"));
        // messageSource.setMaxFetchSize(-1); no matter what I set this to, it only fetches one file
        return messageSource;
    }

    @ServiceActivator(inputChannel = "stream", adviceChain = "after")
    @Bean
    public MessageHandler handle() {
        return message -> {
            Assert.isTrue(message.getPayload() instanceof InputStream, "Payload must be of type $InputStream");
            String filename = (String) message.getHeaders().get(FileHeaders.REMOTE_FILE);
            InputStream is = (InputStream) message.getPayload();
            log.info("I am here"); // each poll only prints this once
        };
    }
    ...
}
When I debugged or checked the logs for MessageHandler#handleMessage, I consistently saw only one message (file object) come through, even though there is more than one .xml file sitting on the SFTP server, as I could verify by seeing another file come through on the next poll. The documentation says:
/**
 * Set the maximum number of objects the source should fetch if it is necessary to
 * fetch objects. Setting the
 * maxFetchSize to 0 disables remote fetching, a negative value indicates no limit.
 * @param maxFetchSize the max fetch size; a negative value means unlimited.
 */
void setMaxFetchSize(int maxFetchSize);
So I fiddled with different numbers, but to no avail. What am I missing here?
Sorry for the misleading name, but fetch doesn't mean poll. The fetch option just pulls up to that many remote entries into a local cache on the first poll; every subsequent poll takes entries from that cache until it is exhausted.
The option about max messages per poll belongs to that #Poller configuration. See a respective option:
/**
 * @return The maximum number of messages to receive for each poll.
 * Can be specified as 'property placeholder', e.g. {@code ${poller.maxMessagesPerPoll}}.
 * Defaults to -1 (infinity) for polling consumers and 1 for polling inbound channel adapters.
 */
String maxMessagesPerPoll() default "";
Pay attention to that 1 for polling inbound channel adapters: that's why you see only one message coming through per poll.
Nevertheless, the logic is to push only one message to the channel at a time; there is no batching of however many files you have at the moment. Independently of the fetch size, only one message per poll is sent to the channel, although with an infinite maxMessagesPerPoll all the messages are sent on the same thread and during the same poll cycle.

JmsTemplate's browseSelected not retrieving all messages

I have some Java code that reads messages from an ActiveMQ queue. The code uses a JmsTemplate from Spring and I use the "browseSelected" method to retrieve any messages from the queue that have a timestamp in their header older than 7 days (by creating the appropriate criteria as part of the messageSelector parameter).
myJmsTemplate.browseSelected(myQueue, myCriteria, new BrowserCallback<Integer>() {
    @Override
    public Integer doInJms(Session s, QueueBrowser qb) throws JMSException {
        @SuppressWarnings("unchecked")
        final Enumeration<Message> e = qb.getEnumeration();
        int count = 0;
        while (e.hasMoreElements()) {
            final Message m = e.nextElement();
            final TextMessage tm = (TextMessage) MyClass.this.jmsQueueTemplate.receiveSelected(
                    MyClass.this.myQueue, "JMSMessageID = '" + m.getJMSMessageID() + "'");
            myMessages.add(tm);
            count++;
        }
        return count;
    }
});
The BrowserCallback's "doInJms" method adds the messages which match the criteria to a list ("myMessages") which subsequently get processed further.
The issue is that I'm finding the code will only process 400 messages each time it runs, even though there are several thousand messages which match the criteria specified.
When I previously used another queueing technology with this code (IBM MQ), it would process all records which met the criteria.
I'm wondering whether I'm experiencing an issue with ActiveMQ's prefetch limit: http://activemq.apache.org/what-is-the-prefetch-limit-for.html
Versions: ActiveMQ 5.10.1 and Spring 3.2.2.
Thanks in advance for any assistance.
The broker will only return up to 400 messages by default, as configured by the maxBrowsePageSize option in the destination policies. You can increase that value, but use caution: the messages are paged into memory and as such can lead you into an OOM situation.
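That option normally lives in the destination policies of the broker's activemq.xml; as a sketch, the equivalent for an embedded broker configured in Java (the limit of 5000 is illustrative):
BrokerService broker = new BrokerService();
PolicyEntry policy = new PolicyEntry();
policy.setQueue(">"); // apply to all queues
policy.setMaxBrowsePageSize(5000); // default is 400
PolicyMap policyMap = new PolicyMap();
policyMap.setDefaultEntry(policy);
broker.setDestinationPolicy(policyMap);
broker.start();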
You must always remember that a message broker is not a database, using it as one will generally end in tears.

Request-response pattern using Spring amqp library

Hello everyone. I have an HTTP API for posting messages to a RabbitMQ broker and I need to implement the request-response pattern in order to receive the responses from the server. So I am something like a bridge between the clients and the server: I push the messages to the broker with a specific routing key, and there is a consumer for those messages which publishes messages back as responses, and my API must consume the response for every request.
So what I do is the following: for every HTTP session I create a temporary responseQueue (which is bound to the default exchange, with the name of that queue as routing key); after that I set the replyTo header of the message to the name of the response queue (where I will wait for the response) and also set the template's replyQueue to that queue. Here is my code:
public void sendMessage(AbstractEvent objectToSend, final String routingKey) {
    final Queue responseQueue = rabbitAdmin.declareQueue();
    byte[] messageAsBytes = null;
    try {
        messageAsBytes = new ObjectMapper().writeValueAsBytes(objectToSend);
    } catch (JsonProcessingException e) {
        e.printStackTrace();
    }
    MessageProperties properties = new MessageProperties();
    properties.setHeader("ContentType", MessageBodyFormat.JSON);
    properties.setReplyTo(responseQueue.getName());
    requestTemplate.setReplyQueue(responseQueue);
    Message message = new Message(messageAsBytes, properties);
    Message receivedMessage = (Message) requestTemplate.convertSendAndReceive(routingKey, message);
}
So what is the problem: the message is sent, after that it is consumed by the consumer, and its response is correctly sent to the right queue, but for some reason it is not picked up in the convertSendAndReceive method, and after the set timeout my receivedMessage is null. So I tried several things. I started to inspect the Spring code (by the way, it's a real nightmare to do that) and saw that if I don't declare the response queue, it creates a temporary one for me and sets the replyTo header to the name of that queue (the same thing I do). The result was the same: the receivedMessage was still null. After that I decided to use another template which uses the default exchange, because the responseQueue is bound to that exchange:
requestTemplate.send(routingKey, message);
Message receivedMessage = receivingTemplate.receive(responseQueue.getName());
The result was the same: the responseMessage was still null.
The versions of spring-amqp and spring-rabbit are respectively 1.2.1 and 1.2.0. So I am sure that I am missing something, but I don't know what it is; if someone can help me I would be extremely grateful.
1. It's strange, since RabbitTemplate uses doSendAndReceiveWithFixed if you provide requestTemplate.setReplyQueue(responseQueue); it looks like something is off in your explanation.
2. To make it work with a fixed reply queue, you should configure a reply listener container:
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
container.setConnectionFactory(rabbitConnectionFactory);
container.setQueues(responseQueue);
container.setMessageListener(requestTemplate);
3. But the most important part here is correlation. RabbitTemplate.sendAndReceive populates the correlationId message property, but the consumer side has to deal with it too: it's not enough just to send the reply to the responseQueue; the reply message must have the same correlationId property. See here: how to send response from consumer to producer to the particular request using Spring AMQP?
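A consumer-side sketch of that correlation echo, assuming a recent Spring AMQP where correlationId is a String (requestMessage and replyBody are illustrative names):
// Copy correlationId from the request into the reply, then publish the
// reply to the default exchange with the replyTo queue name as routing key.
MessageProperties replyProperties = new MessageProperties();
replyProperties.setCorrelationId(requestMessage.getMessageProperties().getCorrelationId());
Message reply = new Message(replyBody, replyProperties);
rabbitTemplate.send("", requestMessage.getMessageProperties().getReplyTo(), reply);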
BTW, there is no reason to populate the Message manually: you can simply supply a Jackson2JsonMessageConverter to the RabbitTemplate and it will convert your objectToSend to JSON bytes automatically, with the appropriate headers.
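For example, a minimal sketch of that converter setup:
RabbitTemplate template = new RabbitTemplate(connectionFactory);
template.setMessageConverter(new Jackson2JsonMessageConverter());
// The converter serializes objectToSend to JSON and sets the content-type header;
// convertSendAndReceive still handles the reply-queue and correlation plumbing.
AbstractEvent response = (AbstractEvent) template.convertSendAndReceive(routingKey, objectToSend);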
