Listener for NATS JetStream - spring-boot

Can someone help with how to configure a NATS JetStream subscription in Spring Boot asynchronously? For example, I am looking for an equivalent annotation to @KafkaListener for NATS JetStream.
I am able to pull messages using an endpoint; however, when I tried to receive messages using a push subscription, the dispatcher handler is not invoked. I need to know how to make the listener active so that it consumes messages immediately once they are published to the subject.
Any insights/examples regarding this will be helpful, thanks in advance.

I don't know your JetStream retention policy or the way you want to subscribe, but I have sample code for a WorkQueuePolicy push subscription; I hope this helps you.
public static void subscribe(String streamName, String subjectKey,
        String queueName, IMessageHandler iMessageHandler) throws IOException,
        InterruptedException, JetStreamApiException {
    long s = System.currentTimeMillis();
    Connection nc = Nats.connect(options); // 'options' is a pre-built io.nats.client.Options field of this class
    long e = System.currentTimeMillis();
    logger.info("Nats Connect in " + (e - s) + " ms");
    JetStream js = nc.jetStream();
    Dispatcher disp = nc.createDispatcher();
    MessageHandler handler = (msg) -> {
        try {
            iMessageHandler.onMessageReceived(msg);
        } catch (Exception exc) {
            msg.nak(); // negatively acknowledge so the message is redelivered
        }
    };
    ConsumerConfiguration cc = ConsumerConfiguration.builder()
            .durable(queueName)
            .deliverGroup(queueName)
            .maxDeliver(3)
            .ackWait(Duration.ofMinutes(2))
            .build();
    PushSubscribeOptions so = PushSubscribeOptions.builder()
            .stream(streamName)
            .configuration(cc)
            .build();
    js.subscribe(subjectKey, disp, handler, false, so);
    logger.info("NatsUtil: " + queueName + " subscribed");
}
IMessageHandler is my custom interface for handling messages received from nats.io.
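For reference, a minimal sketch of what such a handler interface might look like; only the onMessageReceived method name is taken from the code above, everything else is an assumption:
import io.nats.client.Message;

// Hypothetical sketch of the custom callback interface used in the subscribe() method above.
public interface IMessageHandler {
    // Called once per delivered JetStream message; throwing triggers a nak() and redelivery.
    void onMessageReceived(Message msg) throws Exception;
}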

First, configure the NATS connection. Here you will specify all your connection details, such as server address(es), authentication options, connection-level callbacks, etc.
Connection natsConnection = Nats.connect(
        new Options.Builder()
                .server("nats://localhost:4222")
                .connectionListener((connection, eventType) -> {})
                .errorListener(new ErrorListener() {})
                .build());
Then construct a JetStream instance:
JetStream jetStream = natsConnection.jetStream();
Now you can subscribe to subjects. Note that JetStream consumers can be durable or ephemeral, can work according to push or pull logic. Please refer to NATS documentation (https://docs.nats.io/nats-concepts/jetstream/consumers) to make the appropriate choice for your specific use case. The following example constructs a durable push consumer:
// Subscribe to a subject.
String subject = "my-subject";
// Queues are analogous to Kafka consumer groups, i.e. consumers belonging
// to the same queue (or, better to say, reading the same queue) will get
// only one instance of each message from the corresponding subject,
// and only one of those consumers will be chosen to process the message.
String queueName = "my-queue";
// Choosing a delivery policy is analogous to setting the current offset
// in a partition for a consumer or consumer group in Kafka.
DeliverPolicy deliverPolicy = DeliverPolicy.New;
PushSubscribeOptions subscribeOptions = ConsumerConfiguration.builder()
        .durable(queueName)
        .deliverGroup(queueName)
        .deliverPolicy(deliverPolicy)
        .buildPushSubscribeOptions();
Subscription subscription = jetStream.subscribe(
        subject,
        queueName,
        natsConnection.createDispatcher(),
        natsMessage -> {
            // This callback will be called for incoming messages
            // asynchronously. Every subscription configured this
            // way will be backed by its own thread, which will be
            // used to call this callback.
        },
        true, // true if you want received messages to be acknowledged
              // automatically; otherwise you will have to call
              // natsMessage.ack() manually in the above callback function
        subscribeOptions);
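If you prefer manual acknowledgement instead, pass false as the autoAck flag above and acknowledge inside the callback yourself; a minimal sketch (the body of the try block is just a placeholder for your own processing logic):
MessageHandler manualAckHandler = natsMessage -> {
    try {
        // placeholder processing: decode and print the payload
        System.out.println(new String(natsMessage.getData(), java.nio.charset.StandardCharsets.UTF_8));
        natsMessage.ack();  // confirm successful processing
    } catch (Exception e) {
        natsMessage.nak();  // ask the server to redeliver the message
    }
};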
As for a declarative API (i.e. some form of @NatsListener annotation analogous to @KafkaListener from the Spring for Apache Kafka project), there is none available out of the box in Spring. If you feel you absolutely need it, you can write one yourself if you are familiar with Spring BeanPostProcessors or another extension mechanism that can help with that (see the sketch after the links below). Alternatively, you can look at third-party libraries; it seems a number of people (including myself) felt a bit uncomfortable when switching from Kafka to NATS and tried to bring the familiar way of doing things with them from the Kafka world. Some examples can be found on GitHub:
https://github.com/linux-china/nats-spring-boot-starter,
https://github.com/dstrelec/nats
https://github.com/amalnev/declarative-nats-listeners
There may be others.
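To illustrate the do-it-yourself route, here is a minimal, hedged sketch of how a hypothetical @NatsListener annotation could be wired up with a BeanPostProcessor. The annotation name, its attributes, and the way the subscription is built are all assumptions for jnats 2.x, not an existing API:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

import io.nats.client.Connection;
import io.nats.client.Dispatcher;
import io.nats.client.JetStream;
import io.nats.client.Message;
import io.nats.client.PushSubscribeOptions;
import io.nats.client.api.ConsumerConfiguration;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.core.annotation.AnnotationUtils;
import org.springframework.stereotype.Component;
import org.springframework.util.ReflectionUtils;

// Hypothetical marker annotation, analogous in spirit to @KafkaListener.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface NatsListener {
    String subject();
    String queue();
    String stream();
}

// Sketch: scans every bean for @NatsListener methods and creates a push
// subscription whose handler invokes the annotated method reflectively.
@Component
class NatsListenerBeanPostProcessor implements BeanPostProcessor {

    private final Connection natsConnection;

    NatsListenerBeanPostProcessor(Connection natsConnection) {
        this.natsConnection = natsConnection;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        ReflectionUtils.doWithMethods(bean.getClass(), method -> {
            NatsListener listener = AnnotationUtils.findAnnotation(method, NatsListener.class);
            if (listener != null) {
                ReflectionUtils.makeAccessible(method);
                subscribe(bean, method, listener);
            }
        });
        return bean;
    }

    private void subscribe(Object bean, Method method, NatsListener listener) {
        try {
            JetStream js = natsConnection.jetStream();
            Dispatcher dispatcher = natsConnection.createDispatcher();
            ConsumerConfiguration cc = ConsumerConfiguration.builder()
                    .durable(listener.queue())
                    .deliverGroup(listener.queue())
                    .build();
            PushSubscribeOptions so = PushSubscribeOptions.builder()
                    .stream(listener.stream())
                    .configuration(cc)
                    .build();
            js.subscribe(listener.subject(), listener.queue(), dispatcher,
                    (Message msg) -> ReflectionUtils.invokeMethod(method, bean, msg),
                    true, so);
        } catch (Exception e) {
            throw new IllegalStateException("Failed to register @NatsListener for " + method, e);
        }
    }
}
A bean method could then be annotated with something like @NatsListener(subject = "my-subject", queue = "my-queue", stream = "my-stream") and receive the raw Message; production-grade versions of this idea (error handling, manual acks, payload conversion) are what the libraries linked above provide.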

Related

Spring Boot - using AWS SQS in a synchronous way

I have a pub/sub scenario where I create something and save it to the DB in one service, publish it to an SNS topic, subscribe with an SQS listener, and handle the message and save it to the DB in another service. So far so good.
In one of the scenarios I create a user and subscribe it to a site. Then I send the new user to its topic, the user-site relation to another topic, and the subscribed service updates its own DB tables.
private void publishNewUserNotifications(UserEntity userEntity, List<SiteEntity> sitesToAssociateWithUser) {
    iPublisherService.publishNewUserNotification(userEntity);
    if (sitesToAssociateWithUser != null || !sitesToAssociateWithUser.isEmpty()) {
        List<String> sitesIds = sitesToAssociateWithUser.stream().map(SiteEntity::getSiteId).collect(Collectors.toList());
        iPublisherService.publishSitesToUserAssignment(userEntity.getId(), new ArrayList<>(), sitesIds);
    }
}
The problem is that sometimes there is a thread race: the user-site relation is handled before the user has been created in the second service, so I get an empty result from the DB when loading the User object and fail to handle the user-site relation.
@Override
@Transactional
public void handle(UsersSitesListNotification message) {
    UsersSitesNotification assigned = message.getAssigned();
    List<UserEntity> userEntities = iUserRepository.findAllByUserIdIn(CollectionUtils.union(assigned.getUserIds()));
    List<SiteEntity> siteEntities = iSiteRepository.findAllByIdIn(CollectionUtils.union(assigned.getSiteIds()));
    List<UserSiteAssignmentEntity> assignedEntities = fromUsersSitesNotificationToUserSiteAssignmentEntities(assigned, userEntities, siteEntities);
    Iterable<UserSiteAssignmentEntity> saved = iUserSiteAssignmentRepository.saveAll(assignedEntities);
}
Because of that, I am considering using SQS in a synchronous way. The problem is that in order to use SQS I need to import the "spring-cloud-aws-messaging" package, and the SQS configuration inside it uses the async client.
Is there a way to use SQS in a synchronous way? What should I change? How should I override the async configuration in the package, or should I use some other package?
Any idea will help, thanks.

How Do I Connect a STOMP Client to an ActiveMQ Artemis Destination Created Using JMS (Spring Boot)?

CONTEXT
I am trying to learn about Spring JMS and message-oriented middleware (MOM), and I am using ActiveMQ Artemis for this. I created a queue destination address using the jakarta.jms.* API and managed to send a message to the queue like this:
public void createUserDestination(String userId) throws JMSException {
    queueDestination = setupConnection().createQueue("user" + userId);
    producer = session.createProducer(queueDestination);
    producer.setDeliveryMode(DeliveryMode.PERSISTENT);
    producer.send(session.createTextMessage("Testing queue availability"));
    connection.close();
    log.info("successfully created group... going back to controller");
}
So, for example, if I pass an ID of user12345abc, I get a queue address user12345abc of routing type ANYCAST, with one queue underneath (with that same address) where my message is placed.
PROBLEM
Now, I wanted to write a simple web front-end with STOMP that can connect to this queue. But I have been having a ton of problems connecting to that queue address, because each time I try to connect by providing the destination address, it creates a new address in the MOM and connects to that instead.
My STOMP code looks like this (the first argument is the destination address; you can ignore the rest of the code):
stompClient.subscribe("jms.queue.user12345abc", (message) => {
receivedMessages.value.push(message.body);
});
In this case, a completely brand new queue is created with the address jms.queue.user12345abc, which is not what I want at all.
I configured my Spring backend to use an external MOM broker like this (I know this is important):
public void configureMessageBroker(MessageBrokerRegistry registry) {
    // these two endpoints are prefixes for where the messages are pushed to
    registry.enableStompBrokerRelay("jms.topic", "jms.queue")
            .setRelayHost("127.0.0.1")
            .setRelayPort(61613)
            .setSystemLogin(brokerUsername)
            .setSystemPasscode(brokerPassword)
            .setClientLogin(brokerUsername)
            .setClientPasscode(brokerPassword);
    // this prefixes the endpoints where clients send messages
    registry.setApplicationDestinationPrefixes("/app", "jms.topic", "jms.queue");
    // this prefixes the endpoints that users subscribe to
    registry.setUserDestinationPrefix("/user");
}
But it's still not working as I expect it to. Am I getting some concept wrong here? How do I use STOMP to connect to that queue I created earlier with JMS?
It's not clear why you are using the jms.queue and jms.topic prefixes. Those are similar but not quite the same as the jms.queue. and jms.topic. prefixes which were used way back in ActiveMQ Artemis 1.x (whose last release was in early 2018, almost 5 years ago now).
In any case, I recommend you use the more widely adopted /queue/ and /topic/, e.g.:
public void configureMessageBroker(MessageBrokerRegistry registry) {
    // these two endpoints are prefixes for where the messages are pushed to
    registry.enableStompBrokerRelay("/topic/", "/queue/")
            .setRelayHost("127.0.0.1")
            .setRelayPort(61613)
            .setSystemLogin(brokerUsername)
            .setSystemPasscode(brokerPassword)
            .setClientLogin(brokerUsername)
            .setClientPasscode(brokerPassword);
    // this prefixes the endpoints where clients send messages
    registry.setApplicationDestinationPrefixes("/app", "/topic/", "/queue/");
    // this prefixes the endpoints that users subscribe to
    registry.setUserDestinationPrefix("/user");
}
Then in broker.xml you'd need to add the corresponding anycastPrefix and multicastPrefix values on the STOMP acceptor, e.g.:
<acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;anycastPrefix=/queue/;multicastPrefix=/topic/</acceptor>
To be clear, your JMS code will stay the same, but your STOMP consumer would be something like:
stompClient.subscribe("/queue/user12345abc", (message) => {
receivedMessages.value.push(message.body);
});

Spring AMQP AsyncRabbitTemplate Doesn't Send Message In Delay Time

I'm trying to send delayed messages on RabbitMQ with Spring AMQP.
I'm defining MessageProperties like this:
MessageProperties delayedMessageProperties = new MessageProperties();
delayedMessageProperties.setDelay(45000);
I'm defining the message which should be sent with a delay like this:
org.springframework.amqp.core.Message amqpDelayedMessage = org.springframework.amqp.core.MessageBuilder.withBody(objectMapper.writeValueAsString(reversalMessage).getBytes())
        .andProperties(delayedMessageProperties).build();
And then, if I send this message with RabbitTemplate, there is no problem; the message is delivered after the defined delay.
rabbitTemplate.convertSendAndReceiveAsType("delay-exchange",delayQueue, amqpDelayedMessage, new ParameterizedTypeReference<org.springframework.amqp.core.Message>() {
});
But I need to send this message asynchronously, so that I do not block any other messages in the system and get better performance; however, if I use AsyncRabbitTemplate, the message is delivered immediately. There is no delay.
asyncRabbitTemplate.convertSendAndReceiveAsType("delay-exchange",delayQueue, amqpDelayedMessage, new ParameterizedTypeReference<org.springframework.amqp.core.Message>() {
});
How can I obtain the delay with AsyncRabbitTemplate?
This is probably a bug; please open an issue on GitHub.
The convertSendAndReceive() methods are not intended to send and receive raw Message objects.
In the case of the RabbitTemplate the conversion is skipped if the object is already a Message; there are some cases where this skip is not performed with the async template; please edit the question to show your template configuration.
However, since you are dealing with Message directly, don't use the convert... methods at all; simply use:
public RabbitMessageFuture sendAndReceive(String exchange, String routingKey, Message message) {
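For illustration, a minimal sketch of how that call could be used with the snippets from the question (the exchange and queue names come from the question; whether RabbitMessageFuture behaves as a ListenableFuture or a CompletableFuture depends on your Spring AMQP version, and this sketch assumes a CompletableFuture-based one):
// Send the prebuilt Message (with its delay property) and react to the reply asynchronously.
RabbitMessageFuture future = asyncRabbitTemplate.sendAndReceive("delay-exchange", delayQueue, amqpDelayedMessage);
future.whenComplete((reply, ex) -> {
    if (ex != null) {
        // handle the failure (timeout, connection problem, ...)
    } else {
        // 'reply' is the raw org.springframework.amqp.core.Message returned by the consumer
    }
});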

How to consume messages from a RabbitMQ dead letter queue one by one

The requirement is to process the messages from a dead letter queue via an exposed REST service API (Spring Boot).
Once the REST service is called, one message should be consumed from the DL queue and published to the main queue again for processing.
@RabbitListener(queues = "QUEUE_NAME") consumes the message immediately, which is not what the scenario requires. The message should only be consumed when the REST service API is called.
Any suggestion or solution?
I do not think @RabbitListener will help here.
However, you could implement this behaviour manually.
Spring Boot automatically creates a RabbitMQ connection factory, so you could use it. When the HTTP call is made, just read a single message from the queue manually; you can use basic.get to synchronously fetch just one message:
@Autowired
private ConnectionFactory factory;

void readSingleMessage() throws Exception {
    Connection connection = null;
    Channel channel = null;
    try {
        connection = factory.newConnection();
        channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, true, false, false, null);
        GetResponse response = channel.basicGet(QUEUE_NAME, true);
        if (response != null) {
            // Do something with the message
        }
    } finally {
        // Check for null before closing
        channel.close();
        connection.close();
    }
}
If you are using Spring, you can avoid all the boilerplate in the other answer by using RabbitTemplate.receive(...).
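For example, a minimal sketch (the queue names are placeholders; receive returns null when the queue is empty, and this variant acknowledges the message automatically):
// Pull a single message synchronously; returns null if the queue is empty.
Message message = rabbitTemplate.receive("QUEUE_NAME");
if (message != null) {
    // e.g. republish it to the main queue: rabbitTemplate.send("main-queue", message);
}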
EDIT
To manually ack/reject the message, use the execute method instead.
template.execute(channel -> {
    GetResponse got = channel.basicGet("foo", false);
    // ...
    channel.basicAck(got.getEnvelope().getDeliveryTag(), false);
    return null;
});
It's a bit lower level, but again, most of the boilerplate is taken care of for you.

Request-response pattern using the Spring AMQP library

Hello everyone. I have an HTTP API for posting messages to a RabbitMQ broker, and I need to implement the request-response pattern in order to receive the responses from the server. So I am something like a bridge between the clients and the server. I push the messages to the broker with a specific routing key, and there is a consumer for those messages which publishes messages back as responses, and my API must consume the response for every request.
So what I do is the following: for every HTTP session I create a temporary responseQueue (which is bound to the default exchange, with the name of that queue as the routing key), after that I set the replyTo header of the message to the name of the response queue (where I will wait for the response) and also set the template replyQueue to that queue. Here is my code:
public void sendMessage(AbstractEvent objectToSend, final String routingKey) {
    final Queue responseQueue = rabbitAdmin.declareQueue();
    byte[] messageAsBytes = null;
    try {
        messageAsBytes = new ObjectMapper().writeValueAsBytes(objectToSend);
    } catch (JsonProcessingException e) {
        e.printStackTrace();
    }
    MessageProperties properties = new MessageProperties();
    properties.setHeader("ContentType", MessageBodyFormat.JSON);
    properties.setReplyTo(responseQueue.getName());
    requestTemplate.setReplyQueue(responseQueue);
    Message message = new Message(messageAsBytes, properties);
    Message receivedMessage = (Message) requestTemplate.convertSendAndReceive(routingKey, message);
}
So what is the problem: the message is sent, then it is consumed by the consumer and its response is correctly sent to the right queue, but for some reason it is not picked up by the convertSendAndReceive method, and after the set timeout my receivedMessage is null. So I tried several things. I started to inspect the Spring code (by the way, it's a real nightmare to do that) and saw that if I don't declare the response queue, it creates a temporary one for me and sets the replyTo header to the name of that queue (the same as what I do). The result was the same: the receivedMessage was still null. After that I decided to use another template which uses the default exchange, because the responseQueue is bound to that exchange:
requestTemplate.send(routingKey, message);
Message receivedMessage = receivingTemplate.receive(responseQueue.getName());
The result was the same: the responseMessage was still null.
The versions of spring-amqp and spring-rabbit are 1.2.1 and 1.2.0, respectively. I am sure I am missing something, but I don't know what it is, so if someone can help me I would be extremely grateful.
1> It's strange that RabbitTemplate uses doSendAndReceiveWithFixed if you provide the requestTemplate.setReplyQueue(responseQueue). Looks like it is false in your explanation.
2> To make it work with a fixed reply queue you should configure a reply ListenerContainer:
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
container.setConnectionFactory(rabbitConnectionFactory);
container.setQueues(responseQueue);
container.setMessageListener(requestTemplate);
3> But the most important part here is correlation. RabbitTemplate.sendAndReceive populates the correlationId message property, but the consumer side has to deal with it, too: it's not enough just to send the reply to the responseQueue; the reply message should have the same correlationId property. See here: how to send response from consumer to producer to the particular request using Spring AMQP?
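To illustrate point 3, a hedged sketch of what the consumer side might do when building its reply (the handleRequest method and the injected rabbitTemplate are assumptions, and the exact type of the correlationId property varies between Spring AMQP versions):
// Consumer side: reply to the replyTo queue and copy the correlationId back.
public void onMessage(Message request) {
    MessageProperties requestProps = request.getMessageProperties();

    MessageProperties replyProps = new MessageProperties();
    // The reply must carry the same correlationId as the request.
    replyProps.setCorrelationId(requestProps.getCorrelationId());
    Message reply = new Message(handleRequest(request.getBody()), replyProps); // handleRequest is a placeholder

    // Publish to the default exchange, using the replyTo queue name as the routing key.
    rabbitTemplate.send("", requestProps.getReplyTo(), reply);
}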
BTW, there is no reason to populate the Message manually: you can simply supply a Jackson2JsonMessageConverter to the RabbitTemplate and it will convert your objectToSend to JSON bytes automatically, with the appropriate headers.
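For example, a minimal sketch of that configuration (the connectionFactory bean is assumed to exist):
RabbitTemplate requestTemplate = new RabbitTemplate(connectionFactory);
// Let the template serialize payload objects to JSON instead of building Messages by hand.
requestTemplate.setMessageConverter(new Jackson2JsonMessageConverter());

// The domain object can now be passed directly; conversion and content-type headers are handled for you.
Object reply = requestTemplate.convertSendAndReceive(routingKey, objectToSend);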
