Is there any possibility to read the topic from a JMS message as a consumer? - jms

Summary:
I have created a small Spring Boot application which consumes messages from a Solace instance. In Solace, the publisher maintains a queue that is subscribed to several topics.
As the consumer, I process the messages delivered to that queue. Depending on the original topic that routed a message into the queue, I would like to react differently in my business logic.
That means I need to somehow extract the topic of a message delivered via the Solace queue.
I have already checked the JMS headers/properties, but found nothing related to the topic.
Does anyone have an idea, or has anyone had a similar use case?
A workaround that came to mind was to subscribe directly to all topics and create a separate consumer method per topic, reacting accordingly, but then we would lose the queue features, wouldn't we?

Actually the Destination and JMSDestination headers should contain the topic that the message was published to.
For example, to test this quickly I created a StackOverflowQueue queue with a topic subscription to this/is/a/topic. Upon publishing a message to this/is/a/topic, my consumer, which was listening to the queue, got the topic info in the header.
To quickly test I used the JMS sample here: https://github.com/SolaceSamples/solace-samples-jms/blob/master/src/main/java/com/solace/samples/QueueConsumer.java
Awaiting message...
Message received.
Message Content:
JMSDeliveryMode: 2
JMSDestination: Topic 'this/is/a/topic'
JMSExpiration: 0
JMSPriority: 0
JMSDeliveryCount: 1
JMSTimestamp: 1665667425784
JMSProperties: {JMS_Solace_DeliverToOne:false,JMS_Solace_DeadMsgQueueEligible:false,JMS_Solace_ElidingEligible:false,Solace_JMS_Prop_IS_Reply_Message:false,JMS_Solace_SenderId:Try-Me-Pub/solclientjs/chrome-105.0.0-OSX-10.15.7/3410903749/0001,JMSXDeliveryCount:1}
Destination: Topic 'this/is/a/topic'
SenderId: Try-Me-Pub/solclientjs/chrome-105.0.0-OSX-10.15.7/3410903749/0001
SendTimestamp: 1665667425784 (Thu. Oct. 13 2022 09:23:45.784)
Class Of Service: USER_COS_1
DeliveryMode: PERSISTENT
Message Id: 1
Replication Group Message ID: rmid1:18874-bc0e45b2aa1-00000000-00000001
Binary Attachment: len=12
48 65 6c 6c 6f 20 77 6f 72 6c 64 21 Hello.world!
Note that the sample code doesn't use Spring Boot, but that shouldn't make a difference.
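If you are consuming with Spring, a minimal sketch of how you could branch on the original topic might look like the following; the queue name MY_QUEUE and the topic string are placeholders, the rest is standard JMS/Spring JMS API:

import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Topic;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class QueueListener {

    // "MY_QUEUE" is a placeholder for the Solace queue name
    @JmsListener(destination = "MY_QUEUE")
    public void onMessage(Message message) throws JMSException {
        // JMSDestination carries the topic the message was originally published to
        Destination destination = message.getJMSDestination();
        if (destination instanceof Topic) {
            String topic = ((Topic) destination).getTopicName(); // e.g. this/is/a/topic
            if ("this/is/a/topic".equals(topic)) {
                // handle messages that arrived via this topic
            } else {
                // handle everything else
            }
        }
    }
}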

Related

How to list the messages of a nats jetstream and know if they were acknowledged?

I need to list the messages that were posted to a NATS stream in order to know which ones were not acknowledged.
I have tried looking at the admin API that NATS suggests in its documentation, but it does not specify whether this can be done.
I have also looked at the JetStream library for Go; with it I can get general information about the streams and their consumers, but not the messages that were not acknowledged, and I don't see any functions that give me what I need.
Has anyone already done this, in any programming language?
Acknowledgements are tied to a specific consumer, not a stream.
You can derive the state of acknowledgements from the consumer info, specifically the acknowledgement floor:
nats consumer info
State:
Last Delivered Message: Consumer sequence: 8 Stream sequence: 158 Last delivery: 13m59s ago
Acknowledgment floor: Consumer sequence: 4 Stream sequence: 154 Last Ack: 13m59s ago
Outstanding Acks: 2 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 42
Waiting Pulls: 0 of maximum 512
This information is available in the NATS CLI and most client libraries.
There is no way to directly see the list of acknowledged messages.
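If you want the same numbers programmatically, a rough sketch with the jnats (Java) client could look like this; the stream and consumer names are placeholders, and the accessor names (getAckFloor(), getNumAckPending(), getNumPending()) should be verified against the client version you use:

import io.nats.client.Connection;
import io.nats.client.JetStreamManagement;
import io.nats.client.Nats;
import io.nats.client.api.ConsumerInfo;

public class AckFloorCheck {
    public static void main(String[] args) throws Exception {
        try (Connection nc = Nats.connect("nats://localhost:4222")) {
            JetStreamManagement jsm = nc.jetStreamManagement();

            // "ORDERS" and "orders-consumer" are placeholders for your stream/consumer names
            ConsumerInfo ci = jsm.getConsumerInfo("ORDERS", "orders-consumer");

            // Everything at or below the ack floor has been acknowledged
            System.out.println("Ack floor (stream seq):  " + ci.getAckFloor().getStreamSequence());
            System.out.println("Delivered (stream seq):  " + ci.getDelivered().getStreamSequence());
            System.out.println("Outstanding acks:        " + ci.getNumAckPending());
            System.out.println("Unprocessed messages:    " + ci.getNumPending());
        }
    }
}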

Spring cloud stream - Kafka consumer consuming duplicate messages with StreamListener

With our Spring Boot app, we notice the Kafka consumer consuming a message twice, randomly, once in a while, and only in the prod environment. We have 6 instances with 6 partitions deployed in PCF. We have caught messages with the same offset and partition received twice on the same topic, which causes duplicates and is business critical for us.
We haven't noticed this in a non-production environment, and it is hard to reproduce there. We recently switched to Kafka and have not been able to find the root cause.
We are using spring-cloud-stream / spring-cloud-stream-binder-kafka 2.1.2.
Here is the Config:
spring:
  cloud:
    stream:
      default.consumer.concurrency: 1
      default-binder: kafka
      bindings:
        channel:
          destination: topic
          content_type: application/json
          autoCreateTopics: false
          group: group
          consumer:
            maxAttempts: 1
      kafka:
        binder:
          autoCreateTopics: false
          autoAddPartitions: false
          brokers: brokers list
        bindings:
          channel:
            consumer:
              autoCommitOnError: true
              autoCommitOffset: true
              configuration:
                max.poll.interval.ms: 1000000
                max.poll.records: 1
                group.id: group
We use @StreamListener to consume the messages.
Here is an instance where we received a duplicate, along with the error message captured in the server logs.
ERROR 46 --- [container-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator
: [Consumer clientId=consumer-3, groupId=group] Offset commit failed
on partition topic-0 at offset 1291358: The coordinator is not aware
of this member. ERROR 46 --- [container-0-C-1]
o.s.kafka.listener.LoggingErrorHandler : Error while processing:
null OUT org.apache.kafka.clients.consumer.CommitFailedException:
Commit cannot be completed since the group has already rebalanced and
assigned the partitions to another member. This means that the time
between subsequent calls to poll() was longer than the configured
max.poll.interval.ms, which typically implies that the poll loop is
spending too much time message processing. You can address this
either by increasing the session timeout or by reducing the maximum
size of batches returned in poll() with max.poll.records.
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:871)
~[kafka-clients-2.0.1.jar!/:na]
There is no crash and all the instances are healthy at the time of the duplicate. The error log is also confusing - "Error while processing: null" - since the message was successfully processed (twice). And max.poll.interval.ms is 1000000, which is about 16 minutes and should be more than enough time for the system to process any message; the session timeout and heartbeat configs are at their defaults. The duplicate is received within 2 seconds on most of the instances.
Any configs that we are missing ? Any suggestion/help is highly appreciated.
Commit cannot be completed since the group has already rebalanced
A rebalance occurred because your listener took too long; you should adjust max.poll.records and max.poll.interval.ms to make sure you can always handle the records received within the time limit.
In any case, Kafka does not guarantee exactly once delivery, only at least once delivery. You need to add idempotency to your application and detect/ignore duplicates.
Also, keep in mind that StreamListener and the annotation-based programming model have been deprecated for 3+ years and have been removed from the current main branch, which means the next release will not have them. Please migrate your solution to the functional programming model.
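As an illustration only, a functional-style binding that replaces the @StreamListener and adds a simple duplicate check might look like the sketch below; the binding name channel, the String payload type, and the in-memory dedupe set are assumptions for the example (in production, and across rebalanced instances, you would back the dedupe check with something durable such as a database keyed by a business identifier):

import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;

@Configuration
public class ChannelConsumerConfig {

    // In-memory record of already-processed partition/offset pairs (illustrative only)
    private final Set<String> processed = Collections.newSetFromMap(new ConcurrentHashMap<>());

    // Bind with: spring.cloud.function.definition=channel
    // and: spring.cloud.stream.bindings.channel-in-0.destination=topic
    @Bean
    public Consumer<Message<String>> channel() {
        return message -> {
            Object partition = message.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID);
            Object offset = message.getHeaders().get(KafkaHeaders.RECEIVED_OFFSET);
            String key = partition + "-" + offset;

            if (!processed.add(key)) {
                // Same partition/offset seen before: treat as a redelivery and skip it
                return;
            }
            // ... business logic for message.getPayload() ...
        };
    }
}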

Spring Kafka - increase in concurrency produces more duplicate

Topic A was created with 12 partitions, and a Spring Kafka consumer was started with concurrency 10 as a Spring Boot application. When putting 1,000 messages into the topic there are no duplicates; all 1,000 messages get consumed. But when pushing more load, 10K messages at 100 TPS, the consumer end receives the 10K messages plus about 8.5K duplicates with concurrency 10, while with concurrency 1 it works fine (no duplicates found).
Auto commit is disabled and a manual ack is done after processing each message.
Processing time for one message is 300 milliseconds.
A consumer rebalance is occurring, and because of that duplicates are produced.
How do we overcome this situation when handling more messages in Kafka?

Spring kafka consumer stops receiving message

I have a Spring microservice using Kafka.
Here are the consumer's 5 config properties:
BOOTSTRAP_SERVERS_CONFIG -> <ip>:9092
KEY_DESERIALIZER_CLASS_CONFIG -> StringDeserializer.class
VALUE_DESERIALIZER_CLASS_CONFIG -> StringDeserializer.class
GROUP_ID_CONFIG -> "Group1"
MAX_POLL_INTERVAL_MS_CONFIG -> Integer.MAX_VALUE
It has been observed that when the microservice is restarted, the Kafka consumer stops receiving messages. Please help me with this.
I believe your max.poll.interval.ms is the issue. It is set to roughly 24 days! This represents the time the consumer is given to process the messages from a poll. The broker will hang for that long when the processing thread dies! Try setting it to a smaller value than Integer.MAX_VALUE, for example 30 seconds (30000 ms).
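For illustration, lowering the value in a plain consumer config might look like this; the bootstrap server placeholder, group id, and the 30-second figure simply reuse the values discussed above:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerFactoryExample {
    public static KafkaConsumer<String, String> buildConsumer() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<ip>:9092"); // placeholder broker address
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "Group1");
        // 30 seconds instead of Integer.MAX_VALUE: a dead processing thread triggers
        // a rebalance after 30s rather than ~24 days
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 30000);
        return new KafkaConsumer<>(props);
    }
}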

Filtering Elastic Beanstalk SNS topic

Currently I have an SNS topic for my ElasticBeanstalk instance. Deployments and health status transitions are posted to this topic, arn:aws:sns:us-east-1:309321511178:ElasticBeanstalkNotifications-Environment-Myapp.
Next a lambda function subscribes to the topic and posts to a slack channel.
However, I'd like to filter these messages down to only deployments and transitions to Severe status.
I guess the filter policy of the SNS topic would be the way to do this, but I'm not sure exactly what JSON would be needed to get the results I desire.
You can set up monitoring in EB with a threshold of "Maximum Environment Health" >= 20.
Below are the values for the different statuses:
0 (Ok), 1 (Info), 5 (Unknown), 10 (No data), 15 (Warning), 20 (Degraded), 25 (Severe)
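As a rough sketch of that approach with the AWS SDK for Java v2: the alarm name, environment name, and the use of the EnvironmentHealth metric in the AWS/ElasticBeanstalk namespace (available with enhanced health reporting) are assumptions to verify against your setup; the topic ARN is the one from the question.

import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.ComparisonOperator;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricAlarmRequest;
import software.amazon.awssdk.services.cloudwatch.model.Statistic;

public class SevereHealthAlarm {
    public static void main(String[] args) {
        try (CloudWatchClient cw = CloudWatchClient.create()) {
            cw.putMetricAlarm(PutMetricAlarmRequest.builder()
                    .alarmName("myapp-environment-degraded")
                    .namespace("AWS/ElasticBeanstalk")   // enhanced health metrics namespace
                    .metricName("EnvironmentHealth")     // 0=Ok ... 20=Degraded, 25=Severe
                    .dimensions(Dimension.builder()
                            .name("EnvironmentName")
                            .value("Myapp-env")          // placeholder environment name
                            .build())
                    .statistic(Statistic.MAXIMUM)
                    .period(60)
                    .evaluationPeriods(1)
                    .threshold(20.0)                     // fire on Degraded (20) or Severe (25)
                    .comparisonOperator(ComparisonOperator.GREATER_THAN_OR_EQUAL_TO_THRESHOLD)
                    .alarmActions("arn:aws:sns:us-east-1:309321511178:ElasticBeanstalkNotifications-Environment-Myapp")
                    .build());
        }
    }
}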
