RabbitMQ cannot get current message count sometimes - spring-rabbit

There are messages in the RabbitMQ queue, but sometimes I can't get the current message count for the queue: I always get a count of 0 between 2 am and 6 am, and after that it goes back to normal.
[code screenshot]
[log screenshot: "rabbit mq can not get current message counts"]
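Since the original code is only available as a screenshot, here is a minimal sketch of how the count is typically read with spring-rabbit; the queue name "my-queue" and the use of RabbitAdmin are assumptions, not taken from the question.

import java.util.Properties;

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;

public class QueueDepthCheck {
    public static void main(String[] args) {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
        RabbitAdmin admin = new RabbitAdmin(cf);
        // getQueueProperties performs a passive declare; it returns null if the queue does not exist
        Properties props = admin.getQueueProperties("my-queue");
        if (props != null) {
            System.out.println("Ready messages: " + props.get(RabbitAdmin.QUEUE_MESSAGE_COUNT));
        }
        cf.destroy();
    }
}

Note that the count returned by a passive declare only covers messages in the ready state; messages that have been delivered but not yet acknowledged are not included, which is one possible reason for seeing a smaller count than expected.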

Related

How to list the messages of a nats jetstream and know if they were acknowledged?

I need to list the messages that were published to a NATS stream in order to know which ones were not acknowledged.
I have tried looking at the admin API that NATS suggests in its documentation, but it does not specify whether this can be done or not.
I have also looked at the JetStream library for Go; with it I can get general information about the streams and their consumers, but not the messages that were not acknowledged, and I don't see any functions that give me what I need.
Has anyone already done this, in any programming language?
Acknowledgements are tied to a specific consumer, not a stream.
You can derive the state of acknowledgements from the consumer info, specifically the acknowledgment floor:
nats consumer info
State:
Last Delivered Message: Consumer sequence: 8 Stream sequence: 158 Last delivery: 13m59s ago
Acknowledgment floor: Consumer sequence: 4 Stream sequence: 154 Last Ack: 13m59s ago
Outstanding Acks: 2 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 42
Waiting Pulls: 0 of maximum 512
This is available in the NATS CLI and most client libraries.
There is no way to directly see the list of acknowledged messages.
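As one illustration (the question is language-agnostic), here is a rough sketch using the NATS Java client (jnats); the stream name "ORDERS", the consumer name "processor", and the server URL are placeholders, and the exact accessor names may vary slightly between client versions.

import io.nats.client.Connection;
import io.nats.client.JetStreamManagement;
import io.nats.client.Nats;
import io.nats.client.api.ConsumerInfo;

public class AckFloorCheck {
    public static void main(String[] args) throws Exception {
        try (Connection nc = Nats.connect("nats://localhost:4222")) {
            JetStreamManagement jsm = nc.jetStreamManagement();
            // Consumer info carries the acknowledgment floor plus the pending/outstanding counts
            ConsumerInfo info = jsm.getConsumerInfo("ORDERS", "processor");
            System.out.println("Ack floor (stream seq): " + info.getAckFloor().getStreamSequence());
            System.out.println("Outstanding acks: " + info.getNumAckPending());
            System.out.println("Unprocessed messages: " + info.getNumPending());
        }
    }
}

Everything at or below the ack floor's stream sequence has been acknowledged by that consumer; messages between the ack floor and the last delivered sequence are still outstanding.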

How to limit Message consumption rate of Kafka Consumer in SpringBoot? (Kafka Stream)

I want to limit my Kafka consumer's message consumption rate to 1 message per 10 seconds. I'm using Kafka Streams in Spring Boot.
Following are the properties I tried in order to make this work, but they didn't work as expected (many messages were consumed at once).
config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, brokersUrl);
config.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
//
config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG,1);
config.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 10000);
Is there any way to manually ack (manual offset commits) in Kafka Streams? That would be useful for controlling the message consumption rate.
Please note that I'm using KStreams (Kafka Streams).
Any help is really appreciated. :)
I think you misunderstand what MAX_POLL_INTERVAL_MS_CONFIG actually does.
That is the maximum allowed time for the client to read an event.
From the docs:
controls the maximum time between poll invocations before the consumer will proactively leave the group (5 minutes by default). The value of the configuration request.timeout.ms (default to 30 seconds) must always be smaller than max.poll.interval.ms (default to 5 minutes), since that is the maximum time that a JoinGroup request can block on the server while the consumer is rebalancing.
It says "maximum time", not a "delay" between poll invocations.
Kafka Streams will constantly poll; you cannot easily pause/start it and delay record polling.
To read an event every 10 seconds without losing consumers in the group due to missed heartbeats, you should use the Consumer API: call pause(), sleep for 10 seconds (e.g. Thread.sleep(10_000)), then resume() and poll() again, while setting max.poll.records=1.
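A minimal sketch of that pause/sleep/resume loop with the plain KafkaConsumer; the broker address, the topic name "events", and the group id are placeholders, not taken from the question.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ThrottledConsumer {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "throttled-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                if (!records.isEmpty()) {
                    // Pause the assigned partitions, wait out the 10-second window,
                    // then resume before the next poll.
                    consumer.pause(consumer.assignment());
                    Thread.sleep(10_000);
                    consumer.resume(consumer.assignment());
                }
            }
        }
    }
}

With default settings the background heartbeat thread keeps the consumer in the group while it sleeps; the limit to watch is max.poll.interval.ms, whose 5-minute default comfortably exceeds the 10-second pause.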
Finally, I achieved the desired message consumption limit using Thread.sleep().
Since there is no way to control the message consumption rate with Kafka config properties alone, I had to control the rate of consumption in my application code.
Example: if I want to limit the record consumption rate to, say, 4 messages per 10 seconds, I simply consume 4 messages (keeping a count in parallel); once 4 records have been consumed, I make the thread sleep for 10 seconds, then repeat the same process over again.
I know it's not a good solution, but there was no other way.
Thank you, OneCricketeer.
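A rough sketch of that count-and-sleep idea applied inside a Kafka Streams topology (since the poster is on KStreams); the topic names, the 4-message threshold, and the 10-second window are illustrative assumptions.

import java.util.concurrent.atomic.AtomicInteger;

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class ThrottledTopology {
    public static StreamsBuilder build() {
        StreamsBuilder builder = new StreamsBuilder();
        AtomicInteger counter = new AtomicInteger();
        KStream<String, String> stream = builder.stream("input-topic");
        stream.peek((key, value) -> {
            // After every 4th record, block the stream thread for 10 seconds.
            if (counter.incrementAndGet() % 4 == 0) {
                try {
                    Thread.sleep(10_000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).to("output-topic");
        return builder;
    }
}

Sleeping inside the topology blocks that stream thread entirely, so the sleep must stay well below max.poll.interval.ms, the same caveat as with the Consumer API approach above.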

Spring Kafka - increase in concurrency produces more duplicate

Topic A was created with 12 partitions, and a Spring Kafka consumer was started with concurrency 10 as a Spring Boot application. When putting 1,000 messages onto the topic there was no issue with duplicates: all 1,000 messages were consumed. But when pushing more load, 10K messages at 100 TPS, the consumer end received the 10K messages plus about 8.5K duplicates with concurrency 10, while with concurrency 1 it works fine (no duplicates found).
Auto commit is disabled and a manual ack is done after processing each message.
Processing time for one message is 300 milliseconds.
Consumer rebalances are occurring, and because of that the duplicates are produced.
How can we overcome this situation when handling more messages in Kafka?
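For context, a minimal sketch of the kind of setup being described (manual ack after processing, concurrency 10); the topic name "topic-a", the group id, and the bean wiring are assumptions, not taken from the post.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.support.Acknowledgment;

@Configuration
public class TopicAListenerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setConcurrency(10); // 10 listener threads against 12 partitions
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
        return factory;
    }

    @KafkaListener(topics = "topic-a", groupId = "topic-a-group")
    public void listen(String message, Acknowledgment ack) {
        process(message); // ~300 ms of work per message
        ack.acknowledge();
    }

    private void process(String message) {
        // business logic
    }
}

When a rebalance happens after a record has been processed but before its offset is committed, the new owner of the partition reads that record again, which is where rebalance-induced duplicates come from.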

KafkaConsumer poll() behavior understanding

Trying to understand (new to Kafka) how the poll event loop in Kafka works.
Use case: 25 records on the topic, max poll size is set to 5.
max.poll.interval.ms = 5000 // 5 seconds
max.poll.records = 5
Sequence of tasks
Poll the records from the topic.
Process the records in a for loop.
Some processing logic that would either pass or fail.
If the logic passes, the record (with its offset) is added to a map.
Then it is committed using a commitSync call.
If it fails, the loop breaks and whatever succeeded before this point is committed. The problem starts after this.
The next poll just keeps moving on in batches of 5 even after the error; is that expected?
What we basically expect is that the loop breaks, the offsets up to the last successfully processed message get committed, and the next poll continues from the failed message.
Example: in the first poll, 5 messages are polled; offsets 1 and 2 succeed and are committed, then the 3rd fails. Yet the poll calls keep moving to the next batches (5-10, 10-15). If there is an error in between, we expect it to stop at that point: the next poll should start from offset 3 in the first case, or, if it fails at offset 8 in the second batch, from offset 8, not from the start of the next max-poll batch (5 records later, given the settings). If it matters, this is a Spring Boot project and enable.auto.commit is false.
I have tried finding this in the documentation, but with no luck.
I also tried tweaking max.poll.interval.ms, but it didn't help.
EDIT: I did not accept the answer because there is no direct solution for a custom consumer. Keeping this for informational purposes.
max.poll.interval.ms is milliseconds, not seconds so it should be 5000.
Once the records have been returned by the poll (and offsets not committed), they won't be returned again unless you restart the consumer or perform seek() operations on the consumer to reset the offset to the unprocessed ones.
The Spring for Apache Kafka project provides a SeekToCurrentErrorHandler to perform this task for you.
If you are using the consumer yourself (which it sounds like), you must do the seeks.
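A brief sketch of wiring that error handler into a Spring Kafka 2.x container factory; the bean names are illustrative, and newer Spring Kafka versions provide DefaultErrorHandler for the same behaviour.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;

@Configuration
public class ErrorHandlingConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> retryingFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // On a listener exception, seek the unprocessed records back so the
        // failed record is redelivered by the next poll instead of being skipped.
        factory.setErrorHandler(new SeekToCurrentErrorHandler());
        return factory;
    }
}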
You can manually seek back to the first offset of the poll for all the assigned partitions on failure. I am not sure how to do this with the Spring consumer.
Below is sample code for seeking the offset back to the beginning of the batch with a plain consumer.
In the code below I get the list of records per partition and then take the offset of the first record to seek to.
import scala.collection.JavaConverters._

def seekBack(records: ConsumerRecords[String, String]): Unit = {
  // For each partition in this poll, seek back to the offset of its first record
  // so the whole batch is redelivered by the next poll.
  records.partitions().asScala.foreach { partition =>
    val partitionedRecords = records.records(partition)
    val offset = partitionedRecords.get(0).offset()
    consumer.seek(partition, offset)
  }
}
One problem: doing this in production is risky, since you don't want to seek back every time, only when you hit a transient error; otherwise you will end up retrying infinitely.

How to set consumer to start from a specific offset in Golang Kafka 10

My need is to make the consumer start from the last message it processed before it crashed. Fortunately, I am in the simple case of having only one topic, with one partition and one consumer.
To do so I tried https://github.com/Shopify/sarama, but this doesn't seem to be available there yet.
I am now using https://godoc.org/github.com/bsm/sarama-cluster, which allows me to commit every message offset.
However, I cannot retrieve the last committed offset,
and I cannot figure out how to make a sarama consumer start from said offset. The only parameter I've found so far is Config.Consumer.Offsets.Initial.
How do I retrieve the last committed offset?
How do I make the consumer start from the last message whose offset has been committed? OffsetNewest will make it start from the last message produced, not the last message processed by the consumer.
Is it possible to do this using only Shopify/sarama and not bsm/sarama-cluster?
Thanks in advance.
P.S. I am using Kafka 0.10, so the offsets are stored in Kafka and not in ZooKeeper.
EDIT1:
Partial solution: fetch all the messages since sarama.OffsetOldest and skip them until we find one that has not been processed yet.
If an offset was already saved for a partition, sarama-cluster will resume consumption from that offset. The Config.Consumer.Offsets.Initial option is used only if no saved offset is present (the first run for a consumer group).
You can verify this by adding the following line at the beginning of your main() function:
sarama.Logger = log.New(os.Stdout, "sarama: ", log.LstdFlags)
Then you'll see something like the following in the output:
cluster/consumer CID-17db1be4-a162-411c-a106-4d198191176a consume sample/0 from 12
The 12 there is the offset Sarama is going to start from for that partition (sample/0).
