Got error "CODE: 22 Not found, V3_0_6_SNAPSHOT maybe this group consumer boot first" when a consumer group boot,RocketMQ - rocketmq

Version: 3.2.6
Consumer type: PullConsumer
When a new consumer boots, I try to fetch the consumer offset from MQ:
long offset = pullConsumer.fetchConsumeOffset(mq, true);
But sometimes this returns -1, and I see the error:
CODE: 22 Not found, V3_0_6_SNAPSHOT maybe this group consumer boot first
in the error log.

This only happens when a completely new consumer group boots and either of the following conditions holds:
min offset > 0, indicating that the topic/queue is an old one from which messages have already been deleted.
The message at consume offset 0 is judged by checkInDiskByCommitOffset to require reading from disk; RocketMQ reasons that if you consume from 0, many messages will be served from disk rather than from the page cache.
When this happens, the client is responsible for deciding where to start consuming. Probably from 0, but you may then suffer from reading many messages from disk. One way to handle the -1 result is sketched below.
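A minimal sketch, assuming a DefaultMQPullConsumer that is already started (the fallback policy here is an assumption, not RocketMQ's prescription; in the 3.x line these classes live under com.alibaba.rocketmq.*):
long resolveStartOffset(DefaultMQPullConsumer consumer, MessageQueue mq) throws MQClientException {
    long offset = consumer.fetchConsumeOffset(mq, true);
    if (offset >= 0) {
        return offset; // the broker already has a committed offset for this group
    }
    // -1: brand-new group. Assumption: start at the tail to avoid pulling the
    // whole backlog from disk; return consumer.minOffset(mq) instead if every
    // retained message must be processed.
    return consumer.maxOffset(mq);
}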

Related

How to ensure that JmsTemplate caches consumers (i.e. com.ibm.mq.jms.MQQueueReceiver)?

I am facing a scenario where the reply queue I connect to runs out of handles. I have traced it to the fact that my JMS producers are being cached but my JMS consumers are not. I am able to send and receive messages just fine, so there is no problem with connecting, sending, or receiving to/from the queues. I am using the CachingConnectionFactory (sessionCacheSize = 10) with com.ibm.mq.jms.MQQueueConnectionFactory as the target factory while instantiating the jmsTemplate. The code snippet is as follows:
:
:
String replyQueue = "MyQueue";// replyQueue which runs out of handles
messageCreator.setReplyToQueue(new MQQueue(replyQueue));
jmsTemplate.setReceiveTimeout(receiveTimeout);
jmsTemplate.send(destination, messageCreator);// Send to destination queue
Message message = jmsTemplate.receiveSelected(replyQueue,
String.format("JMSCorrelationID = '%s'", messageCreator.getMessageId()));
:
:
From the logs (JMS TRACE is enabled), the producer is cached, so the destination queue's handle count does not increase.
// The first time around (for the producer)
Registering cached JMS MessageProducer for destination [queue:///<destination>]: com.ibm.mq.jms.MQQueueSender#c9x758b
// Second time around, the cached producer is reused
Found cached JMS MessageProducer for destination [queue:///<destination>]: com.ibm.mq.jms.MQQueueSender#c9x758b
However, the handles for the replyQueue keep increasing, because on every call to that queue a new JMS consumer is registered. Ultimately the calls to open the replyQueue fail with MQRC_HANDLE_NOT_AVAILABLE.
// First time around
Registering cached JMS MessageConsumer for destination [queue:///<replyQueue>]: com.ibm.mq.jms.MQQueueReceiver#b3ytd25b
// Second time around, another MessageConsumer is registered!
Registering cached JMS MessageConsumer for destination [queue:///<replyQueue>]: com.ibm.mq.jms.MQQueueReceiver#re25b
My memory is a bit dim on this, but here is what is happening. You are receiving messages based on a message selector, and that selector is always changing. As a test, either remove the selector or make it a constant and see what happens. When caching/pooling is keyed by connection/session/consumer, the consumer key is always different, so every receive requires a new cache entry.
After you go through your 10 sessions, a new connection is created, but the existing one is not closed. Increase your session cache size to 100, for example, and the connection count on the MQ broker should climb ten times more slowly.
You need to create a new consumer for every message receive because your correlation ID is always changing, so just cache the connection/session. No matter what you do, you will always have a round trip to the broker for each new correlation ID. One way to keep the cache from accumulating consumers is sketched below.
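If you are on Spring's CachingConnectionFactory, a minimal sketch of that idea is to cache sessions and producers but disable consumer caching, so the selector-based consumers are closed after use (the wiring below is illustrative):
MQQueueConnectionFactory targetFactory = new MQQueueConnectionFactory();
// ... host/port/queue-manager setup elided ...
CachingConnectionFactory ccf = new CachingConnectionFactory(targetFactory);
ccf.setSessionCacheSize(10);
ccf.setCacheProducers(true);  // producers are safe to cache
ccf.setCacheConsumers(false); // consumers with ever-changing selectors are not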

How to retry a Kafka message when there is an error (Spring Cloud Stream)

I'm pretty new to Kafka. I'm using Spring Cloud Stream Kafka to produce and consume:
@StreamListener(Sink.INPUT)
public void process(Order order) {
    try {
        // my message processing
    } catch (Exception e) {
        // retry that record here...
    }
}
Just want to know how I can implement a retry. Any help on this is highly appreciated.
Hi,
There are multiple ways to handle retries, and it depends on the kind of errors you encounter.
For basic issues, the Kafka framework will retry for you to recover from an error condition; for example, in case of a short network outage, the consumer and producer APIs implement automatic retries.
In particular, Kafka supports built-in producer/consumer retries to correctly handle a large variety of errors without losing messages, but as a developer you must still be able to handle other types of errors with the try-catch block you mention. (For Spring Cloud Stream specifically, see the binder retry settings sketched below.)
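Since the question uses Spring Cloud Stream, note that the binder ships its own consumer-side retry; a minimal sketch of the relevant binding properties ("input" matches Sink.INPUT, the values are examples):
spring.cloud.stream.bindings.input.consumer.maxAttempts=5
spring.cloud.stream.bindings.input.consumer.backOffInitialInterval=1000
spring.cloud.stream.bindings.input.consumer.backOffMaxInterval=10000
spring.cloud.stream.bindings.input.consumer.backOffMultiplier=2.0
With maxAttempts > 1, the listener method is re-invoked with backoff before the failure is finally propagated.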
Errors in Kafka can be divided into the following categories:
(producer & consumer side) Non-retriable broker errors, such as errors regarding message size, authorization errors, etc. -> you must handle them in the design phase of your app.
(producer side) Errors that occur before the message is sent to the broker, for example serialization errors -> you must handle them at runtime during app execution.
(producer & consumer side) Errors that occur when the producer has exhausted all retry attempts, or when the available memory of the producer is filled to the limit because all of it is used to store messages awaiting retry -> you should handle these errors.
Another point of attention regarding how to retry is handling the order of commits correctly when the auto-commit option is set to false.
A common and simple pattern to get commit order right is to use a monotonically increasing sequence number: increase the sequence number every time you commit, and record the sequence number at commit time in the commit callback. When you are getting ready to send a retry, check whether the sequence number the callback received equals the instance variable; if it does, there was no newer commit and it is safe to retry. If the instance sequence number is higher, do not retry, because a newer commit was already sent. A sketch of this guard follows.
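A minimal sketch of that sequence-number guard against the plain Kafka consumer API (the class name and unbounded-retry policy are illustrative):
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class GuardedCommitter {
    private final AtomicLong commitSeq = new AtomicLong();

    // Commit asynchronously; on failure, retry only if no newer commit was sent meanwhile.
    public void commit(KafkaConsumer<?, ?> consumer,
                       Map<TopicPartition, OffsetAndMetadata> offsets) {
        final long seqAtSend = commitSeq.incrementAndGet();
        consumer.commitAsync(offsets, (committed, exception) -> {
            if (exception != null && seqAtSend == commitSeq.get()) {
                commit(consumer, offsets); // still the newest commit, safe to retry
            }
            // otherwise a newer commit has superseded these offsets: do not retry
        });
    }
}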

Possible Reasons of Reconsuming Kafka Messages

Yesterday I found from the log that Kafka was reconsuming some messages after the Kafka group coordinator initiated a group rebalance. These messages had already been consumed two days earlier (confirmed from the log).
There were two other rebalances reported in the log, but they didn't reconsume messages. So why did the first rebalance cause messages to be reconsumed? What was the problem?
I am using the Golang Kafka client (sarama). Here is the code:
config := sarama.NewConfig()
config.Version = version
config.Consumer.Offsets.Initial = sarama.OffsetOldest
and we handle messages before marking them, so it seems we are using the at-least-once delivery strategy for Kafka. We have three brokers on one machine, and only one consumer thread (goroutine) on the other machine.
Any explanation for this phenomenon?
I think the messages must have been committed, since they were consumed two days ago; otherwise, why would Kafka keep offsets for more than two days without committing?
Consuming code sample:
func (consumer *Consumer) ConsumeClaim(session sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
    for message := range claim.Messages() {
        realHandler(message)             // consume the data here
        session.MarkMessage(message, "") // mark the offset
    }
    return nil
}
Added:
Rebalancing happened after the app restarted. There were two other restarts which didn't cause reconsuming.
Kafka configs:
log.retention.check.interval.ms=300000
log.retention.hours=168
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
auto.create.topics.enable=false
By reading the source code of both the Golang sarama client and the Kafka server, I finally found the reason, as follows.
Consumer group offset retention time is 24 hours, which is a default Kafka setting, while log retention is 7 days, set explicitly by us.
My server app is running in a test environment that few people visit, which means few messages are produced, so the consumer group has few messages to consume and thus may not commit any offset for a long time.
When the consume offset has not been updated for more than 24 hours, the Kafka broker/coordinator removes the group's offsets from the partitions. The next time sarama asks the broker where the offset is, the client of course gets nothing. Since we use sarama.OffsetOldest as the initial value, the sarama client then consumes from the earliest message the broker still retains, which results in messages being reconsumed; and this is likely to happen precisely because log retention is 7 days. A broker-side fix is sketched below.
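If this diagnosis applies, one fix is to align the two retention windows on the broker (offsets.retention.minutes is the relevant broker setting; its default was raised from 24 hours to 7 days in Kafka 2.0):
# server.properties: keep committed offsets as long as the log is retained
offsets.retention.minutes=10080   # 7 days, matching log.retention.hours=168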

Kafka Producer Thread: huge number of threads even when no message is sent

I profiled my Kafka producer Spring Boot application and found many "kafka-producer-network-thread"s running (47 in total), which never stop running, even when no data is being sent.
My application looks a bit like this:
var kafkaSender = KafkaSender(kafkaTemplate, applicationProperties)
kafkaSender.sendToKafka(json, rs.getString("KEY"))
with the KafkaSender:
@Service
class KafkaSender(val kafkaTemplate: KafkaTemplate<String, String>, val applicationProperties: ApplicationProperties) {

    @Transactional(transactionManager = "kafkaTransactionManager")
    fun sendToKafka(message: String, stringKey: String) {
        kafkaTemplate.executeInTransaction { kt ->
            kt.send(applicationProperties.kafka.topic, System.currentTimeMillis().mod(10).toInt(),
                    System.currentTimeMillis().rem(10).toString(), message)
        }
    }

    companion object {
        val log = LoggerFactory.getLogger(KafkaSender::class.java)!!
    }
}
Since I instantiate a new KafkaSender each time I want to send a message to Kafka, I thought a new thread would be created which then sends the message to the Kafka queue.
Currently it looks like a pool of producers is created but never cleaned up, even when none of them has anything to do.
Is this behaviour intended?
In my opinion the behaviour should be much like datasource pooling: keep the thread alive for some time, and clean it up when there is nothing to do.
When using transactions, the producer cache grows on demand and is not reduced.
If you are producing messages on a listener container (consumer) thread, there is a producer for each topic/partition/consumer group. This is required to solve the zombie-fencing problem, so that if a rebalance occurs and the partition moves to a different instance, the transactional id remains the same and the broker can properly handle the situation.
If you don't care about the zombie-fencing problem (and you can handle duplicate deliveries), set the producerPerConsumerPartition property to false on the DefaultKafkaProducerFactory and the number of producers will be much smaller; see the sketch below.
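A minimal sketch of that setting (the bean wiring and the transaction-id prefix are illustrative assumptions, not taken from the question):
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.ProducerFactory;

@Bean
public ProducerFactory<String, String> producerFactory(KafkaProperties props) {
    DefaultKafkaProducerFactory<String, String> pf =
            new DefaultKafkaProducerFactory<>(props.buildProducerProperties());
    pf.setTransactionIdPrefix("tx-");          // assumption: transactions are in use
    pf.setProducerPerConsumerPartition(false); // shrink the per-group/partition producer cache
    return pf;
}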
EDIT
Starting with version 2.8, the default EOSMode is now V2 (aka BETA), which means it is no longer necessary to have a producer per topic/partition/group, as long as the broker version is 2.5 or later.

Changing state of messages which are "in delivery"

In my application, I have a queue (HornetQ) set up on JBoss 7 AS.
I have used Spring Batch to do some work once a message is received (save values in the database, etc.), and then the consumer commits the JMS session.
Sometimes, when there is an exception while processing a message, the execution of the consumer is aborted abruptly, and the message remains in the "in delivery" state. There are about 30 messages in this state on my production queue.
I have tried restarting the consumer, but the state of these messages does not change. The only way to remove these messages from the queue is to restart the queue, but before doing that I want a way to read these messages so that they can be corrected and sent to the queue again to be processed.
I have tried using QueueBrowser to read them, but it does not work. I have searched a lot on Google but could not find any way to read these messages.
I am using a transacted session; once the message is processed, I call:
session.commit();
This sends the acknowledgement.
I am implementing Spring's org.springframework.jms.listener.SessionAwareMessageListener to receive messages and then process them.
While processing the messages, I use Spring Batch to insert some data into the database.
In one particular case, it tries to insert data too big for the column; an exception is thrown and the transaction is aborted.
Now I have fixed my producer and consumer so that such data can no longer occur. But my question is: what about the 30 "in delivery" messages in my production queue? I want to read them so that they can be corrected and sent to the queue again to be processed. Is there any way to read these messages? Once I know their content, I can restart the queue and submit them again (after correcting them).
Thanking you in anticipation,
Suvarna
It all depends on the transaction mode you are using.
For instance, if you use transactions:
// session here is a TX session
MessageConsumer cons = session.createConsumer(someQueue);
connection.start(); // note: start() lives on the Connection, not the Session
Message msg = cons.receive...
session.rollback(); // this will make the messages be redelivered
If you are using non-TX:
// session here is auto-ack
MessageConsumer cons = session.createConsumer(someQueue);
connection.start();
// this means the message is ACKed as it is received (auto-ACK)
Message msg = cons.receive...
// however, the consumer may still hold a buffer of messages from the server...
// if you are not using the consumer any longer, close it
cons.close(); // this will release the messages in the client buffer
Alternatively, you could also set consumerWindowSize=0 on the connection factory; see the sketch below.
This was documented for 2.2.5, and it never changed in the following releases:
http://docs.jboss.org/hornetq/2.2.5.Final/user-manual/en/html/flow-control.html
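A rough sketch of that alternative with the HornetQ client API (the 2.3-era factory method is shown; exact constructors vary by version):
import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.jms.HornetQJMSClient;
import org.hornetq.api.jms.JMSFactoryType;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;
import org.hornetq.jms.client.HornetQConnectionFactory;

// consumerWindowSize=0 disables client-side buffering, so unreceived messages
// stay on the broker instead of sitting in a consumer's local buffer.
HornetQConnectionFactory cf = HornetQJMSClient.createConnectionFactoryWithoutHA(
        JMSFactoryType.CF,
        new TransportConfiguration(NettyConnectorFactory.class.getName()));
cf.setConsumerWindowSize(0);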
I"m covering all the possibilities I could think of since you're not being specific on how you are consuming. If you provide me more detail then I will be able to tell you more:
You can indeed read your messages in the queue using jmx (with for example jconsole)
In Jboss As7 you can do it the following way :
MBeans>jboss.as>messaging>default>myJmsQueue>Operations
listMessagesAsJson
[edit]
Since 2.3.0 there is a dedicated method for this specific case:
listDeliveringMessages
See https://issues.jboss.org/browse/HORNETQ-763
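If you want to script this instead of clicking through jconsole, a rough sketch over standard JMX follows; the service URL, the ObjectName (guessed from the MBeans path above), and the single-String-filter signature of listMessagesAsJson are all assumptions to verify in jconsole first:
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReadQueueMessages {
    public static void main(String[] args) throws Exception {
        // Assumption: AS7 remoting JMX endpoint; needs jboss-client.jar on the classpath.
        JMXServiceURL url = new JMXServiceURL("service:jmx:remoting-jmx://localhost:9999");
        try (JMXConnector jmx = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = jmx.getMBeanServerConnection();
            ObjectName queue = new ObjectName(
                    "jboss.as:subsystem=messaging,hornetq-server=default,jms-queue=myJmsQueue");
            // null filter: list all messages currently on the queue, as JSON
            Object json = mbs.invoke(queue, "listMessagesAsJson",
                    new Object[]{null}, new String[]{String.class.getName()});
            System.out.println(json);
        }
    }
}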
