Should I set Kafka auto.offset.reset to earliest or latest? - spring-boot

I have a Spring Kafka consumer application that lives in Kubernetes. Sometimes the application is recycled/restarted. When the consumer comes back up, I want it to consume all the messages that were produced while it was restarting. I experimented with auto.offset.reset=earliest and it works as expected, but I noticed that the default value for Kafka is latest.
What are the risks of using earliest? In what scenarios would I go with latest vs. earliest? I tried to find a post on here that explains it via a scenario, but most of them were copy-pasted from some documentation rather than a real-life example.

That property only applies if the broker has no committed offset for the group/topic/partition.
i.e. the first time the app runs, or if the offsets expire (with modern brokers, the default expiration is when the consumer has not run for 7 days, that is, 7 days after the last consumer left the group). With older brokers, offsets were expired much earlier, even if the consumer was still running but hadn't received anything. The current behavior started with broker 2.1, IIRC.
When there is already a committed offset, that is used for the start position, and this property is ignored.
For most use cases, earliest is used but, as you say, latest is the default, which means "new" consumers will start at the end and won't get any records already in the topic.
So the "risk" is, if you don't run your app for a week, you'll get any unexpired records again. You can increase offset.retention.minutes to avoid that.

Related

what are the retry settings for subscriber in pubsub and how to set them correctly in a spring application?

I have a Spring service subscribing to messages from a topic in Google Cloud Pub/Sub (pulling). It is working correctly in general, but I want more control over resent messages. My service sometimes needs to nack a message, or just let the ackDeadline pass so that I get the message again later on. While testing with single messages, a nacked message comes back to me almost immediately, and the ones I don't ack or nack at all come back after the default 10-second ackDeadline. I would like to postpone the repeated consumption of these messages, and I thought the retry settings were designed for such cases.
I should mention as well that I am currently testing locally with an emulator and create the subscription from code, using PubSubAdmin to manage it.
According to this documentation I have tried to set these configuration values in my profile config, like this:
spring.cloud.gcp.pubsub.subscriber.retry.initial-retry-delay-seconds: 4
spring.cloud.gcp.pubsub.subscriber.retry.max-attempts: 5
spring.cloud.gcp.pubsub.subscriber.retry.initial-rpc-timeout-seconds: 4
spring.cloud.gcp.pubsub.subscriber.retry.max-rpc-timeout-seconds: 8
spring.cloud.gcp.pubsub.subscriber.retry.max-retry-delay-seconds: 7
spring.cloud.gcp.pubsub.subscriber.retry.total-timeout-seconds: 3000
but it had no effect on when the messages reappeared.
Do I misunderstand the meaning of the retry settings? Maybe they only take effect if there are connection problems, but not in nack or missing-acknowledgment cases? Or do I have to set them while creating the subscriptions with Deployment Manager and am not allowed to set them from code? Or maybe setting them in (development) profile configs won't work with the PubSubAdmin?
Thanks for any suggestions!
edit: I want the first retry to happen after 5 seconds, the next retry 10 seconds later, and so on. Plus I want to set the maximum number of retries. So what I am not interested in is simply setting the ackDeadline to a bigger number.
edit2: why nacking: one of the services (let's call it a bridge) subscribes to the messages, has to validate each message and, if it is OK, pass it to another external system. This service acts as a bridge for that system, as we can't work on the second system directly. In some cases the message needs some extra information, so the bridge will try to fetch it somewhere else (there are a lot of microservices involved), and it sometimes happens that at that moment the extra information is not there (yet). So the first idea was to not ack the message and let it come back later. But I don't want to ask every 10 seconds for the next 7 days (with ackDeadline); I want to try just a few times, and if it is not there after 2 hours, it will never come. So we tried to nack and hoped the retry settings could help manage the resending. As they don't, I suppose the only way to go will be to build something in the bridge myself to manage these messages. Maybe store message ids and the number of retries so that I can ack after, for example, 5 attempts and push the message to another topic to deal with it differently. Or are there any better solutions known?
Cloud Pub/Sub does not provide exponential backoff for specific messages. A nack has no effect other than to tell Cloud Pub/Sub that you were not able to handle the message.
I could provide a more useful answer if you were to document why you needed to nack the messages. If you are unable to handle the current load, you can use the flow control options described here to reduce the number of outstanding messages or bytes to your client. If you have messages that are known to be bad, you should instead ack them after pushing to another dead letter topic to be handled separately.
Response to edit 2:
If you have this scenario, where the action to supplement the messages can fail, implement whatever backoff mechanism you want on that action yourself in your service. Set the max ack extension period when constructing your subscriber (setMaxAckExtensionPeriod in Java) to ensure that your client will extend the ack deadline for each message long enough for your chain of retries.
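As a rough sketch with the Java Pub/Sub client (the project, subscription and handler logic are placeholders, and older client versions take an org.threeten.bp.Duration here):

import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import org.threeten.bp.Duration;

public class SubscriberWithLongAckExtension {
    public static void main(String[] args) {
        ProjectSubscriptionName subscription =
                ProjectSubscriptionName.of("my-project", "my-subscription"); // placeholders

        MessageReceiver receiver = (message, consumer) -> {
            try {
                // fetch the supplementary data here, with your own backoff/retry logic
                consumer.ack();
            } catch (Exception e) {
                consumer.nack(); // redelivered; Pub/Sub itself applies no exponential backoff
            }
        };

        Subscriber subscriber = Subscriber.newBuilder(subscription, receiver)
                // keep extending the ack deadline for up to 2 hours so the client
                // has time to run its own retries before the message is redelivered
                .setMaxAckExtensionPeriod(Duration.ofHours(2))
                .build();
        subscriber.startAsync().awaitRunning();
    }
}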
Edit 2
Note that Pub/Sub now has built-in support for dead lettering.
You can use PubSubSubscriberTemplate.modifyAckDeadline() to programmatically extend the deadlines of a batch of messages retrieved through pull. Each individual AcknowledgeablePubsubMessage also has a modifyAckDeadline() method if you only need to extend the deadline for a select few stragglers.
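For example, something along these lines with the Spring Cloud GCP template should work; the subscription name is a placeholder and you should double-check the exact method signatures against your Spring Cloud GCP version:

import java.util.List;
import org.springframework.cloud.gcp.pubsub.core.PubSubTemplate;
import org.springframework.cloud.gcp.pubsub.support.AcknowledgeablePubsubMessage;

public class DeadlineExtender {

    private final PubSubTemplate pubSubTemplate;

    public DeadlineExtender(PubSubTemplate pubSubTemplate) {
        this.pubSubTemplate = pubSubTemplate;
    }

    public void pullAndExtend() {
        // pull up to 10 messages without blocking if the subscription is empty
        List<AcknowledgeablePubsubMessage> messages =
                pubSubTemplate.pull("my-subscription", 10, true); // placeholder subscription

        for (AcknowledgeablePubsubMessage message : messages) {
            // give ourselves 10 more minutes before Pub/Sub redelivers the message
            message.modifyAckDeadline(600);
        }
    }
}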
If all messages on that particular subscription need to have a longer acknowledgement period, a default can be set in GCP Console by editing the subscription and updating the "Acknowledgement Deadline" field.

Azure Queue delayed message

I have some strange behaviour in a production deployment with Azure queue messages:
Some of the messages in the queues appear with a big delay: minutes, sometimes 10 minutes.
Before you ask about setting a delayTimeout when we put a message on the queue: we do not set a delayTimeout for that message, so the message should appear almost immediately after it is placed in the queue.
At those moments we do not have a big load, so my instances have no workload and are able to process messages fast, but the messages just don't appear.
Our service processes millions of messages per month; we were able to identify that 10-50 messages were processed with a very big delay, and because of that we fail our SLA in front of our customers.
Does anyone have any idea what the reason could be?
How can we overcome it?
Has anyone faced similar issues?
Some general ideas for troubleshooting:
Are you certain that the message was queued up for processing, i.e. the queue.addmessage operation returned successfully and then you waited 10 minutes? That means you can rule out any client-side retry policies etc. as the cause of the problem.
Is there any chance that the time calculation could be subject to some kind of clock-skew problem? E.g. if one of the worker roles pulling messages has its clock out of sync with the other worker roles, you could see this.
Is it possible that in the situations where the message is appearing to be delayed that a worker role responsible for pulling the messages is actually failing or crashing. If the client calls GetMessage but does not respond with an appropriate acknowledgement within the time specified by the invisibilityTimeout setting then the message will become visible again as the Queue Service assumes the client did not process the message. You could tell if this was a contributing factor by looking at the dequeue count on these messages that are taking longer. More information can be found here: http://msdn.microsoft.com/en-us/library/dd179474.aspx.
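If you want to inspect the dequeue count programmatically rather than in the portal, a quick sketch with the (legacy) Azure Storage Java SDK could look like this; the connection string environment variable and queue name are placeholders:

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.queue.CloudQueue;
import com.microsoft.azure.storage.queue.CloudQueueClient;
import com.microsoft.azure.storage.queue.CloudQueueMessage;

public class DequeueCountCheck {
    public static void main(String[] args) throws Exception {
        CloudStorageAccount account =
                CloudStorageAccount.parse(System.getenv("AZURE_STORAGE_CONNECTION_STRING"));
        CloudQueueClient client = account.createCloudQueueClient();
        CloudQueue queue = client.getQueueReference("my-queue"); // placeholder queue name

        CloudQueueMessage message = queue.retrieveMessage();
        if (message != null) {
            // a dequeue count > 1 means the message became visible again because an
            // earlier consumer never deleted it within the invisibility timeout
            System.out.println("dequeue count: " + message.getDequeueCount());
            queue.deleteMessage(message);
        }
    }
}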
Is it possible that the number of workers you have pulling items from the queue is insufficient at certain times of the day and the delays are simply caused by the queue being populated faster than you can pull messages from the queue.
Have you enabled logging for queues and then looked to see if you can find the specific operations (look at e2elatency and serverlatency).
http://blogs.msdn.com/b/windowsazurestorage/archive/tags/analytics+2d00+logging+_2600_amp_3b00_+metrics/. You should also enable client logging and try to determine if the client is having connectivity problems and the retry logic is possibly kicking in.
And finally if none of these appear to help can you please send me the server logs (and ideally the client side logs as well) along with your account information (no passwords) to JAHOGG at Microsoft dot com.
Jason
Azure Service Bus has a property in the BrokeredMessage class called ScheduledEnqueueTimeUtc; it allows you to set a time at which the message becomes available on the queue (effectively creating a delay).
Are you sure that in your code you're not setting this property? That might be the cause of the delay.
You can find more info on this at this url: https://www.amido.com/azure-service-bus-how-to-delay-a-message-being-sent-to-the-queue/
If you are using WebJobs to process messages from the queue, it can be due to WebJobs configuration.
From an MSDN forum post by pranav rastogi:
Starting with 0.4.0-beta, the (WebJobs) SDK implements a random exponential back-off algorithm. As a result of this if there are no messages on the queue, the SDK will back off and start polling less frequently.
The following setting allows you to configure this behavior.
MaxPollingInterval: when a queue remains empty, this is the longest period of time to wait before checking for a message. The default is 10 minutes.
static void Main()
{
    JobHostConfiguration config = new JobHostConfiguration();
    config.Queues.MaxPollingInterval = TimeSpan.FromMinutes(1);
    JobHost host = new JobHost(config);
    host.RunAndBlock();
}

Scheduling an MDB

I'm looking for a way to schedule an MDB. My requirement is that the MDB feeds a system in the company. This system goes out for maintenance every night, but the other systems don't know about it and may keep trying to feed it. A persistent queue is great in that my messages can pile up until the system comes back online.
How could I manage that? I've already run into this: schedule a message driven bean to access a queue during certain times? But it uses Java 7, and worse, the message is lost if the server restarts (the message is taken out of the JMS queue and kept in memory until the timer processes it).
Another use of this would be to implement a "retry" queue. In case of an error I want to retry processing my message, but not immediately, only after a certain amount of time.
Any ideas to keep my MDB offline for a certain amount of time?
Most versions of JBoss publish a management MBean that allows you to stop delivery on an MDB.
If you're using EJB3, however, MDBs auto-start, so you will need to register a startup class that stops them at boot time if the boot occurs in your MDB's blackout period. Once past that snafu, you can schedule a simple Quartz job to start and stop the MDBs according to your delivery windows.
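As a rough illustration of the idea only: the ObjectName and operation names differ between JBoss versions, so treat both as assumptions to verify in your server's JMX console before relying on this.

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MdbDeliveryToggle {

    // Hypothetical ObjectName; look up the real one for your MDB in the JMX console.
    private static final String MDB_MBEAN =
            "jboss.j2ee:service=EJB3,name=FeedSystemMDB,type=ManagementInterface";

    private final MBeanServer server = ManagementFactory.getPlatformMBeanServer();

    public void stopDelivery() throws Exception {
        // Many JBoss versions expose stopDelivery()/startDelivery() on the MDB MBean.
        server.invoke(new ObjectName(MDB_MBEAN), "stopDelivery", new Object[0], new String[0]);
    }

    public void startDelivery() throws Exception {
        server.invoke(new ObjectName(MDB_MBEAN), "startDelivery", new Object[0], new String[0]);
    }
}

A Quartz job (or an EJB timer) can then call stopDelivery() at the start of the blackout window and startDelivery() once the downstream system is back.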
Well, it looks like there is no way to pause an MDB in a generic way. The best solution is, as most people will answer, to use the DLQ (or DMQ).
Now, if I want to introduce a delay on a message, I set the time-to-live on the producer to the amount of time I want the message to wait. Then I send it to a normal queue, let's say waitingQueue, which has no consumer. After expiration, the message is sent to the default destination (mq.sys.dmq for Glassfish MQ; make sure to create a JMS resource with mq.sys.dmq as imqDestinationName). I have an MDB listening on that error queue, responsible for sending the message again. Now, if I want to "close" a queue for some time, when a message arrives in the queue I check whether the current time is within the allowed window. If not, I just set the time-to-live to the amount of time before the next opening hours and send it to waitingQueue.
The reason I didn't use this from the beginning is that I fell into a few pitfalls. Here are a few useful properties to set when using the DMQ with Glassfish 3.1.1 and its embedded MQ.
imq.message.expiration.interval=1: that's the poll interval on each queue before sending timed-out messages to the DMQ. The default is 60 seconds. If, like me, you want to test your application with little latency, this is what you need.
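To make the time-to-live trick concrete, here is a minimal JMS sketch of what I mean (the connection factory and queue are passed in, so the names are up to you; with Glassfish MQ the expired message then lands on mq.sys.dmq as described above):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class DelayedRedelivery {

    // Resend a message via the no-consumer waitingQueue; after delayMillis it expires
    // and the broker moves it to the dead message queue, where our redelivery MDB picks it up.
    public void sendToWaitingQueue(ConnectionFactory factory, Queue waitingQueue,
                                   String payload, long delayMillis) throws Exception {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(waitingQueue);
            producer.setTimeToLive(delayMillis);   // the "delay" is the message expiration
            producer.send(session.createTextMessage(payload));
        } finally {
            connection.close();
        }
    }
}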

WebSphere MQ v7.1 Channels going down

The sender and receiver channels between two queue managers (WebSphere MQ v7.1 running on Red Hat Linux) that I have configured are going down pretty frequently. Any idea why? How can I debug this? Thanks.
Channels are expected to go down. The idea is that they stay active as long as there is traffic and then time out. Assuming they've been configured to trigger, the presence of a message on the XMitQ causes the channel to start up again.
The reason for this is that a triggered channel will generally restart if interrupted by a network failure or other adverse event. However if a channel is configured to stay running 24x7 then the only way it stops is due to one of these adverse events and that increases the likelihood that human intervention will be required to restart the channel. On the other hand, a channel that times out can survive all sorts of nasty network events that occur while it is inactive. Allowing it to time out when not in use thus improves overall reliability of the channel.
So how do you cause a channel to trigger? Make sure the transmission queue contains the TRIGGER, TRIGTYPE, TRIGDATA and INITQ attributes. For example, to define a transmission queue to the JUPITER QMgr:
DEF QL(JUPITER) +
USAGE(XMITQ) +
TRIGGER +
TRIGTYPE(FIRST) +
TRIGDATA('MYQMGR.JUPITER') +
INITQ(SYSTEM.CHANNEL.INITQ) +
REPLACE
The only variable of the bunch is TRIGDATA which contains the name of the channel serving this XMitQ.
Of course, the channel initiator must be running but in modern versions of WMQ it starts by default (based on the value of the queue manager's SCHINIT attribute) so generally will in fact be running.
The channel that is in STOPPED state cannot be triggered. By default the STOP CHL command uses STATUS(STOPPED) so most of the time manually stopping a channel prevents triggering. If you want to stop a channel in such a way that it will restart (for example to test triggering) use the STOP CHL(CHLNAME) STATUS(INACTIVE) command. If the channel is already in STOPPED state, either issue the START CHL command to make it start immediately or use the STOP CHL(CHLNAME) STATUS(INACTIVE) to change the status from STOPPED to INACTIVE without starting it.
Once the channels are up, the DISCINT attribute of the channel determines how long it will run before timing out. The value is in seconds and defaults to 600 which is 10 minutes. The DISCINT, KAINT and HBINT combine to determine when the channel comes down. Note that the TCP spec calls for things using keepalive to disable them by default so if you want to use keepalive on your channels, you must enable it in the QMgr tuning as described here.
Please see Triggering Channels in the Infocenter for more on the configuration details. Take a look at SupportPac MD0C WebSphere MQ - Keeping Channels Up and Running if you want to know more about the internals and tuning. (The SupportPac is a bit dated but the principles of tuning mostly still apply. Where there are discrepancies, the Infocenter is the authoritative version.)
If you want to keep channels up continuously, set DISCINT(0), but remember that triggering remains the preferred option. Some shops need to minimize response times during the business day and so set DISCINT to a value that allows the channels to time out at night but generally keeps them running all day. If for some reason you have triggering set up right and the channels go down prior to DISCINT, you should be able to check the error logs for the reason why. These reside in the QMgr's directory under errors. For example, on UNIX/Linux they are in /var/mqm/qmgrs/qmgrname/errors and on Windows the default location is C:\Program Files(x86)\WebSphere MQ\QMgrs\qmgrname\errors. Look for the files named AMQERR??.LOG where ?? = 01, 02, or 03. The logs rotate, where 01 is current, 02 is next and so on. If you have a very busy QMgr you need to capture these as soon as the channel goes down or they could roll off.

ActiveMQ with slow consumer skips 200 messages

I'm using ActiveMQ along with Mule (a kind of ESB based on Spring).
We have a fast producer and a slow consumer.
It's a synchronous configuration with only one consumer.
Here the configuration of the consumer in spring style: http://pastebin.com/vweVd1pi
The biggest requirement is to keep the order of the messages.
However, after hours of running this code, ActiveMQ suddenly skips 200 messages and sends the next ones. The 200 messages are still there in ActiveMQ; they are not lost.
But our client (Mule) does have some custom code to check the order of the messages, using a unique identifier.
I already had this issue a few months ago. We changed the consumer to use the parameter "jms.prefetchPolicy.queuePrefetch=1". It seemed to work well and to be the fix we needed, until now, when the issue reappeared on another consumer.
Is it a bug, or a configuration issue?
I can't talk about the requirement from a Mule perspective, but there are a couple of broker features that you should take a look at. There are two ways to guarantee message ordering in ActiveMQ:
Message groups are a way of ensuring that a set of related messages will be consumed by the same consumer in the order that they are placed on a queue. To use it you need to specify a JMSXGroupID header on related messages, and assign them an incrementing JMSXGroupSeq number. If a consumer dies, remaining messages from that group will be sent to another single consumer, while still preserving order.
Total message ordering applies to all messages on a topic. It is configured on the broker on a per-destination basis and requires no particular changes to client code. It comes with a synchronisation overhead.
Both features allow you to scale out to more than one consumer.
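If message groups fit your case, the producer side is just a couple of standard headers on each JMS message; the group key and sequence shown here are examples:

import javax.jms.JMSException;
import javax.jms.Message;

public class OrderedSender {

    // Stamp the standard ActiveMQ grouping headers so all messages for one business key
    // go to the same consumer, in order; the sequence restarts at 1 for each new group.
    public static void markGroup(Message message, String businessKey, int seq) throws JMSException {
        message.setStringProperty("JMSXGroupID", businessKey);
        message.setIntProperty("JMSXGroupSeq", seq);
    }
}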
