OpenHFT Chronicle Queue Version 4 - nextSynchronous?

In version 3 of OpenHFT's Chronicle Queue there is an API call on ExcerptAppender (nextSynchronous(boolean)) to request that the contents of the queue be forced to disk (fsync'd) when the next excerpt is finished. I don't see a similar call in version 4. Is it possible to achieve the same effect with version 4?

Not at the moment. If you need guaranteed writes, we suggest acknowledged replication.
I suggest you add it as a GitHub issue; however, I can't say when this will be added.
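For reference, this is roughly how the call is used against the version 3 API. The builder, path, and write calls below are assumptions based on the Chronicle Queue 3.x API the question refers to, so treat it as a sketch rather than a verified snippet.

```java
import net.openhft.chronicle.Chronicle;
import net.openhft.chronicle.ChronicleQueueBuilder;
import net.openhft.chronicle.ExcerptAppender;

public class SyncedWriteSketch {
    public static void main(String[] args) throws Exception {
        // Assumed Chronicle Queue 3.x setup; the path and builder call are illustrative.
        Chronicle chronicle = ChronicleQueueBuilder.indexed("/tmp/sync-demo").build();
        ExcerptAppender appender = chronicle.createAppender();

        appender.nextSynchronous(true); // request an fsync when this excerpt finishes
        appender.startExcerpt(64);      // assumed capacity for the excerpt
        appender.writeUTF("important message");
        appender.finish();              // contents are forced to disk here

        chronicle.close();
    }
}
```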

Related

Should I set Kafka auto.offset.reset to earliest or latest?

I have a Spring Kafka consumer application that lives in K8s. Sometimes the application is recycled/restarted. When the consumer comes back up, I want it to consume all the messages that were produced while it was recycling. I experimented with auto.offset.reset=earliest and it works as expected, but I noticed that the default value for Kafka is latest.
What are the risks if I use earliest? In what scenarios should I go with latest vs. earliest? I tried to find a post on here that explains it via a scenario, but most of them were copy-pasted from some documentation rather than a real-life example.
That property only applies if the broker has no committed offset for the group/topic/partition.
i.e. the first time the app runs, or if the offsets expire (with modern brokers, the default expiration is when the consumer has not run for 7 days - 7 days after the last consumer left the group). With older brokers, offsets were expired much earlier - even if the consumer was still running but hadn't received anything. The current behavior started with 2.1, IIRC.
When there is already a committed offset, that is used for the start position, and this property is ignored.
For most use cases, earliest is used but, as you say, latest is the default, which means "new" consumers will start at the end and won't get any records already in the topic.
So the "risk" is, if you don't run your app for a week, you'll get any unexpired records again. You can increase offsets.retention.minutes on the broker to avoid that.
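As an illustration of where the property sits, here is a minimal plain-Java consumer configuration with auto.offset.reset=earliest; the bootstrap address, group id, and topic name are placeholders. In the question's Spring Kafka setup, the equivalent property is spring.kafka.consumer.auto-offset-reset=earliest.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class EarliestResetExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // Only consulted when the group has no committed offset for a partition
        // (first run, or the committed offset has expired / been deleted).
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic")); // placeholder topic
            consumer.poll(Duration.ofSeconds(1))
                    .forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
        }
    }
}
```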

apache kafka consumer resume behaviour

I am currently working with Apache Kafka using the Go/Golang Confluent library. I have some doubts regarding the consumer and its APIs.
I am using the pause and resume APIs of the library and doing manual commits. Let's say I send 100 messages and, without committing, I pause the consumer and resume it afterward. I noticed it didn't consume those 100 messages again but instead started consuming the latest messages. Is this expected behavior? If yes, is there a way to consume those 100 messages again?
When I resume the consumer, after some processing, I do a manual commit. I noticed the committed offset returned is -1001 for a partition. I am not able to understand why this is happening and what it means. Did I lose all the data, or did the commit fail?
Can someone please explain auto.offset.reset - latest and earliest?
Thanks
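The question is about the Go client, but the consumer semantics are the same across clients: resume() continues from the current position, not from the last committed offset, and -1001 is the client's OFFSET_INVALID sentinel ("no stored offset") rather than a sign of lost data. The sketch below, using the Java consumer with placeholder partitions, shows the explicit seek needed to re-read messages that were delivered but never committed; it is an illustration of the mechanism, not the original poster's code.

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Set;

public class PauseResumeSketch {
    static void replayUncommitted(KafkaConsumer<String, String> consumer) {
        Set<TopicPartition> assigned = consumer.assignment();

        consumer.pause(assigned);   // stops poll() from returning records
        // ... do other work ...
        consumer.resume(assigned);  // continues from the current position,
                                    // NOT from the last committed offset

        // To re-read messages that were delivered but never committed,
        // seek each partition back to its committed offset explicitly.
        Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(assigned);
        for (TopicPartition tp : assigned) {
            OffsetAndMetadata om = committed.get(tp);
            if (om != null) {
                consumer.seek(tp, om.offset());                      // rewind to last commit
            } else {
                consumer.seekToBeginning(Collections.singleton(tp)); // no commit yet
            }
        }
        consumer.poll(Duration.ofMillis(500)); // now redelivers the uncommitted batch
    }
}
```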

MassTransit's PublisherConfirmation option no longer exists

Upgrading to MassTransit 4.x, on top of RabbitMQ. My application configuration was using PublisherConfirmation set to true, to ensure message delivery without the overhead of transactions. (At least, that is what the docs used to say.)
In MT 4.x, it appears that PublisherConfirmation no longer exists.
I haven't found any info (yet) on why this went away, or what replaces it moving forward. Essentially, I don't want fire-and-forget; if the message doesn't reach the queue I want an exception.
Any guidance would be appreciated.
To configure PublisherConfirmation in MT 4.x or later: that option is now configured on the host, instead of the bus.
https://github.com/MassTransit/MassTransit/blob/develop/src/MassTransit.RabbitMqTransport/Configuration/IRabbitMqHostConfigurator.cs#L24

ActiveMQ - Automatically delete all messages from inactive queue

I want to automatically delete all the messages from queues that were inactive for a specified amount of time (no new message arrived during that time).
I don't want to explicitly empty the queue from code nor call purge explicitly as described here.
The configuration described here is also not appropriate for my case, since it automatically deletes only empty queues, and my queues are not empty.
Is there any known ActiveMQ configuration that can do that task automatically?
I never had such a requirement and I don't know if such functionality exists in ActiveMQ; however, there are two options you might be interested in:
1) If you want to purge messages on inactive queues because they are no longer relevant, you could set the time to live on each message (the setTimeToLive() method on the producer side) - see the sketch after this answer.
2) If you need that exact behavior, then you could develop your own plugin. Indeed, ActiveMQ brokers are fairly extensible (see: http://activemq.apache.org/developing-plugins.html).
Hope it helps.
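A minimal JMS sketch of option 1, assuming the standard ActiveMQ 5.x client; the broker URL, queue name, and 1-hour TTL are placeholders. Messages older than the TTL are expired by the broker (and, depending on policy, moved to an expiry/DLQ destination), which approximates cleaning up stale messages on idle queues.

```java
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TtlProducerSketch {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("MY.QUEUE");                   // placeholder name
        MessageProducer producer = session.createProducer(queue);

        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.setTimeToLive(60 * 60 * 1000L); // broker expires messages after 1 hour

        producer.send(session.createTextMessage("payload"));
        connection.close();
    }
}
```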

Azure Cloud Service - Auto Scale by queue counts Invisible messages (not ready for processing)

We just developed a system that integrates an Azure queue with an Azure Cloud Service to process batch items. One requirement we had was to be able to schedule items to be processed in the future. So for example, we batch it now, but tell it not to start for 5 hours.
This is built right into Azure queues' AddMessage using initialVisibilityDelay, so we did not see this as being an issue. However, we just noticed that when we add auto-scale on our Cloud Service, it goes off the total number of items in the queue. In our situation we added 100,000 queue items to be sent 5 days from now; however, it is scaling as if these 100,000 are ready to go right now.
So in our situation, we would basically have dozens of instances of our app running until these messages can even be sent, 5 days from now.
I feel like there is something simple we are missing here.
Any feedback would be very helpful.
Anthony
Have you considered using one queue for the waiting messages and another queue for the actual messages to be processed and scaling on that latter queue?
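A sketch of that two-queue idea, assuming the classic azure-storage Java SDK (com.microsoft.azure.storage); the queue names, connection string, and mover loop are illustrative. Delayed items go to a "scheduled" queue with initialVisibilityDelay, a small mover forwards whatever has become visible into the "work" queue, and the auto-scale rule watches only the work queue, so invisible future items never inflate the count it sees.

```java
import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.queue.CloudQueue;
import com.microsoft.azure.storage.queue.CloudQueueClient;
import com.microsoft.azure.storage.queue.CloudQueueMessage;

public class ScheduledQueueSketch {
    public static void main(String[] args) throws Exception {
        CloudStorageAccount account =
                CloudStorageAccount.parse(System.getenv("AZURE_STORAGE_CONNECTION_STRING"));
        CloudQueueClient client = account.createCloudQueueClient();

        CloudQueue scheduled = client.getQueueReference("batch-scheduled"); // placeholder name
        CloudQueue work = client.getQueueReference("batch-work");           // placeholder name
        scheduled.createIfNotExists();
        work.createIfNotExists();

        // Enqueue a message that stays invisible for 5 hours (TTL of 7 days).
        scheduled.addMessage(new CloudQueueMessage("batch item"),
                7 * 24 * 3600,   // timeToLiveInSeconds
                5 * 3600,        // initialVisibilityDelayInSeconds
                null, null);

        // The "mover": anything that has become visible is forwarded to the
        // work queue, which is the only queue the auto-scale rule looks at.
        CloudQueueMessage due;
        while ((due = scheduled.retrieveMessage()) != null) {
            work.addMessage(new CloudQueueMessage(due.getMessageContentAsString()));
            scheduled.deleteMessage(due);
        }
    }
}
```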
