Chronicle Roll Files Daily - chronicle

I am trying to implement Chronicle Queue in our system and have a question about rolling files daily, but at a specific time in the local time zone of the process. I have read a few write-ups on how to specify the roll cycle, but according to the documentation the epoch time is relative to midnight UTC. What would I need to do to configure a roll cycle that rolls every day at, say, 5PM in the local time zone of the running process? Any suggestions?
public class TestRollCycle {

    public class TestClass implements TestEvent {
        private int counter = 1;

        @Override
        public void setOrGetEvent(String event) {
            System.out.println("Counter Read Value: " + counter);
            counter++;
        }
    }

    public interface TestEvent {
        void setOrGetEvent(String event);
    }

    @Test
    public void testRollProducer() {
        int insertCount = 1;
        String pathOfFile = "rollPath";
        // Epoch is 5:15PM EDT
        SingleChronicleQueue producerQueue = SingleChronicleQueueBuilder.binary(pathOfFile).epoch(32940000).build();
        ExcerptAppender myAppender = producerQueue.acquireAppender();
        TestEvent eventWriter = myAppender.methodWriter(TestEvent.class);
        while (true) {
            String testString = "Insert String";
            eventWriter.setOrGetEvent(testString);
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("Counter Write Value: " + insertCount);
            insertCount++;
        }
    }

    @Test
    public void testRollConsumer() throws InterruptedException {
        String pathOfFile = "rollPath";
        // Epoch is 5:15PM EDT
        SingleChronicleQueue producerQueue = SingleChronicleQueueBuilder.binary(pathOfFile).epoch(32940000).build();
        TestClass myClass = new TestClass();
        ExcerptTailer tailer = producerQueue.createTailer();
        MethodReader methodReader = tailer.methodReader(myClass);
        while (true) {
            if (!methodReader.readOne()) {
                Thread.sleep(1000);
            } else {
                //System.out.println(tailer.index());
            }
        }
    }
}

This is a feature we added to Chronicle Queue Enterprise. I suggest you contact sales@chronicle.software if you are willing to pay for it.

I think there's a problem in your test - the epoch of 32940000 supplied to the queue builder is 9.15 hours (9 hours 9 minutes) from midnight, so 09:09 UTC or 5:09AM EDT - morning, not 5:15PM. For the roll-time to be 5:15PM EDT (21:15 UTC) it needs to be roughly another 12 hours later, i.e. an epoch of 76500000.
I've added a test that documents the current behaviour for your use-case, and it passes as expected. Can you double-check that you're supplying the correct epoch offset, and perhaps implement a StoreFileListener in order to capture/log any roll events?
The roll will not actually occur until an event is written to the queue after the roll-time boundary, so an idle queue that is not being written to will not roll without input events.
The test is on github:
https://github.com/OpenHFT/Chronicle-Queue/blob/master/src/test/java/net/openhft/chronicle/queue/impl/single/QueueEpochTest.java
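For reference, below is a minimal sketch of how the epoch offset for 5PM in the process's local time zone might be computed, with a StoreFileListener hooked in to log roll events. It assumes the open-source builder methods rollCycle(), epoch() and storeFileListener() are available in your Chronicle Queue version and that StoreFileListener can be written as a lambda for onReleased; note that epoch() is a fixed millisecond offset from midnight UTC, so it will not track daylight-saving changes (the DST-aware behaviour is the Enterprise feature mentioned above).

import java.time.Duration;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

import net.openhft.chronicle.queue.RollCycles;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueue;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

public class FivePmRollExample {
    public static void main(String[] args) {
        // Milliseconds between midnight UTC and 5PM in this process's zone (using today's offset)
        ZoneId zone = ZoneId.systemDefault();
        LocalDate today = LocalDate.now(zone);
        ZonedDateTime fivePmLocal = today.atTime(17, 0).atZone(zone);
        long epochOffsetMs = Duration.between(today.atStartOfDay(ZoneOffset.UTC), fivePmLocal).toMillis();
        // keep the offset inside a single day
        epochOffsetMs = ((epochOffsetMs % 86_400_000L) + 86_400_000L) % 86_400_000L;

        SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary("rollPath")
                .rollCycle(RollCycles.DAILY)
                .epoch(epochOffsetMs)
                // log cycle roll-overs; onReleased fires when a cycle's file is released
                .storeFileListener((cycle, file) ->
                        System.out.println("Released cycle " + cycle + " file " + file))
                .build();

        System.out.println("Epoch offset ms: " + epochOffsetMs);
        queue.close();
    }
}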

Related

Springboot Kafka @Listener consumer pause/resume not working

I have a Spring Boot Kafka consumer & producer. The consumer is expected to read data from the topic one record at a time, process it (which is time consuming), write the result to another topic, and then manually commit the offset.
In order to avoid rebalancing, I have tried to call pause() and resume() on the KafkaContainer, but the consumer is always running and never responds to the pause() call; I tried it even with a while loop and had no success (unable to pause the consumer). KafkaListenerEndpointRegistry is autowired.
Spring Boot version = 2.6.9, spring-kafka version = 2.8.7
@KafkaListener(id = "c1", topics = "${app.topics.topic1}", containerFactory = "listenerContainerFactory1")
public void poll(ConsumerRecord<String, String> record, Acknowledgment ack) {
    log.info("Received Message by consumer of topic1: " + record.value());
    String result = process(record.value());
    producer.sendMessage(result + " topic2");
    log.info("Message sent from " + topicIn + " to " + topicOut);
    ack.acknowledge();
    log.info("Offset committed by consumer 1");
}

private String process(String value) {
    try {
        pauseConsumer();
        // Perform time intensive network IO operations
        resumeConsumer();
    } catch (InterruptedException e) {
        log.error(e.getMessage());
    }
    return value;
}

private void pauseConsumer() throws InterruptedException {
    if (registry.getListenerContainer("c1").isRunning()) {
        log.info("Attempting to pause consumer");
        Objects.requireNonNull(registry.getListenerContainer("c1")).pause();
        Thread.sleep(5000);
        log.info("kafkalistener container state - " + registry.getListenerContainer("c1").isRunning());
    }
}

private void resumeConsumer() throws InterruptedException {
    if (registry.getListenerContainer("c1").isContainerPaused() || registry.getListenerContainer("c1").isPauseRequested()) {
        log.info("Attempting to resume consumer");
        Objects.requireNonNull(registry.getListenerContainer("c1")).resume();
        Thread.sleep(5000);
        log.info("kafkalistener container state - " + registry.getListenerContainer("c1").isRunning());
    }
}
Am I missing something? Could someone please guide me with the right way of achieving the required behaviour?
You are running the process() method on the listener thread so pause/resume will not have any effect; the pause only takes place when the listener thread exits the listener method (and after it has processed all the records received by the previous poll).
The next version (2.9), due later this month, has a new property pauseImmediate, which causes the pause to take effect after the current record is processed.
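If you move to 2.9 once it is released, here is a hedged sketch of where that property would be enabled on the container factory (assuming the setter is exposed on ContainerProperties as setPauseImmediate):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;

@Configuration
public class KafkaListenerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> listenerContainerFactory1(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // manual ack, as in the question
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
        // Spring Kafka 2.9+: a requested pause takes effect after the current record is processed
        factory.getContainerProperties().setPauseImmediate(true);
        return factory;
    }
}

Even with this set, calling pause() and resume() within the same listener invocation (as in pauseConsumer()/resumeConsumer() above) cannot suspend the thread that is currently executing the listener; the property only changes when a requested pause is applied.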
You can try it like this; this works for me:
public class kafkaConsumer {

    public void run(String topicName) {
        try {
            // config: the usual consumer Properties (bootstrap servers, group id,
            // deserializers, enable.auto.commit=false, ...)
            Consumer<String, String> consumer = new KafkaConsumer<>(config);
            consumer.subscribe(Collections.singleton(topicName));
            while (true) {
                try {
                    ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(80000));
                    for (TopicPartition partition : consumerRecords.partitions()) {
                        List<ConsumerRecord<String, String>> partitionRecords = consumerRecords.records(partition);
                        for (ConsumerRecord<String, String> record : partitionRecords) {
                            String kafkaEvent = record.value();
                            consumer.pause(consumer.assignment());
                            // implement your business logic here;
                            // once your processing is done:
                            consumer.resume(consumer.assignment());
                            try {
                                consumer.commitSync();
                            } catch (CommitFailedException e) {
                                // log and decide whether to retry the commit
                            }
                        }
                    }
                } catch (Exception e) {
                    continue;
                }
            }
        } catch (Exception e) {
            // log construction/subscription failures
        }
    }
}

JMS Configuring backoff/retry without blocking onMessage()

javax.jms version 2.0.1
Provider: IBM MQ v9.0
Framework: Java Spring Boot
From what I know, onMessage() is asynchronous. I am successfully retrying the message delivery. However, the re-delivery happens instantly after a message failure. Ideally I want the retry to happen in a sliding-window style, e.g. first retry after 20 seconds, second retry after 40, etc.
How can I achieve this without a Thread.sleep(), which, I presume, would block the entire Java thread and is not something I want at all?
The code is something like this:
final int TIME_TO_WAIT = 20;

public void onMessage(Message message)
{
    try
    {
        // ...
        int t = message.getIntProperty("JMSXDeliveryCount");
        if (t > 1)
        {
            // Figure out a way to wait for (TIME_TO_WAIT * t)
        }
    }
    catch (Exception e)
    {
        // Do some logging/cleanup etc.
        throw new RuntimeException(e); // this causes a message retry
    }
}
I would suggest you use exponential backoff in the retry logic, but you would need to use the JMS 2.0 Delivery Delay feature for it.
Define a custom JmsTemplate that uses a delay property from the message; you should add a retry count to the message properties as well, so that you can delay as per your need, e.g. 20, 40, 80, 160, etc.
public class DelayedJmsTemplate extends JmsTemplate {
    public static String DELAY_PROPERTY_NAME = "deliveryDelay";

    @Override
    protected void doSend(MessageProducer producer, Message message) throws JMSException {
        long delay = -1;
        if (message.propertyExists(DELAY_PROPERTY_NAME)) {
            delay = message.getLongProperty(DELAY_PROPERTY_NAME);
        }
        if (delay >= 0) {
            producer.setDeliveryDelay(delay);
        }
        if (isExplicitQosEnabled()) {
            producer.send(message, getDeliveryMode(), getPriority(), getTimeToLive());
        } else {
            producer.send(message);
        }
    }
}
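A minimal wiring sketch for the custom template, assuming a ConnectionFactory bean for your IBM MQ provider is already configured in the context (the bean and class names here are illustrative):

import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.core.JmsTemplate;

@Configuration
public class DelayedJmsConfig {

    @Bean
    public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
        // Use the delay-aware template wherever the listener below re-enqueues messages
        DelayedJmsTemplate template = new DelayedJmsTemplate();
        template.setConnectionFactory(connectionFactory);
        return template;
    }
}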
Define a component that can re-enqueue the message; you can define this interface in the base message listener. The handleException method should do all the work of re-enqueueing, computing the delay, etc. You may not always be interested in re-enqueueing; in some cases you would discard the message instead.
You can see similar post-processing logic here:
https://github.com/sonus21/rqueue/blob/4c9c5c88f02e5cf0ac4b16129fe5b880411d7afc/rqueue-core/src/main/java/com/github/sonus21/rqueue/listener/PostProcessingHandler.java
@Component
@Slf4j
public class MessageListener {
    private final JmsTemplate jmsTemplate;

    @Autowired
    public MessageListener(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    @JmsListener(destination = "myDestination")
    public void onMessage(Message message) throws JMSException {
        try {
            // do something
        } catch (Exception e) {
            handleException("myDestination", message, e);
        }
    }

    // Decide whether the message should be ignored due to too many retries etc.
    private boolean shouldBeIgnored(String destination, Message message) {
        return false;
    }

    // Add logic to compute the delay
    private long getDelay(String destination, Message message, int deliveryCount) {
        return 100L;
    }

    private void handleException(String destination, Message message, Exception e) throws JMSException {
        if (shouldBeIgnored(destination, message)) {
            log.info("destination: {}, message: {} is ignored ", destination, message, e);
            return;
        }
        if (message.propertyExists("JMSXDeliveryCount")) {
            int t = message.getIntProperty("JMSXDeliveryCount");
            long delay = getDelay(destination, message, t + 1);
            message.setLongProperty(DelayedJmsTemplate.DELAY_PROPERTY_NAME, delay);
            message.setIntProperty("JMSXDeliveryCount", t + 1);
            jmsTemplate.send(destination, session -> message);
        } else {
            // no delivery count - is this the first message, or should it be ignored?
        }
    }
}
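The getDelay stub above always returns 100 ms. A hedged sketch of the exponential window mentioned earlier (20s, 40s, 80s, ...), with a cap so the delay does not grow without bound; adjust the off-by-one to however your broker reports JMSXDeliveryCount:

// Exponential backoff: 20s, 40s, 80s, ... capped at 10 minutes
private long getDelay(String destination, Message message, int deliveryCount) {
    long baseDelayMs = 20_000L;                        // first retry after 20 seconds
    int retries = Math.max(0, deliveryCount - 1);      // 0-based retry number
    long delay = baseDelayMs << Math.min(retries, 5);  // doubles each retry, shift capped
    return Math.min(delay, 10 * 60_000L);              // hard cap at 10 minutes
}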

How to seek a particular offset in kafka listener method?

I am trying to seek to an offset stored in a SQL database in my Kafka listener method.
I have used the registerSeekCallback method in my code, but this method gets invoked only when the consumer starts (i.e. when the container is started). Let's say my consumer is running and the last committed offset is 20 in the MySQL database. I manually change the last committed offset in the MySQL database to 11, but my consumer will keep reading from 21 unless I restart it (container restarted). I am looking for any option to override or seek the offset in my listener method itself. Any help would be appreciated.
public class Listen implements ConsumerSeekAware
{
    @Override
    public void registerSeekCallback(ConsumerSeekCallback callback)
    {
        // fetching offset from a database
        Integer offset = offsetService.getOffset();
        callback.seek("topic-name", 0, offset);
    }

    @KafkaListener(topics = "topic-name", groupId = "group")
    public void listen(ConsumerRecord record, Acknowledgment acknowledgment) throws Exception
    {
        // processing the record
        acknowledgment.acknowledge(); // manually committing the record
        // committing the offset to the MySQL database
    }
}
Edit - with the new listener method:
@KafkaListener(topics = "topic-name", groupId = "group")
public void listen(ConsumerRecord record, Acknowledgment acknowledgment,
                   @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) throws Exception {
    // seeking the old offset stored in the database (which is 11)
    consumer.seek(partition, offsetService.getOffset());
    log.info("record offset is {} and value is {}", record.offset(), record.value());
    acknowledgment.acknowledge();
}
In the database my last committed offset is 11 and the last committed offset on the Kafka side is 21. When I write a new record to the Kafka topic (i.e. at offset 22), my consumer triggers, processes offset 22 first, and only then goes back to seek offset 11 and starts processing from there.
Why is it consuming offset 22 first although I am seeking offset 11?
With my above code, every time I write a new message to my Kafka topic it processes that record first and then seeks the offset present in my database. Is there any way I can avoid that?
There are several techniques in this answer.
Bear in mind that performing a seek on the consumer will not take effect until the next poll (any records fetched on the last poll will be sent to the consumer first).
EDIT
Here's an example:
@SpringBootApplication
public class So63429201Application {

    public static void main(String[] args) {
        SpringApplication.run(So63429201Application.class, args).close();
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template, Listener listener) {
        return args -> {
            IntStream.range(0, 10).forEach(i -> template.send("so63429201", i % 3, null, "foo" + i));
            Thread.sleep(8000);
            listener.seekToTime(System.currentTimeMillis() - 11000);
            Thread.sleep(8000);
            listener.seekToOffset(new TopicPartition("so63429201", 0), 11);
            Thread.sleep(8000);
        };
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so63429201").partitions(3).replicas(1).build();
    }
}

@Component
class Listener extends AbstractConsumerSeekAware {

    @KafkaListener(id = "so63429201", topics = "so63429201", concurrency = "2")
    public void listen(String in) {
        System.out.println(in);
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        System.out.println(assignments);
        super.onPartitionsAssigned(assignments, callback);
        callback.seekToBeginning(assignments.keySet());
    }

    public void seekToTime(long time) {
        getSeekCallbacks().forEach((tp, callback) -> callback.seekToTimestamp(tp.topic(), tp.partition(), time));
    }

    public void seekToOffset(TopicPartition tp, long offset) {
        getSeekCallbackFor(tp).seek(tp.topic(), tp.partition(), offset);
    }
}
Starting with Spring Kafka version 2.5.5, we can apply an initial offset to all assigned partitions:
@KafkaListener(groupId = "group_json", containerFactory = "userKafkaListenerFactory", topicPartitions = {
        @org.springframework.kafka.annotation.TopicPartition(topic = "Kafka_Topic", partitions = {"0"},
                partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "3")),
        @org.springframework.kafka.annotation.TopicPartition(topic = "Kafka_Topic_2", partitions = {"0"},
                partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "4"))
})
public void consumeJson(User user, ConsumerRecord<?, ?> consumerRecord, Acknowledgment acknowledgment) throws Exception {
    // Reading the message into a String variable
    String message = consumerRecord.value().toString();
}
Source: https://docs.spring.io/spring-kafka/docs/2.5.5.RELEASE/reference/html/#reference

Call a method on a specific dates using ThreadPoolTaskExecutor

I have a method that I wish to run once using Spring and it needs to run on a given java.util.Date (or LocalDateTime alternatively). I am planning to persist all of the dates that the method should execute to a data source. It should run asynchronously.
One way is to check the DB every day for a date and execute the method if the date has passed and hasn't been executed. Is there a better way?
I know that Spring offers a ThreadPoolTaskScheduler and a ThreadPoolTaskExecutor. I am looking at ScheduledFuture schedule(Runnable task, Date startTime) from the TaskScheduler interface. Would I need to create a Runnable Spring managed bean just to call my method? Or is there a simpler annotation that would do this? An example would really help.
(Looked here too.)
By externalizing the scheduled date (to a database), the typical scheduling practices (i.e. cron-based or fixed scheduling) no longer apply. Given a target Date, you can schedule the task accurately as follows:
Date now = new Date();
Date next = ...; // get the next date from the external source
long delay = next.getTime() - now.getTime();
scheduler.schedule(task, delay, TimeUnit.MILLISECONDS);
What remains is to create an efficient approach to dispatching each new task.
The following has a TaskDispatcher thread, which schedules each Task based on the next java.util.Date (which you read from a database). There is no need to check daily; this approach is flexible enough to work with any scheduling scenario stored in the database.
Below is working code to illustrate the approach.
The example Task just sleeps for a fixed time; when it completes, the TaskDispatcher is signaled through a CountDownLatch.
public class Task implements Runnable {
    private final CountDownLatch completion;

    public Task(CountDownLatch completion) {
        this.completion = completion;
    }

    @Override
    public void run() {
        System.out.println("Doing task");
        try {
            Thread.sleep(60 * 1000); // Simulate the job taking 60 seconds
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        completion.countDown(); // Signal that the job is complete
    }
}
The dispatcher is responsible for reading the database for the next scheduled Date, launching a ScheduledFuture runnable, and waiting for the task to complete.
public class TaskDispatcher implements Runnable {
    private static final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
    private boolean isInterrupted = false;

    @Override
    public void run() {
        while (!isInterrupted) {
            Date now = new Date();
            System.out.println("Reading database for next date");
            Date next = ...; // read the next date from the database
            //Date next = new Date();                  // Used as test
            //next.setTime(now.getTime() + 10 * 1000); // Used as test
            long delay = next.getTime() - now.getTime();
            System.out.println("Scheduling next task with delay=" + delay);
            CountDownLatch latch = new CountDownLatch(1);
            ScheduledFuture<?> countdown = scheduler.schedule(new Task(latch), delay, TimeUnit.MILLISECONDS);
            try {
                System.out.println("Blocking until the current job has completed");
                latch.await();
            } catch (InterruptedException e) {
                System.out.println("Thread has been requested to stop");
                isInterrupted = true;
            }
            if (!isInterrupted)
                System.out.println("Job has completed normally");
        }
        scheduler.shutdown();
    }
}
The TaskDispatcher was started as follows (using Spring Boot) - start the thread as you normally do with Spring:
@Bean
public TaskExecutor taskExecutor() {
    return new SimpleAsyncTaskExecutor(); // Or use another one of your liking
}

@Bean
public CommandLineRunner schedulingRunner(TaskExecutor executor) {
    return new CommandLineRunner() {
        public void run(String... args) throws Exception {
            executor.execute(new TaskDispatcher());
        }
    };
}
Let me know if this approach will work for your use case.
Take a look at the #Scheduled annotation. It may accomplish what you're looking for.
@Scheduled(cron = "*/5 * * * * MON-FRI")
public void scheduledDateWork() {
    Date date = new Date(); // or use a DAO call to look up the date in the database
    executeLogic(date);
}
Cron Expression Examples from another answer:
"0 0 * * * *" = the top of every hour of every day.
"*/10 * * * * *" = every ten seconds.
"0 0 8-10 * * *" = 8, 9 and 10 o'clock of every day.
"0 0/30 8-10 * * *" = 8:00, 8:30, 9:00, 9:30 and 10 o'clock every day.
"0 0 9-17 * * MON-FRI" = on the hour nine-to-five weekdays
"0 0 0 25 12 ?" = every Christmas Day at midnight

Broadcasting using the protocol Zab in ZooKeeper

Good morning,
I am new to ZooKeeper and its protocols and I am interested in its broadcast protocol Zab.
Could you provide me with simple Java code that uses the Zab protocol of ZooKeeper? I have been searching for this but did not succeed in finding code that shows how I can use Zab.
In fact what I need is simple: I have a MapReduce job and I want all the mappers to update a variable (let's say X) whenever they find a better value of X (i.e. a bigger value). In this case, the leader has to compare the old value with the new value and then broadcast the actual best value to all mappers. How can I do such a thing in Java?
Thanks in advance,
Regards
You don't need to use the Zab protocol directly. Instead you may follow the steps below:
Have a znode, say /bigvalue, on ZooKeeper. All the mappers read the value stored in it when they start. They also put a watch for data changes on the znode. Whenever a mapper gets a better value, it updates the znode with that value. All the mappers get a notification for the data-change event, read the new best value, and re-establish the watch for data changes. That way they stay in sync with the latest best value and can update it whenever they find a better one.
Actually zkclient is a very good library for working with ZooKeeper, and it hides a lot of complexity (https://github.com/sgroschupf/zkclient). Below is an example that demonstrates how you may watch a znode "/bigvalue" for any data change.
package geet.org;

import java.io.UnsupportedEncodingException;

import org.I0Itec.zkclient.IZkDataListener;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.exception.ZkMarshallingError;
import org.I0Itec.zkclient.exception.ZkNodeExistsException;
import org.I0Itec.zkclient.serialize.ZkSerializer;
import org.apache.zookeeper.data.Stat;

public class ZkExample implements IZkDataListener, ZkSerializer {

    public static void main(String[] args) {
        String znode = "/bigvalue";
        ZkExample ins = new ZkExample();
        ZkClient cl = new ZkClient("127.0.0.1", 30000, 30000, ins);
        try {
            cl.createPersistent(znode);
        } catch (ZkNodeExistsException e) {
            System.out.println(e.getMessage());
        }
        // Change the data for fun
        Stat stat = new Stat();
        String data = cl.readData(znode, stat);
        System.out.println("Current data " + data + " version = " + stat.getVersion());
        cl.writeData(znode, "My new data ", stat.getVersion());
        cl.subscribeDataChanges(znode, ins);
        try {
            Thread.sleep(36000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void handleDataChange(String dataPath, Object data) throws Exception {
        System.out.println("Detected data change");
        System.out.println("New data for " + dataPath + " " + (String) data);
    }

    @Override
    public void handleDataDeleted(String dataPath) throws Exception {
        System.out.println("Data deleted " + dataPath);
    }

    @Override
    public byte[] serialize(Object data) throws ZkMarshallingError {
        if (data instanceof String) {
            try {
                return ((String) data).getBytes("UTF-8");
            } catch (UnsupportedEncodingException e) {
                e.printStackTrace();
            }
        }
        return null;
    }

    @Override
    public Object deserialize(byte[] bytes) throws ZkMarshallingError {
        try {
            return new String(bytes, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        }
        return null;
    }
}
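The example above only watches and logs changes. For the "update only if my value is bigger" part, here is a sketch of a compare-and-set loop using ZkClient's versioned writeData; the class and method below are made up for illustration, and it assumes the znode stores the value as a numeric string (written through a serializer like the one above):

import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.exception.ZkBadVersionException;
import org.apache.zookeeper.data.Stat;

public class BestValueUpdater {

    private static final String ZNODE = "/bigvalue";

    // Publish a candidate value only if it is still better than the stored one.
    // Retries when another mapper updated the znode between our read and our write.
    public static void offerBetterValue(ZkClient client, long candidate) {
        while (true) {
            Stat stat = new Stat();
            String current = client.readData(ZNODE, stat);
            long best = (current == null || current.trim().isEmpty())
                    ? Long.MIN_VALUE : Long.parseLong(current.trim());
            if (candidate <= best) {
                return; // someone already published a better (or equal) value
            }
            try {
                // conditional write: only succeeds if the version is unchanged since the read
                client.writeData(ZNODE, Long.toString(candidate), stat.getVersion());
                return;
            } catch (ZkBadVersionException e) {
                // lost the race - re-read the latest value and try again
            }
        }
    }
}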
