Can multiple Chronicle/ExcerptAppenders write to the same queue?

Using Chronicle with vertx.io...
I create a new Chronicle per verticle, i.e. one instance per thread:
chronicle = ChronicleQueueBuilder.indexed("samePath").build(); // created per thread, pointing at the same queue path
Now for each HTTP POST request I do the following (each POST is handled by exactly one thread at a time):
String message = request.toString();
ExcerptAppender appender = chronicle.createAppender();
// Reserve space for the message plus some headroom
appender.startExcerpt(message.length() + 100);
// Write the message using object serialization
appender.writeObject(message);
// Commit
appender.finish();
This seems to work. But is it ok?

This is not OK for IndexedChronicle, whereas it is for VanillaChronicle.
If you can, it is best to share the same VanillaChronicle instance among verticles (within the same process, of course) and create an appender on demand.
Note that you can use writeUtf* instead of writeObject to serialize strings, as it is much more efficient.
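To see why writing the characters directly is cheaper, compare JDK object serialization of a String with its raw UTF-8 bytes. This is a stand-alone illustration, not Chronicle's wire format, but the idea is the same: object serialization adds per-message type metadata that plain UTF-8 avoids.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

public class StringEncodingSize {
    // Size of a String written through JDK object serialization,
    // including the stream header and type information.
    static int objectSerializedSize(String s) {
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
                oos.writeObject(s);
            }
            return baos.size();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Size of the same String as raw UTF-8 bytes, with no metadata.
    static int utf8Size(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        String message = "POST /orders payload";
        System.out.println("serialized size: " + objectSerializedSize(message));
        System.out.println("UTF-8 size:      " + utf8Size(message));
    }
}
```

For short messages the serialization overhead is a significant fraction of every excerpt, which is why the string-oriented write methods are preferred.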

Related

Save consumer/tailer read offset for ChronicleQueue

I am exploring ChronicleQueue to save events generated in one of my applications. I would like to publish the saved events to a different system, in their original order of occurrence, after some processing. I have multiple instances of my application, and each instance could run a single-threaded appender to append events to the ChronicleQueue. Although ordering across instances is a necessity, I would like to understand these two questions.
1) How would the read index for my events be saved so that I don't end up reading and publishing the same message from the Chronicle queue multiple times?
In the code below (picked from the example on GitHub), the index is preserved until we reach the end of the queue after restarting the application. The moment we reach the end of the queue, we end up reading all the messages again from the start. I want to make sure that, for a particular consumer identified by a tailer ID, the messages are read only once. Do I need to save the read index in another queue and use that to achieve what I need here?
String file = "myPath";
try (ChronicleQueue cq = SingleChronicleQueueBuilder.binary(file).build()) {
    for (int i = 0; i < 10; i++) {
        cq.acquireAppender().writeText("test" + i);
    }
}
try (ChronicleQueue cq = SingleChronicleQueueBuilder.binary(file).build()) {
    ExcerptTailer atailer = cq.createTailer("a");
    System.out.println(atailer.readText());
    System.out.println(atailer.readText());
    System.out.println(atailer.readText());
}
2) I also need some suggestions on whether there is a way to preserve the ordering of events across instances.
Using a named tailer should ensure that the tailer only reads a message once. If you have an example where this doesn't happen, can you create a test to reproduce it?
The order of entries in a queue is fixed at write time, and all tailers see the same messages in the same order; there isn't any option to change this.
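Conceptually, a named tailer keeps an independent, persisted read position per name. The toy model below is not Chronicle code (the class and its in-memory index map are invented for illustration); a real named tailer stores its index alongside the queue so it survives restarts, which is exactly why you don't need a second queue for offsets.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NamedTailerSketch {
    private final List<String> queue;                              // the append-only message log
    private final Map<String, Integer> readIndex = new HashMap<>(); // per-tailer-name position

    NamedTailerSketch(List<String> queue) {
        this.queue = queue;
    }

    // Mimics ExcerptTailer.readText() for a named tailer: each name
    // advances its own index, so a message is consumed once per name.
    String readText(String tailerName) {
        int idx = readIndex.getOrDefault(tailerName, 0);
        if (idx >= queue.size()) {
            return null; // end of queue
        }
        readIndex.put(tailerName, idx + 1);
        return queue.get(idx);
    }

    public static void main(String[] args) {
        NamedTailerSketch cq = new NamedTailerSketch(List.of("test0", "test1", "test2"));
        System.out.println(cq.readText("a")); // test0
        System.out.println(cq.readText("a")); // test1
        System.out.println(cq.readText("b")); // test0 -- independent position per name
    }
}
```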

querying artemis queue size fails

In a Spring Boot application using Artemis, we try to avoid queues containing too many messages. The intention is to only put in new messages if the number of messages currently in the queue falls below a certain limit, e.g. 100 messages. However, that does not seem to work, and we don't know why or what the "correct" method would be to implement that functionality. The number of messages as extracted by the code below is always 0, although in the GUI there are messages.
To reproduce the problem I installed apache-artemis-2.13.0 locally.
We are doing something like the following:
if (!jmsUtil.queueHasNotMoreElementsThan(QUEUE_ALMOST_EMPTY_MAX_AMOUNT, reprocessingMessagingProvider.getJmsTemplate())) {
    log.info("Queue has too many messages. Will not send more...");
    return;
}
jmsUtil is implemented like
public boolean queueHasNotMoreElementsThan(int max, JmsOperations jmsTemplate) {
    return Boolean.TRUE.equals(
            jmsTemplate.browse((session, queueBrowser) -> {
                Enumeration enumeration = queueBrowser.getEnumeration();
                return notMoreElemsThan(enumeration, max);
            }));
}

private Boolean notMoreElemsThan(Enumeration enumeration, int max) {
    for (int i = 0; i <= max; i++) {
        if (!enumeration.hasMoreElements()) {
            return true;
        }
        enumeration.nextElement();
    }
    return false;
}
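The bounded-count logic itself can be exercised in isolation with a plain java.util.Enumeration (a stand-alone harness with no JMS involved, class name invented here). It confirms the helper is correct, which points to the browser returning an empty enumeration rather than a counting bug:

```java
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;

public class BrowseBound {
    // Same early-exit logic as notMoreElemsThan above: consume at most
    // max + 1 elements instead of draining a potentially large queue.
    static boolean notMoreElemsThan(Enumeration<?> enumeration, int max) {
        for (int i = 0; i <= max; i++) {
            if (!enumeration.hasMoreElements()) {
                return true;
            }
            enumeration.nextElement();
        }
        return false;
    }

    public static void main(String[] args) {
        Enumeration<Integer> three = Collections.enumeration(List.of(1, 2, 3));
        Enumeration<Integer> five = Collections.enumeration(List.of(1, 2, 3, 4, 5));
        System.out.println(notMoreElemsThan(three, 3)); // true: 3 <= 3
        System.out.println(notMoreElemsThan(five, 3));  // false: 5 > 3
    }
}
```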
As an additional check, I used the following method to give me the number of messages in the queue directly.
public int countPendingMessages(String destination, JmsOperations jmsTemplate) {
    Integer totalPendingMessages = jmsTemplate.browse(destination,
            (session, browser) -> Collections.list(browser.getEnumeration()).size());
    int messageCount = totalPendingMessages == null ? 0 : totalPendingMessages;
    log.info("Queue {} message count: {}", destination, messageCount);
    return messageCount;
}
That method of extracting the queue size seems to be used by others as well, and is based on the documentation of QueueBrowser: "The getEnumeration method returns a java.util.Enumeration that is used to scan the queue's messages."
Would the above be the correct way on how to obtain the queue size? If so, what could be the cause of the problem? If not, how should the queue size be queried? Does spring offer any other possibility of accessing the queue?
Update: I read another post and the documentation, but I don't know how to obtain the ClientSession.
There are some caveats to using a QueueBrowser to count the number of messages in the queue. The first is noted in the QueueBrowser JavaDoc:
Messages may be arriving and expiring while the scan is done. The JMS API does not require the content of an enumeration to be a static snapshot of queue content. Whether these changes are visible or not depends on the JMS provider.
So already the count may not be 100% accurate.
Then there is the fact that there may be messages still technically in the queue which have been dispatched to a consumer but have not yet been acknowledged. These messages will not be counted by the QueueBrowser even though they may be cancelled back to the queue at any point if the related consumer closes its connection.
Simply put, the JMS API doesn't provide a truly reliable way to determine the number of messages in a queue. Furthermore, Spring JMS is tied to the JMS API; it doesn't have any other way to interact with a JMS broker. Given that, you'll need to use a provider-specific mechanism to determine the message count.
ActiveMQ Artemis has a rich management API that is accessible through, among other things, specially constructed JMS messages. You can see this in action in the "Management" example that ships with ActiveMQ Artemis in the examples/features/standard/management directory. It demonstrates how to use JMS resources and provider-specific helper classes to get the message count for a JMS queue. This is essentially the same solution as given in the other post you mentioned, but it uses the JMS API rather than the ActiveMQ Artemis "core" API.

Chronicle queue : AssertionError (header inside header) when writing byte array

I am using Chronicle Queue v4 for writing serialized objects to a queue, but I keep getting the exception below:
Exception in thread "CLF-1" java.lang.AssertionError: you cant put a header inside a header, check that you have not nested the documents.
at net.openhft.chronicle.wire.AbstractWire.writeHeader(AbstractWire.java:228)
at net.openhft.chronicle.queue.impl.single.StoreRecovery.writeHeader(StoreRecovery.java:28)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueStore.writeHeader(SingleChronicleQueueStore.java:298)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writingDocument(SingleChronicleQueueExcerpts.java:232)
at net.openhft.chronicle.wire.MarshallableOut.writeDocument(MarshallableOut.java:68)
This is how my code looks
SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary(queueFullPath).build();
ExcerptAppender queueWriter = queue.acquireAppender();
UserStat stat = new UserStat(); // this is my object
byte[] bytes = convertObjectToBytes(stat); // custom serialization of the Java object to a byte array
queueWriter.writeDocument(w -> w.getValueOut().bytes(bytes));
Nothing is written to the .cq4 file, but I see the last-modified time change every time writeDocument() is called.
Most likely (according to the stack trace) the file you're writing to is damaged. You need to clean it up and retry. It also seems you were using a fairly old version; try testing with a new version of Chronicle Queue, as chances are high the issue has been solved.

Regarding sub-topics in chronicle queue

I'm looking to write messages to a single queue. I'd like to use the sub-topics functionality, so that tailers can pick and choose either to read all of the sub-topics under one topic, or pick specific sub-topics to read from.
The documentation mentions that sub-topics are supported in a directory under the main topic, so in order to read from a subtopic, do we just create a new queue and point it to the sub-topic path?
SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary("Topic").build();
SingleChronicleQueue queue2 = SingleChronicleQueueBuilder.binary("Topic/SubTopic").build();
ExcerptAppender appender = queue.acquireAppender();
ExcerptAppender appender2 = queue2.acquireAppender();
appender.writeText("aaa");
appender2.writeText("bbb");
This will output only aaa, but I want it to output both aaa and bbb.
There is no real concept of hierarchy in Chronicle-Queue; there is a one-to-one mapping between file-system directory and queue.
If you wish to filter certain records, you will need to do that when reading the records out of the queue. It will be up to your application to decide how to detect messages that should be filtered.
The documentation you refer to appears to have been copied from concepts that exist in Chronicle-Engine.
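One way to do that filtering in the application, sketched here with plain Java collections (the "topic|payload" prefix convention and the class name are invented for illustration, not a Chronicle feature):

```java
import java.util.List;
import java.util.stream.Collectors;

public class SubTopicFilter {
    // One flat queue; each message carries its sub-topic in the payload.
    // A tailer interested in one sub-topic filters while reading.
    static List<String> filterTopic(List<String> queue, String topic) {
        String prefix = topic + "|";
        return queue.stream()
                .filter(m -> m.startsWith(prefix))
                .map(m -> m.substring(prefix.length()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> queue = List.of("orders|aaa", "trades|bbb", "orders|ccc");
        System.out.println(filterTopic(queue, "orders")); // [aaa, ccc]
    }
}
```

A tailer that wants every sub-topic simply skips the filter step; either way, all messages live in one queue directory and stay in one global order.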

Getting the current number of messages in the ring buffer

I am using Spring's Reactor pattern in my web application. Internally it uses LMAX's RingBuffer implementation as one of its message queues. I was wondering if there's any way to find out the current RingBuffer occupancy dynamically. It would help me determine the number of producers and consumers needed (and also their relative rates), and whether the RingBuffer as a message queue is being used optimally.
I tried the getBacklog() of the reactor.event.dispatch.AbstractSingleThreadDispatcher class, but it seems to always give the same value: the size of the RingBuffer I used while instantiating the reactor.
Any light on the problem would be greatly appreciated.
Use com.lmax.disruptor.Sequencer.remainingCapacity()
To have access to the Sequencer instance you have to create it explicitly, as well as the RingBuffer.
In my case, the initialization of the outgoing Disruptor
Disruptor<MessageEvent> outcomingDisruptor =
    new Disruptor<MessageEvent>(
        MyEventFactory.getInstance(),
        RING_BUFFER_OUT_SIZE,
        MyExecutor.getInstance(),
        ProducerType.SINGLE, new BlockingWaitStrategy());
transforms into
this.sequencer =
    new SingleProducerSequencer(RING_BUFFER_OUT_SIZE, new BlockingWaitStrategy());
RingBuffer<MessageEvent> ringBuffer =
    new RingBuffer<MessageEvent>(MyEventFactory.getInstance(), sequencer);
Disruptor<MessageEvent> outcomingDisruptor =
    new Disruptor<MessageEvent>(ringBuffer, MyExecutor.getInstance());
and then
public long getOutCapacity() {
    return sequencer.remainingCapacity();
}
UPDATE
small bug :|
we need outMessagesCount instead of getOutCapacity.
public long outMessagesCount() {
    return RING_BUFFER_OUT_SIZE - sequencer.remainingCapacity();
}
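The occupancy arithmetic (total capacity minus remaining capacity) can be sanity-checked with the JDK's ArrayBlockingQueue, which exposes the same remainingCapacity() notion as the Disruptor's Sequencer. This is only a stdlib stand-in for the formula, not Disruptor code:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class OccupancyDemo {
    // Occupancy = total capacity - remaining capacity,
    // the same formula used in outMessagesCount() above.
    static int occupancy(ArrayBlockingQueue<?> buffer, int capacity) {
        return capacity - buffer.remainingCapacity();
    }

    public static void main(String[] args) {
        ArrayBlockingQueue<String> buffer = new ArrayBlockingQueue<>(8);
        buffer.offer("a");
        buffer.offer("b");
        buffer.offer("c");
        System.out.println(occupancy(buffer, 8)); // 3
    }
}
```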
The latest version of reactor-core in mvnrepository (version 1.1.4 RELEASE) does not have a way to dynamically monitor the state of the message queue. However, after going through the reactor code on github, I found the TraceableDelegatingDispatcher, which allows tracing the message queue (if supported by the underlying dispatcher implementation) at runtime, via its remainingSlots() method. The easiest option was to compile the source and use it.
