I am using Chronicle Queue v4 for writing serialized objects to a queue, but I keep getting the exception below:
Exception in thread "CLF-1" java.lang.AssertionError: you cant put a header inside a header, check that you have not nested the documents.
at net.openhft.chronicle.wire.AbstractWire.writeHeader(AbstractWire.java:228)
at net.openhft.chronicle.queue.impl.single.StoreRecovery.writeHeader(StoreRecovery.java:28)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueStore.writeHeader(SingleChronicleQueueStore.java:298)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writingDocument(SingleChronicleQueueExcerpts.java:232)
at net.openhft.chronicle.wire.MarshallableOut.writeDocument(MarshallableOut.java:68)
This is how my code looks:
SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary(queueFullPath).build();
ExcerptAppender queueWriter = queue.acquireAppender();
UserStat stat = new UserStat(); // this is my object
byte[] bytes = convertObjectToBytes(stat); // custom serialization to convert the Java object to a byte array
queueWriter.writeDocument(w -> w.getValueOut().bytes(bytes));
Nothing is written to the .cq4 file, but I see the last-modified time change every time writeDocument() is called.
Most likely (according to the stack trace) the file you're writing to is damaged. You need to clean it up and retry. It also seems you are using a fairly old version; try testing with a newer version of Chronicle Queue, as chances are high the problem has already been solved.
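For illustration, a hedged sketch of the clean-up-and-retry approach against a current Chronicle Queue 4.x; the IOTools.deleteDirWithFiles helper is an assumption (it ships with Chronicle's core utilities in recent versions):

// Remove the damaged queue files, then recreate the queue and retry the write.
IOTools.deleteDirWithFiles(queueFullPath, 2); // assumed helper from net.openhft.chronicle.core.io
try (SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary(queueFullPath).build()) {
    ExcerptAppender appender = queue.acquireAppender();
    appender.writeDocument(w -> w.getValueOut().bytes(bytes));
}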
I have a SourceTask with a simple poll() method that completes quite fast. I found that the offset value obtained from context.offsetStorageReader is mostly stale, i.e. it does not match the offset returned by the previous poll() call.
At the same time, I can observe from the logs that the offset value only becomes "fresh" when a "commitOffsets successfully" message appears.
My question is: is this designed on purpose? Should I decrease the OFFSET_COMMIT_INTERVAL_MS_CONFIG value to ensure the offset is committed faster than SourceTask.poll() executes?
The comments on the org.apache.kafka.connect.runtime.OffsetStorageWriter class say "Offset data should only be read during startup or reconfiguration of a task...", rather than being read on each execution of poll().
I had the same misconception about how Kafka Connect SourceTasks work:
Fetching the current offset in poll() only makes sense if you think about Tasks as "one-off" jobs: The Task is started, then poll() is called, which sends the record(s) to Kafka and persists its offset with it; then the Task is killed and the next Task will pick up the offset and continue reading data from the source.
This would not work well with Kafka, because the Connector partitions/offsets are themselves persisted in a Kafka topic - so there is no guarantee on how long the replication of the partition/offset value will take. This is why you receive stale offsets in the poll() method.
In reality, Tasks are started once, and after that their poll() method is called as long as the Task is running. So the Task can store its offset in memory (e.g. a field in the SourceTask deriving class) and access/update it during each poll(). An obvious result of this is that multiple tasks working on the same partition (e.g. same file, same table, ...) can lead to message duplication as their offsets are not in sync within the partition.
FileStreamSourceTask from the official Kafka repository is a good example of how reading offsets can be handled in a Source connector:
// stream is only null if the poll method has not run yet - only in this case the offset will be read!
if (stream == null) {
    ...
    Map<String, Object> offset = context.offsetStorageReader().offset(Collections.singletonMap(FILENAME_FIELD, filename));
    Object lastRecordedOffset = offset.get(POSITION_FIELD);
    ...
    streamOffset = (lastRecordedOffset != null) ? (Long) lastRecordedOffset : 0L;
}
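To make the pattern concrete, here is a hedged sketch of a SourceTask that reads the persisted offset once in start() and then maintains it in memory across poll() calls. The partition/offset keys, topic name, and record payload are illustrative assumptions, not part of the original question:

import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class InMemoryOffsetSourceTask extends SourceTask {

    private Map<String, Object> sourcePartition;
    private long streamOffset; // in-memory offset: the source of truth while the task runs

    @Override
    public void start(Map<String, String> props) {
        sourcePartition = Collections.singletonMap("source", "my-source"); // illustrative key
        // Read the persisted offset exactly once, at startup.
        Map<String, Object> offset = context.offsetStorageReader().offset(sourcePartition);
        Object last = (offset == null) ? null : offset.get("position");
        streamOffset = (last != null) ? (Long) last : 0L;
    }

    @Override
    public List<SourceRecord> poll() {
        // Read the next item from the source at streamOffset (elided), then
        // advance the in-memory offset; Connect persists it asynchronously.
        Map<String, Object> sourceOffset = Collections.singletonMap("position", ++streamOffset);
        SourceRecord record = new SourceRecord(sourcePartition, sourceOffset,
                "my-topic", null, "value-at-" + streamOffset); // illustrative record
        return Collections.singletonList(record);
    }

    @Override
    public void stop() {
    }

    @Override
    public String version() {
        return "0.0.1";
    }
}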
I am passing data as a JSONObject (as a String) in a header to the next module/processor using the code below, but in the next module the data is received as "java.io.DataInputStream@7c6a857" and I am not able to convert it back to a JSONObject as a String.
messageOut = MessageBuilder.withPayload(messagePayload).copyHeaders(payload.getHeaders()).setHeader("ActualRecord", jsonObject.toString()).build();
I assume you are using RabbitMQ as the transport.
The DefaultMessagePropertiesConverter used within the bus limits headers to 1024 bytes by default; anything larger is left as a DataInputStream in the headers. The MessagePropertiesConverter is configurable for this size, but XD does not currently expose a setting to allow this to be increased.
You need to read from the DataInputStream into a byte[] to restore the JSON.
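For illustration, a minimal sketch of that conversion, assuming the header value arrives as a DataInputStream backed by an in-memory byte array (as RabbitMQ LongStrings are), so available() reports the full remaining length:

Object raw = message.getHeaders().get("ActualRecord");
if (raw instanceof DataInputStream) {
    DataInputStream dis = (DataInputStream) raw;
    byte[] bytes = new byte[dis.available()]; // full length for an in-memory stream
    dis.readFully(bytes);
    String json = new String(bytes, java.nio.charset.StandardCharsets.UTF_8);
    // parse json back into your JSONObject here
}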
I have opened a JIRA Issue to support configuration of the LongString limit.
Using Chronicle with vertx.io...
I create a new Chronicle per verticle, i.e. one instance per thread.
chronicle = ChronicleQueueBuilder.indexed("samePath").build(); // created per thread, all pointing at the same queue path
Now for each HTTP POST request I do the following; each POST is handled by exactly one thread at a time.
String message = request.toString();
ExcerptAppender appender = chronicle.createAppender();
// Start an excerpt with ~100 bytes of headroom beyond the message length
appender.startExcerpt(message.length() + 100);
// Write the message as binary
appender.writeObject(message);
// Commit
appender.finish();
This seems to work. But is it ok?
This is not OK for IndexedChronicle, whereas it is for VanillaChronicle.
If you can, it is best to share the same VanillaChronicle instance among verticles (in the same process, of course) and create an appender on demand.
Note that you can use writeUTF* instead of writeObject to serialize strings, as it is much more efficient.
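A hedged sketch of that arrangement, assuming the Chronicle v3 API generation used in the question (ChronicleQueueBuilder.vanilla and writeUTF are assumptions based on that API):

// One shared instance for the whole process (e.g. created at startup).
Chronicle chronicle = ChronicleQueueBuilder.vanilla("samePath").build();

// Inside each verticle, per request:
String message = request.toString();
ExcerptAppender appender = chronicle.createAppender(); // obtained on demand
appender.startExcerpt(message.length() + 100);
appender.writeUTF(message); // more efficient than writeObject for strings
appender.finish();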
I am using Spring's Reactor pattern in my web application. Internally it uses LMAX's RingBuffer implementation as one of its message queues. I was wondering if there's any way to find out the current RingBuffer occupancy dynamically. It would help me determine the number of producers and consumers needed (and also their relative rates), and whether the RingBuffer as a message queue is being used optimally.
I tried the getBacklog() of the reactor.event.dispatch.AbstractSingleThreadDispatcher class, but it seems to always give the same value: the size of the RingBuffer I used while instantiating the reactor.
Any light on the problem would be greatly appreciated.
Use com.lmax.disruptor.Sequencer.remainingCapacity()
To get access to the Sequencer instance, you have to create it explicitly, along with the RingBuffer.
In my case, the initialization of the outgoing Disruptor
Disruptor<MessageEvent> outcomingDisruptor =
new Disruptor<MessageEvent>(
MyEventFactory.getInstance(),
RING_BUFFER_OUT_SIZE,
MyExecutor.getInstance(),
ProducerType.SINGLE, new BlockingWaitStrategy());
transforms into
this.sequencer =
    new SingleProducerSequencer(RING_BUFFER_OUT_SIZE, new BlockingWaitStrategy());
RingBuffer<MessageEvent> ringBuffer =
    new RingBuffer<MessageEvent>(MyEventFactory.getInstance(), sequencer);
Disruptor<MessageEvent> outcomingDisruptor =
    new Disruptor<MessageEvent>(ringBuffer, MyExecutor.getInstance());
and then
public long getOutCapacity() {
    return sequencer.remainingCapacity();
}
UPDATE
small bug :|
we need outMessagesCount instead of getOutCapacity:
public long outMessagesCount() {
return RING_BUFFER_OUT_SIZE - sequencer.remainingCapacity();
}
The latest version of reactor-core in mvnrepository (1.1.4.RELEASE) does not provide a way to dynamically monitor the state of the message queue. However, after going through the reactor code on GitHub, I found the TraceableDelegatingDispatcher, which allows tracing the message queue (if supported by the underlying dispatcher implementation) at runtime via its remainingSlots() method. The easiest option was to compile the source and use it.
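For illustration, a hedged sketch of how that wrapper might be used; only the class name and remainingSlots() come from the source above, while the single-argument constructor and the way the underlying Dispatcher is obtained are assumptions:

// Assumes reactor-core 1.x compiled from source so the tracing wrapper is available.
Environment env = new Environment();
Dispatcher delegate = env.getDefaultDispatcher(); // RingBuffer-backed by default
TraceableDelegatingDispatcher traceable = new TraceableDelegatingDispatcher(delegate);

long freeSlots = traceable.remainingSlots();   // free slots right now
long occupancy = RING_BUFFER_SIZE - freeSlots; // RING_BUFFER_SIZE: the size you configured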
I have a Java desktop client application that uploads files to a REST service.
All calls to the REST service are handled using the Spring RestTemplate class.
I'm looking to implement a progress bar and cancel functionality as the files being uploaded can be quite big.
I've been looking for a way to implement this on the web but have had no luck.
I tried implementing my own ResourceHttpMessageConverter and substituting the writeInternal() method but this method seems to be called during some sort of buffered operation prior to actually posting the request (so the stream is read all in one go before sending takes place).
I've even tried overriding the CommonsClientHttpRequestFactory.createRequest() method and implementing my own RequestEntity class with a special writeRequest() method but the same issue occurs (stream is all read before actually sending the post).
Am I looking in the wrong place? Has anyone done something similar?
A lot of the stuff I've read on the web about implementing progress bars talks about starting the upload off and then using separate AJAX requests to poll the web server for progress, which seems like an odd way to go about it.
Any help or tips greatly appreciated.
This is an old question but it is still relevant.
I tried implementing my own ResourceHttpMessageConverter and substituting the writeInternal() method but this method seems to be called during some sort of buffered operation prior to actually posting the request (so the stream is read all in one go before sending takes place).
You were on the right track. Additionally, you need to disable request body buffering on the RestTemplate's HttpRequestFactory, something like this:
HttpComponentsClientHttpRequestFactory clientHttpRequestFactory = new HttpComponentsClientHttpRequestFactory();
clientHttpRequestFactory.setBufferRequestBody(false);
RestTemplate restTemplate = new RestTemplate(clientHttpRequestFactory);
Here's a working example for tracking file upload progress with RestTemplate.
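As an illustration of the same idea, here is a hedged sketch: with buffering disabled as above, the request body is streamed through a counting loop so a UI callback can observe progress. The URI, content type, and updateProgressBar callback are illustrative assumptions:

File file = new File("/path/to/upload.bin"); // illustrative path
restTemplate.execute(URI.create("https://example.com/upload"), HttpMethod.POST,
    request -> {
        request.getHeaders().setContentType(MediaType.APPLICATION_OCTET_STREAM);
        long total = file.length();
        long sent = 0;
        try (InputStream in = new FileInputStream(file)) {
            OutputStream out = request.getBody();
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
                sent += n;
                updateProgressBar(sent, total); // hypothetical UI callback
            }
        }
    },
    response -> null);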
There was not much detail about what this app is or how it works, so this response is vague, but I believe you can do something like the following to track your upload progress.
If this really is a Java client app (i.e. not HTML/JavaScript but a Java program) and you really are having it upload the file as a stream, then you should be able to track your upload progress by counting the bytes in the array being transmitted in the stream buffer and comparing that to the total byte count from the file object.
When you get the file, get its size.
long totalFile = file.length(); // note: file.getTotalSpace() reports the disk partition size, not the file size
Wherever you are transmitting as a stream, you are presumably adding bytes to an output buffer of some kind:
byte[] bytesFromSomeFileReader = Files.readAllBytes(file.toPath()); // or however you read the file
ByteArrayOutputStream byteStreamToServer = new ByteArrayOutputStream();
int bytesTransmitted = 0;

for (byte fileByte : bytesFromSomeFileReader) {
    byteStreamToServer.write(fileByte);
    //
    // Update your progress bar every kilobyte sent.
    //
    bytesTransmitted++;
    if ((bytesTransmitted % 1000) == 0) {
        someMethodToUpdateProgressBar();
    }
}