After the disk ran out of space I got an InternalError. Adding disk space didn't fix the problem.
Is it possible to recover and continue persisting?
Maybe on error I could try to close and recreate the queue?
Creation of the queue:
queue = SingleChronicleQueueBuilder.binary(basePath)
        .build();
Writing happens on a single thread, "TradeReactorEventPersister-1":
ExcerptAppender appender = acquireAppender; // acquireAppender is a field caching the appender lazily
if (appender == null) {
    appender = queue.acquireAppender();
    acquireAppender = appender;
}
appender.writeBytes(BytesStore.wrap(b));
After that I got the following exceptions:
2019-08-23 08:13:26.963 +0000 ERROR [TradeReactorEventPersister-1] LoggingUncaughtExceptionHandler - Uncaught exception a fault occurred in a recent unsafe memory access operation in compiled Java code in thread TradeReactorEventPersister-1
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
at net.openhft.chronicle.wire.AbstractWire.updateHeaderAssertions(AbstractWire.java:546)
at net.openhft.chronicle.wire.AbstractWire.updateHeader(AbstractWire.java:533)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writeBytes(SingleChronicleQueueExcerpts.java:470)
2019-08-23 08:13:26.965 +0000 ERROR [TradeReactorEventPersister-1] LoggingUncaughtExceptionHandler - Uncaught exception a fault occurred in a recent unsafe memory access operation in compiled Java code in thread TradeReactorEventPersister-1
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
at net.openhft.chronicle.wire.AbstractWire.updateHeaderAssertions(AbstractWire.java:547)
at net.openhft.chronicle.wire.AbstractWire.updateHeader(AbstractWire.java:533)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writeBytes(SingleChronicleQueueExcerpts.java:470)
2019-08-23 08:13:27.166 +0000 ERROR [TradeReactorEventPersister-1] LoggingUncaughtExceptionHandler - Uncaught exception a fault occurred in a recent unsafe memory access operation in compiled Java code in thread TradeReactorEventPersister-1
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
at net.openhft.chronicle.wire.AbstractWire.updateHeader(AbstractWire.java:511)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writeBytes(SingleChronicleQueueExcerpts.java:470)
2019-08-23 08:13:27.167 +0000 ERROR [TradeReactorEventPersister-1] LoggingUncaughtExceptionHandler - Uncaught exception you cant put a header inside a header, check that you have not nested the documents. If you are using Chronicle-Queue please ensure that you have a unique instance of the Appender per thread, in other-words you can not share appenders across threads. in thread TradeReactorEventPersister-1
java.lang.AssertionError: you cant put a header inside a header, check that you have not nested the documents. If you are using Chronicle-Queue please ensure that you have a unique instance of the Appender per thread, in other-words you can not share appenders across threads.
at net.openhft.chronicle.wire.AbstractWire.enterHeader(AbstractWire.java:322)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writeHeader(SingleChronicleQueueExcerpts.java:405)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writeBytes(SingleChronicleQueueExcerpts.java:463)
I couldn't persist even after adding disk space.
I got the last exception on every attempt to persist an event:
2019-08-23 08:22:50.746 +0000 ERROR [TradeReactorEventPersister-1] LoggingUncaughtExceptionHandler - Uncaught exception you cant put a header inside a header, check that you have not nested the documents. If you are using Chronicle-Queue please ensure that you have a unique instance of the Appender per thread, in other-words you can not share appenders across threads. in thread TradeReactorEventPersister-1
java.lang.AssertionError: you cant put a header inside a header, check that you have not nested the documents. If you are using Chronicle-Queue please ensure that you have a unique instance of the Appender per thread, in other-words you can not share appenders across threads.
at net.openhft.chronicle.wire.AbstractWire.enterHeader(AbstractWire.java:322)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writeHeader(SingleChronicleQueueExcerpts.java:405)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writeBytes(SingleChronicleQueueExcerpts.java:463)
Unfortunately Chronicle Queue works at a pretty low level, and it is unable to automatically repair itself after data corruption (and running out of disk space will inevitably lead to data corruption). To help avoid this, Chronicle Queue will warn you if you are running low on disk space; you should've seen a warning message like:
"your disk " + this.fileStore + " is almost full, warning: chronicle-queue may crash if it runs out of space."
PS: you shouldn't really need the lazy acquire logic. You can always simply call queue.acquireAppender() - it's a cheap call which gets a preexisting appender from the ThreadLocal pool and will not create a new appender every time.
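For example, the write path from the question could be reduced to something like this (a minimal sketch; queue and the byte array b are the ones from the question):
// acquire on every write; the appender is pooled per thread, so this is cheap
ExcerptAppender appender = queue.acquireAppender();
appender.writeBytes(BytesStore.wrap(b));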
I need to record the failure reason in metrics for each failed HTTP call when using the Vert.x WebClient. This compiles:
.onFailure()
.retry()
.withBackOff(Duration.ofMillis(INITIAL_RETRY_DELAY_MS))
.until(retryTimeExpired(wrapper))
I'm recording metrics in the retryTimeExpired method, but at runtime I get this:
Caused by: java.lang.IllegalArgumentException: Invalid retry configuration, `when` cannot be used with a back-off configuration
at io.smallrye.mutiny.groups.UniRetry.when(UniRetry.java:156)
at io.smallrye.mutiny.groups.UniRetry.until(UniRetry.java:137)
I could of course add a sleep, but this is reactive. It would be possible to block for a short time, but I would hate to block the thread. Any ideas how to do this without sleeping?
You could try using several sequential onFailure stages. As long as the first one doesn't handle the exception (recoverWithItem, recoverWithNull, recoverWithUni) or throw its own, the next one should observe the same failure.
service.apiMethod(key)
        .invoke(record -> logger.info("Succeeded to read record."))
        .onFailure()
        .invoke(exception -> logger.warn("Failed to read record."))
        .onFailure()
        .retry().withBackOff(delay).indefinitely();
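Applied to the question's use case, the failure reason could be recorded in that first onFailure().invoke(...) callback instead of in until(...). A rough sketch, assuming a Micrometer MeterRegistry is available (the meter name and registry field are hypothetical):
service.apiMethod(key)
        .onFailure()
        // hypothetical metrics call: count each failure, tagged by exception type
        .invoke(failure -> meterRegistry
                .counter("webclient.call.failures", "reason", failure.getClass().getSimpleName())
                .increment())
        .onFailure()
        .retry().withBackOff(Duration.ofMillis(INITIAL_RETRY_DELAY_MS)).indefinitely();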
In my batch job, I have a single step that reads from a database, processes the record and writes the same record back to the same table (i.e. updating the record with the processed values, or with the error reason if processing failed).
I am using an AsyncItemProcessor for multi-threaded processing. When I get an error in the ItemProcessor.process() method, I throw an exception and the batch job ends with FAILED status. This failed status is a requirement.
Because it's an AsyncItemProcessor, I am unable to access ItemProcessListener.onProcessError().
How do I write the errorMessage to the Item Table when there is an error?
This is a known limitation of using the AsyncItemProcessor, which is mentioned in its Javadoc:
While not an exhaustive list, things like StepExecution.filterCount will not
reflect the number of filtered items and
ItemProcessListener.onProcessError(Object, Exception) will not be called.
There is an open issue to update the reference documentation as well.
How do I write the errorMessage to the Item Table when there is an error?
The AsyncItemProcessor submits a FutureTask to the task executor, and the only way to know if an exception happened in the task is by unwrapping the future (the exception will actually be wrapped in a java.util.concurrent.ExecutionException when FutureTask.get is called). Now since the future is unwrapped in the AsyncItemWriter, you can use an ItemWriteListener and react to processing errors. You can find a complete example here.
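A rough sketch of that idea, assuming a JDBC-backed item table (the listener class, table and column names are hypothetical, and how the failed record is identified depends on your item type; depending on the Spring Batch version the listener may receive the original exception or one wrapping it):
import java.util.List;

import org.springframework.batch.core.ItemWriteListener;
import org.springframework.jdbc.core.JdbcTemplate;

public class ErrorMessageWriteListener implements ItemWriteListener<Object> {

    private final JdbcTemplate jdbcTemplate;

    public ErrorMessageWriteListener(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public void beforeWrite(List<? extends Object> items) {
        // no-op
    }

    @Override
    public void afterWrite(List<? extends Object> items) {
        // no-op
    }

    @Override
    public void onWriteError(Exception exception, List<? extends Object> items) {
        // The items here are the Futures submitted by the AsyncItemProcessor; the
        // exception originates from ItemProcessor.process(), so check the cause defensively.
        Throwable rootCause = exception.getCause() != null ? exception.getCause() : exception;
        // Hypothetical statement: adapt the table, column and key to your schema.
        jdbcTemplate.update("UPDATE ITEM_TABLE SET ERROR_MESSAGE = ? WHERE ID = ?",
                rootCause.getMessage(), resolveItemId(items));
    }

    private Object resolveItemId(List<? extends Object> items) {
        // Placeholder: resolving the failed record's id depends on your item type.
        return items.isEmpty() ? null : items.get(0);
    }
}
Such a listener would be registered on the step alongside the AsyncItemWriter, for example via the step builder's listener(...) method.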
We are using S4 SDK's CloudLoggerFactory to log exceptions throughout our application. For a class "SampleClass", we create a logger like this:
private static final Logger logger = CloudLoggerFactory.getSanitizedLogger(SampleClass.class, "(END)");
and call it for an Exception e:
logger.error(e.getMessage(), e);
A Veracode scan has shown this logging line to be vulnerable to CRLF injection. To my understanding, getSanitizedLogger in conjunction with the "(END)" argument should solve this issue. Can you provide some insight into this matter, please?
Thank you in advance!
Actually we plan to remove the log sanitizing feature in the upcoming major release.
We have come to the conclusion that it actually gives a false sense of security and that it should be addressed at the logger implementation level instead, which we cannot do at the SDK level as we only rely on the SLF4J abstraction.
(Disclaimer: I'm one of the SAP Cloud SDK developers.)
Update: As Sander mentioned in his answer above, we dropped the CloudLoggerFactory starting with version 3.0.0 of the SAP Cloud SDK.
Our reasoning behind this is that we cannot change the logger implementation used by every library our consumers might have in their application. This means we are not able to add the token mentioned below to all log messages of the consumer, which reduces its effectiveness tremendously.
Therefore we decided to drop the CloudLoggerFactory and advise consumers to configure their logging implementation in such a way that the token is added automatically. At that level it is possible to have the token at the end of every log message, allowing for automated tests on forged logs.
What the sanitized logger is supposed to do is make log forging identifiable. To achieve this it does the following:
It uses your provided class (SampleClass.class in your case) as the logger name. This name will be placed in the printed output depending on the configuration of your logger implementation. This is the default behavior of SLF4J.
It adds (END OF LOG ENTRY) (or your provided token) at the end of every log message created with this logger. If this token is encountered within your log message it is replaced with (MESSAGE MIGHT BE FORGED!), as that would be an indicator that some input tried to tamper with your log messages.
Both of these properties allow you to identify whether a log message is actually valid or was created via Log Forging.
To see this, have a look at the following example, first with the "unsanitized" logger:
final Logger logger = CloudLoggerFactory.getLogger(SampleClass.class);
logger.error("Some valid first message");
logger.info("Something still valid\n[main] ERROR very.important.class Major Database Error!");
logger.error("Some valid last message");
On my machine the output looks like this:
[main] ERROR com.sap.sandbox.SampleClass - Some valid first message
[main] INFO com.sap.sandbox.SampleClass - Something still valid
[main] ERROR very.important.class Major Database Error!
[main] ERROR com.sap.sandbox.SampleClass - Some valid last message
So there is no chance to identify that something is wrong with those messages.
Therefore, if you use CloudLoggerFactory.getSanitizedLogger instead of CloudLoggerFactory.getLogger you get the following log output:
[main] ERROR com.sap.sandbox.SampleClass - Some valid first message (END OF LOG ENTRY)
[main] INFO com.sap.sandbox.SampleClass - Something still valid
[main] ERROR very.important.class Major Database Error! (END OF LOG ENTRY)
[main] ERROR com.sap.sandbox.SampleClass - Some valid last message (END OF LOG ENTRY)
Here you can see that one of the messages from the SampleClass, which should actually end with the token, ends without one. Therefore you can deduce that there is some error in the log and you need to investigate this issue further.
So much for the Log Forging aspect, which is the actual attack the sanitized logger makes identifiable.
Regarding the CRLF injection issue: this heavily depends on how the created log output is used further:
If you store the log messages in a database there needs to be some way to prevent SQL injection.
If you watch the log files with a web-based log analyzer there needs to be some way to prevent XSS.
...
If we escaped for all of those potential use cases, it would make simply reading the log files in an editor, which is in my opinion the most common use case, much more complicated.
So you would need to decide whether this is an actual issue in your case or just a false positive.
Another point is that all your other dependencies would also need to escape their log messages for your use case. This means an easier and overarching solution would be to configure this on the actual logger implementation, e.g. for Logback: https://logback.qos.ch/manual/layouts.html#replace.
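For illustration, a Logback encoder pattern using the replace conversion word could look roughly like this (the surrounding configuration and pattern layout are assumptions; the replace call strips CR and LF characters from the message):
<encoder>
  <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %replace(%msg){'[\r\n]', ''}%n</pattern>
</encoder>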
I am using Chronicle Queue v4 for writing serialized objects to a queue, but I keep getting the exception below:
Exception in thread "CLF-1" java.lang.AssertionError: you cant put a header inside a header, check that you have not nested the documents.
at net.openhft.chronicle.wire.AbstractWire.writeHeader(AbstractWire.java:228)
at net.openhft.chronicle.queue.impl.single.StoreRecovery.writeHeader(StoreRecovery.java:28)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueStore.writeHeader(SingleChronicleQueueStore.java:298)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writingDocument(SingleChronicleQueueExcerpts.java:232)
at net.openhft.chronicle.wire.MarshallableOut.writeDocument(MarshallableOut.java:68)
This is how my code looks:
SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary(queueFullPath).build();
ExcerptAppender queueWriter = queue.acquireAppender();
UserStat stat = new UserStat(); // this is my object
byte[] bytes = convertObjectToBytes(stat); // custom serialization to convert the Java object to a byte array
queueWriter.writeDocument(w -> w.getValueOut().bytes(bytes));
Nothing is written to the .cq4 file, but I see the last modified time change every time the writeDocument() method is called.
Most likely (according to the stack trace) the file you're writing to is damaged. You need to clean it up and retry (and it seems you were using a fairly old version). Try to test with a newer version of Chronicle Queue - chances are high the issue has already been solved.
I am trying to see the execution status of a job I ran, but at some random points I get the following error:
2015-10-14T14:41:24-0400 1.2.0.RELEASE ERROR qtp195949131-28 rest.RestControllerAdvice - Caught exception while handling a request
org.springframework.http.converter.HttpMessageNotWritableException: Could not write content: java.lang.Integer cannot be cast to java.lang.String (through reference chain: org.springframework.xd.rest.domain.JobExecutionInfoResource["jobExecution"]->org.springframework.batch.core.JobExecution["executionContext"]->org.springframework.batch.item.ExecutionContext["values"]->java.util.concurrent.EntrySetView[0]->java.util.concurrent.MapEntry["value"]->java.util.ArrayList[0]); nested exception is com.fasterxml.jackson.databind.JsonMappingException: java.lang.Integer cannot be cast to java.lang.String (through reference chain: org.springframework.xd.rest.domain.JobExecutionInfoResource["jobExecution"]->org.springframework.batch.core.JobExecution["executionContext"]->org.springframework.batch.item.ExecutionContext["values"]->java.util.concurrent.EntrySetView[0]->java.util.concurrent.MapEntry["value"]->java.util.ArrayList[0])
Now, I say "random", but the truth is I don't even know which step causes this exception since those are the only logs I have. The jobs run successfully with seemingly no errors, but this really worries me. I've been looking online for days for this, but I don't see anything that can either help me debug this, or even gives an inkling of what might cause this. Any help?
Thanks, N.S.
Okay, so I figured out that the problem was that we were serializing a list of maps in the execution context, and one of those internal maps contained an Integer where a String was expected. This seems to cause the deserialization of the context to crash.
Solution? Don't store that list in the execution context (instead we wrote the whole list to a file for transferring it between the various steps).
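A minimal sketch of that workaround, assuming the list itself is Serializable (the helper class, file naming and context key are hypothetical): write the list to a temporary file and keep only the path, a plain String, in the execution context.
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Map;

public final class ExecutionContextFiles {

    // Serialize the list to a temp file and return its path; only the path (a String)
    // goes into the execution context, so the context serializer never touches the mixed-type maps.
    public static String writeListToFile(List<Map<String, Object>> records) throws IOException {
        Path file = Files.createTempFile("job-records-", ".ser");
        try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(file))) {
            out.writeObject(records);
        }
        return file.toAbsolutePath().toString();
    }
}
The returned path can then be stored with executionContext.putString("recordsFile", path) and read back in a later step.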