.ToList throwing exception - linq

I have a sorted dictionary, which stores some data. Every two minutes or so I do the following:
sortedDictionary.Values.ToList() is the line that sometimes throws an exception. It is not consistent. The exception is as follows:
System.IndexOutOfRangeException: Index was outside the bounds of the array.
at System.Collections.Generic.SortedDictionary`2.ValueCollection.<>c__DisplayClass11.<CopyTo>b__10(Node node)
at System.Collections.Generic.TreeSet`1.InOrderTreeWalk(TreeWalkAction`1 action)
at System.Collections.Generic.SortedDictionary`2.ValueCollection.CopyTo(TValue[] array, Int32 index)
at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
at Aditi.BMCMUtility.UnModCacheMaster.MessageAddition(PrivateMessage message, Nullable`1 cnt)
Any idea why this exception is coming from the .NET LINQ code?
It is within a lock. I am posting the code snippet. (This is after bzlm's response.)
try
{
    lock (locker)
    {
        MessageCache.Add(message.MessageID, message);
        privMsgList = MessageCache.Values.ToList(); // Line which throws the exception
        while (MessageCache.Count > cnt.Value)
        {
            MessageCache.Remove(MessageCache.First().Key);
        }
    }
}
catch (Exception ex)
{
    // Swallowing all exceptions here hides the real failure.
}

It could be that the collection is being modified at the same time as it is being iterated. If this code is accessed by more than one thread simultaneously, you might need to lock the list for modifications while iterating it.

Locking during the ToList call is not sufficient. Every code path that modifies the dictionary must also take a lock on the same object.

Related

Why freemarker.cache.TemplateCache has storeNegativeLookup?

I have template_update_delay=24h to cache templates for 24 hours. If my URLTemplateLoader gets an IOException due to a temporary outage (HTTP status 429), then freemarker.cache.TemplateCache calls storeNegativeLookup and caches the exception too:
// Template source was removed
if (!newLookupResult.isPositive()) {
    if (debug) {
        LOG.debug(debugName + " no source found.");
    }
    storeNegativeLookup(tk, cachedTemplate, null);
    return null;
}

private void storeNegativeLookup(TemplateKey tk,
        CachedTemplate cachedTemplate, Exception e) {
    cachedTemplate.templateOrException = e;
    cachedTemplate.source = null;
    cachedTemplate.lastModified = 0L;
    storeCached(tk, cachedTemplate);
}
Later, even if the URL endpoint is up and available, freemarker.cache.TemplateCache.getTemplate() will keep picking the cachedTemplate with the IOException and will keep rethrowing the exception until the cache entry expires.
else if (t instanceof IOException) {
    rethrown = true;
    throwLoadFailedException((IOException) t);
}
This is causing the application to fail all the time.
How can I force FreeMarker to retry fetching the template from the source instead of the cache if an exception happened last time?
Failed lookups are cached for the same reason successful ones are. A failed lookup can take as much or even more time (if for example a connection timeout is involved) as a successful one, and so can jam the application if it's for a frequently used template (by consuming the whole thread pool for example).
The problem is that there's only a single templateUpdateDelay, but it's possible that you want to use a different one depending on what the cause of the error was. You should open a feature request for that on Jira.
What can you do right now, though? You can catch the exception thrown by Configuration.getTemplate and walk the cause trace to find out whether the root cause is an exception that you don't want to cache, and then call Configuration.removeTemplateFromCache (but consider whether you should only do that once in a certain time interval). I'm not sure if the exception for HTTP 429 can be recognized reliably out of the box, or if you have to customize the TemplateLoader so that the thrown exception contains the necessary information.
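As a concrete sketch of the "walk the cause trace" idea, in plain Java. The helper name hasCauseOfType is invented here, and the FreeMarker calls are shown only in comments so the snippet stands alone:

```java
// Minimal sketch of walking an exception's cause chain to decide
// whether a failed template lookup should be evicted from the cache.
// hasCauseOfType is a hypothetical helper, not a FreeMarker API.
public class TemplateRetrySketch {

    // True if the throwable, or anything in its cause chain,
    // is an instance of the given type.
    public static boolean hasCauseOfType(Throwable t, Class<? extends Throwable> type) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (type.isInstance(cur)) {
                return true;
            }
        }
        return false;
    }

    // Intended use (FreeMarker types left as comments so this
    // compiles without the freemarker jar):
    //
    // try {
    //     return configuration.getTemplate(name);
    // } catch (IOException e) {
    //     if (hasCauseOfType(e, java.net.ConnectException.class)) {
    //         // transient outage: evict so the next call hits the loader again
    //         configuration.removeTemplateFromCache(name);
    //     }
    //     throw e;
    // }
}
```

Whether a 429 surfaces as a recognizable cause type depends on your TemplateLoader; you may need it to throw a dedicated exception class so this check is reliable.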

Spring batch paginated reader and Exception handling

We created a custom item reader which extends AbstractPaginatedDataItemReader. Spring Batch lets you configure which exceptions stop a job and which do not (skippable exceptions).
In "classic" Spring Batch readers, the doRead method can throw any Exception. That means that if a skippable exception is thrown during the read, the item is skipped and the job continues running.
But in paginated readers, the doPageRead method, used to retrieve next data page, doesn't throw any exception:
protected abstract Iterator<T> doPageRead();
The doPageRead method is called by the doRead one:
protected T doRead() throws Exception {
    synchronized (lock) {
        if (results == null || !results.hasNext()) {
            results = doPageRead();
            page++;
            if (results == null || !results.hasNext()) {
                return null;
            }
        }
        if (results.hasNext()) {
            return results.next();
        }
        else {
            return null;
        }
    }
}
As the doPageRead method doesn't declare any thrown exception, does that mean a configured skippable exception can only be a RuntimeException?
Thanks
A Spring Batch reader is ultimately an ItemReader, regardless of whether it is a paging or a non-paging reader. That means it hands items over to the processor one at a time, and the read() method contract is all that matters.
Paging readers simply have an optimization in how they actually read an item, but they are no different from a regular non-paging reader.
So in my opinion, your focus on the doPageRead() method seems unnecessary; what matters is the read() method contract.
If you are facing a concrete issue (that is not clear from your question), do let me know.
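One pragmatic workaround for the original question, given that doPageRead() cannot declare checked exceptions: wrap the checked failure in a dedicated unchecked exception and configure that wrapper as the skippable type. A minimal sketch without the Spring Batch dependency (PageReadFailedException is a name invented here):

```java
import java.util.Collections;
import java.util.Iterator;

// Sketch: tunnel checked failures out of doPageRead() in an unchecked
// wrapper, then register that wrapper as a skippable exception in the
// step configuration.
public class PagedReaderSketch {

    // Hypothetical unchecked wrapper; not part of Spring Batch.
    public static class PageReadFailedException extends RuntimeException {
        public PageReadFailedException(Throwable cause) {
            super(cause);
        }
    }

    // Stand-in for the real doPageRead() override.
    public static Iterator<String> doPageRead(boolean backendDown) {
        try {
            if (backendDown) {
                throw new java.io.IOException("page fetch failed");
            }
            return Collections.singletonList("item-1").iterator();
        } catch (java.io.IOException e) {
            // Re-throw unchecked so the skip policy can match it.
            throw new PageReadFailedException(e);
        }
    }
}
```

In the step configuration you would then list PageReadFailedException among the skippable exception classes.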

How to create a custom Sonar rule to check if a method throws a certain exception?

We want to check whether a certain method, say findOne(), in certain Java classes throws a specific exception or not. If it doesn't throw the exception, an issue should be reported at the method level.
We could use
public void visitThrowStatement(ThrowStatementTree tree)
but this only gets called when there is a statement that throws the exception; how can we check that the exception is not thrown?
You need to keep some context in your visitor so you know which method you are currently visiting throw statements in.
Basically, when you are within a findOne method, you visit the code of the method: if it contains a correct throw statement, don't raise an issue; if it does not, raise one.
Something along the lines of (this is pseudo code and should of course be adapted but that will explain the concept):
LinkedList<MethodTree> stack;
int throwCount = 0;

void visitMethod(MethodTree methodTree) {
    stack.push(methodTree);
    throwCount = 0;
    super.visitMethod(methodTree);
    if (throwCount == 0) {
        // raise issue
    }
}

void visitThrowStatement(ThrowStatementTree tree) {
    if (isCorrectExceptionThrown(tree)) {
        throwCount++;
    }
}

How to prevent a Hadoop job from failing on a corrupted input file

I'm running a Hadoop job over many input files, but if one of the files is corrupted the whole job fails.
How can I make the job ignore the corrupted file? Ideally it would write some counter/error log entry, but not fail the whole job.
It depends on where your job is failing. If a line is corrupt and somewhere in your map method an Exception is thrown, then you should be able to wrap the body of your map method in a try/catch and just log the error:
protected void map(LongWritable key, Text value, Context context) {
    try {
        // parse value to an int
        int val = Integer.parseInt(value.toString());
        // do something with key and val..
    } catch (NumberFormatException nfe) {
        // log error and continue
    }
}
But if the error is thrown by your InputFormat's RecordReader, then you'll need to amend the mapper's run(..) method, whose default implementation is as follows:
public void run(Context context) {
    setup(context);
    while (context.nextKeyValue()) {
        map(context.getCurrentKey(), context.getCurrentValue(), context);
    }
    cleanup(context);
}
So you could amend this to try and catch the exception on the context.nextKeyValue() call, but you have to be careful about simply ignoring any errors thrown by the reader - an IOException, for example, may not be 'skippable' by just ignoring the error.
If you have written your own InputFormat / RecordReader, and you have a specific exception which denotes record failure but will allow you to skip over and continue parsing, then something like this will probably work:
public void run(Context context) {
    setup(context);
    while (true) {
        try {
            if (!context.nextKeyValue()) {
                break;
            } else {
                map(context.getCurrentKey(), context.getCurrentValue(), context);
            }
        } catch (SkippableRecordException sre) {
            // log error
        }
    }
    cleanup(context);
}
But just to reiterate: your RecordReader must be able to recover after the error, otherwise the above code could send you into an infinite loop.
For your specific case - if you just want to ignore a file upon the first failure then you can update the run method to something much simpler:
public void run(Context context) {
    setup(context);
    try {
        while (context.nextKeyValue()) {
            map(context.getCurrentKey(), context.getCurrentValue(), context);
        }
        cleanup(context);
    } catch (Exception e) {
        // log error
    }
}
Some final words of warning:
You need to make sure that it isn't your mapper code which is causing the exception to be thrown, otherwise you'll be ignoring files for the wrong reason
Files with a GZip extension that are not actually GZip compressed will fail in the initialization of the record reader, so the above will not catch this type of error (you'll need to write your own record reader implementation). The same is true for any file error thrown during record reader creation.
This is what Failure Traps are used for in cascading:
Whenever an operation fails and throws an exception, if there is an associated trap, the offending Tuple is saved to the resource specified by the trap Tap. This allows the job to continue processing without any data loss.
This will essentially let your job continue and let you check your corrupt files later
If you are somewhat familiar with Cascading, in your flow definition statement:
new FlowDef().addTrap( String branchName, Tap trap );
Failure Traps
There is also another possible way: you could use the mapred.max.map.failures.percent configuration option. Of course, this way of solving the problem could also hide other problems occurring during the map phase.
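A sketch of that configuration (the value 5 is an arbitrary example; newer MapReduce releases rename the property to mapreduce.map.failures.maxpercent):

```xml
<!-- Allow up to 5% of map tasks to fail without failing the whole job.
     Goes in mapred-site.xml, or can be set per job on the configuration. -->
<property>
  <name>mapred.max.map.failures.percent</name>
  <value>5</value>
</property>
```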

About Spring Transaction Manager

Currently I am using Spring's declarative transaction manager in my application. During DB operations, if a constraint is violated I want to check the error code against the database; that is, I want to run a select query after the exception has happened. So I catch DataIntegrityViolationException in my catch block and then try to execute one more error-code query, but that query is not executed. I am assuming that because I am using the transaction manager, the next query is not executed once an exception has happened. Is that right? I want to execute that error-code query before returning the result to the client. Is there any way to do this?
@Override
@Transactional
public LineOfBusinessResponse create(
        CreateLineOfBusiness createLineOfBusiness)
        throws GenericUpcException {
    logger.info("Start of createLineOfBusinessEntity()");
    LineOfBusinessEntity lineOfBusinessEntity =
            setLineOfBusinessEntityProperties(createLineOfBusiness);
    try {
        lineOfBusinessDao.create(lineOfBusinessEntity);
        return setUpcLineOfBusinessResponseProperties(lineOfBusinessEntity);
    }
    // Some db constraint failed
    catch (DataIntegrityViolationException dav) {
        String errorMessage =
                errorCodesBd.findErrorCodeByErrorMessage(dav.getMessage());
        throw new GenericUpcException(errorMessage);
    }
    // General exception handling
    catch (Exception exc) {
        logger.debug("<<<<Coming inside General >>>>");
        System.out.print("<<<<Coming inside General >>>>");
        throw new GenericUpcException(exc.getMessage());
    }
}
public String findErrorCodeByErrorMessage(String errorMessage) throws GenericUpcException {
    try {
        int first = errorMessage.indexOf("[", errorMessage.indexOf("constraint"));
        int last = errorMessage.indexOf("]", first);
        String errorCode = errorMessage.substring(first + 1, last);
        //return errorCodesDao.find(errorCode);
        return errorCode;
    }
    catch (Exception e) {
        throw new GenericUpcException(e.getMessage());
    }
}
Please help me.
I don't think the problem you're describing has anything to do with transaction management. If DataIntegrityViolationException happens within your try block, your code within catch should execute. Perhaps an exception other than DataIntegrityViolationException is happening, or your findErrorCodeByErrorMessage() is throwing another exception. In general, transaction logic is applied only once you return from your method call; until then you can do whatever you like using normal Java language constructs. I suggest you put a breakpoint in your error handler, or add some debug statements, to see what's actually happening.
