Performance impact of a long list of catch blocks in C++ (C++11)

I want to create a method that can tell me whether an exception is retryable or not. The exceptions come from a 3rd-party library. So I have something like this:
bool isRetryable(std::exception_ptr ex) {
    try {
        if (ex) {
            std::rethrow_exception(ex);
        }
        return true;
    } catch (const ExceptionA& ex) {
        return true;
    } catch (const ExceptionB& ex) {
        return true;
    } catch (const ExceptionC& ex) {
        return true;
    } catch (const ExceptionD& ex) {
        return false;
    } catch (const ExceptionE& ex) {
        return false;
    } catch (const ExceptionF& ex) {
        return false;
    }
}
The list of exceptions can grow to as many as 100, so my question is: what is the performance impact of a long list of catch blocks, and is there another way to accomplish the same goal?

It depends on the C++ runtime library, but in general a long list of catch blocks results in O(n) performance: the implementation performs a linear scan, matching the thrown exception's type against each of the types in the list of catch blocks in turn. This is, for example, how libstdc++ (the default C++ runtime on Linux) works.
I don't know whether any other runtime libraries optimize this, but I suspect they don't: a long list of catch blocks would usually be considered a "code smell", and the implementation is required to ensure that the first matching catch block receives the exception, which makes a linear scan the natural implementation.
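As a minimal illustration of that first-match rule (using hypothetical exception types Base and Derived, not the library's own): catch clauses are tried strictly in textual order, so a handler for a base class placed first receives a derived exception even when a more specific handler follows.
#include <iostream>
#include <stdexcept>

// Hypothetical exception types, standing in for the library's own.
struct Base : std::runtime_error { using std::runtime_error::runtime_error; };
struct Derived : Base { using Base::Base; };

int main() {
    try {
        throw Derived("boom");
    } catch (const Base&) {
        // Reached: the clauses are scanned in order and the first match wins,
        // even though a more specific handler for Derived appears below.
        // (Most compilers warn that the Derived handler is unreachable.)
        std::cout << "caught as Base\n";
    } catch (const Derived&) {
        std::cout << "caught as Derived\n";  // never reached
    }
}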
However, if all your exceptions derive from a polymorphic base (e.g. std::exception), there is a much more elegant solution: take the typeid of the caught exception, construct a std::type_index from it, and look it up in a container:
#include <exception>
#include <typeindex>
#include <unordered_set>

struct ExceptionA : std::exception {};
// etc.

// The set of exception types that are considered retryable.
static std::unordered_set<std::type_index> const retryables{
    typeid(ExceptionA),
    typeid(ExceptionB),
    typeid(ExceptionC)
};

bool isRetryable(std::exception_ptr ex) {
    try {
        if (ex) {
            std::rethrow_exception(ex);
        }
        return true;
    } catch (const std::exception& ex) {
        // typeid on a reference to a polymorphic base yields the dynamic
        // (most-derived) type of the thrown exception.
        return retryables.count(typeid(ex)) > 0;
    }
}
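For completeness, a small usage sketch building on the isRetryable above; callThirdPartyLibrary, doWorkWithRetry, and the retry count of 3 are assumptions for illustration, not from the question. The caller captures the in-flight exception with std::current_exception() and passes the resulting exception_ptr in.
#include <exception>

void callThirdPartyLibrary();  // hypothetical third-party operation that may throw

void doWorkWithRetry() {
    for (int attempt = 0; attempt < 3; ++attempt) {
        try {
            callThirdPartyLibrary();
            return;  // success, no retry needed
        } catch (...) {
            // std::current_exception() captures the active exception as an
            // exception_ptr, which is what isRetryable (defined above) expects.
            if (!isRetryable(std::current_exception())) {
                throw;  // not retryable: let it propagate
            }
            // otherwise loop around and retry
        }
    }
}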

Related

Expecting to catch an Aggregate Exception

I am trying to understand exception handling in TPL Dataflow so I can handle errors effectively. At the comment numbered 1. below I expect to catch an AggregateException, but everything just halts and doesn't recover. If I remove the throw (2.), the ActionBlock continues to process, but again the AggregateException handler doesn't trigger.
Could anyone help with an explanation to improve my intuition?
I would also welcome any documentation references on the topic.
async Task Main()
{
    var ab = new System.Threading.Tasks.Dataflow.ActionBlock<int>(async a =>
    {
        try
        {
            await Task.Delay(100);
            if (a == 7)
            {
                throw new Exception("Failed");
            }
            else
            {
                Console.WriteLine(a);
            }
        }
        catch (Exception ie)
        {
            Console.WriteLine(ie.Message);
            throw; //2. This causes the actionblock to halt, removing allows block to continue
        }
    });

    for (int i = 0; i < 10; i++)
    {
        await ab.SendAsync(i);
    }
    ab.Complete();

    try
    {
        await ab.Completion;
    }
    catch (AggregateException ae)
    {
        Console.WriteLine(ae.Flatten().Message);
        // 1. Expecting to catch here.
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
}
What you're seeing is the await unwrapping your AggregateException. When you await the completion task, the exception is unwrapped and thrown to the general Exception catch. But if you don't unwrap it, you'll see the exception caught as an AggregateException, like this:
try
{
    ab.Completion.Wait();
}
catch (AggregateException ae)
{
    Console.WriteLine("Aggregate Exception");
    // 1. Expecting to catch here.
}
catch (Exception e)
{
    Console.WriteLine("Exception Caught");
}
It's obviously better to properly await the completion, but this sample shows that an AggregateException is indeed caught when it isn't unwrapped.

Why use multiple catch blocks when we can handle everything with the parent exception?

Exception handling keeps the application flowing smoothly, but on the technical side it's hard to understand why we should use multiple catch blocks. If we can handle the exception with the general parent exception in a single catch block, why do we need multiple catch blocks?
Multiple catch blocks are useful when you want to handle different exceptions in different ways.
try
{
    int num1 = int.Parse(Console.ReadLine());
    int num2 = int.Parse(Console.ReadLine());
    int result = num1 / num2;
}
catch (DivideByZeroException ex)
{
    Console.Write("Cannot divide by zero");
}
catch (InvalidOperationException ex)
{
    Console.Write("Not a valid number");
}
catch (FormatException ex)
{
    Console.Write("Not a valid number");
}
When to use multiple catch blocks:
To print a message specific to an exception
To execute logic specific to an exception

Java 8 Path Stream and FileSystemException (Too many open files)

Geniuses, I'm practicing Java 8. So if I do something like this:
Files.walk(Paths.get(corpusPathStr))
     .filter(path -> path.toFile().isFile())
     .forEach(path -> {
         try {
             Files.lines(path)
                  .forEach(...);
         } catch (IOException e) {
             e.printStackTrace();
         }
     });
I got a FileSystemException error.
If I open a file inside forEach, might too many files be left open?
Or are there other reasons that cause FileSystemException (Too many open files)?
Thanks for your help in advance!
Use
try(Stream<Path> stream = Files.walk(Paths.get(corpusPathStr))) {
    stream.filter(path -> Files.isRegularFile(path) && Files.isReadable(path))
          .flatMap(path -> {
              try { return Files.lines(path); }
              catch (IOException e) { throw new UncheckedIOException(e); }
          })
          .forEach(...);
}
catch(UncheckedIOException ex) {
    throw ex.getCause();
}
The streams returned by Files.walk and Files.lines must be properly closed to release their resources, which you do either with a try(…) construct or by returning them from the mapping function of a flatMap operation (the pipeline closes each inner stream once its contents have been consumed).
Don't use nested forEach calls.
The UncheckedIOException might be thrown not only by our mapping function but also by the stream implementation itself. Translating it back to an IOException allows all of them to be treated uniformly.
Files::lines opens the file and reads it lazily as a Stream<String>. Since you're not closing any of the streams you open, you're getting this error.
So when you're done reading a file, you should close its handle. Since the returned Stream is AutoCloseable, you can and should use a try-with-resources block.
try (Stream<Path> walk = Files.walk(Paths.get(""))) {
    walk.filter(Files::isRegularFile).forEach(path -> {
        try (Stream<String> lines = Files.lines(path)) {
            lines.forEach(System.out::println);
        } catch (IOException e) {
            e.printStackTrace();
        }
    });
} catch (IOException e) {
    e.printStackTrace();
}

Debugger always breaks when invoking GetFileAsync in Windows 8 Store apps (Metro style)

This problem has confused me for several days. When the code runs to
await ApplicationData.Current.LocalFolder.GetFileAsync(udFileName);
the app always jumps to
UnhandledException += (sender, e) =>
{
    if (global::System.Diagnostics.Debugger.IsAttached)
        global::System.Diagnostics.Debugger.Break();
};
The Exception e is:
{Windows.UI.Xaml.UnhandledExceptionEventArgs}
Exception {"Object reference not set to an instance of an object."}
System.Exception {System.NullReferenceException}
Message "System.NullReferenceException.......
Following is the function I invoked:
public async void RestoreUserDefaults()
{
    string udFileName = "userdefaults.udef";
    bool bExist = true;
    {
        try
        {
            await ApplicationData.Current.LocalFolder.GetFileAsync(udFileName);
        }
        catch (FileNotFoundException)
        {
            bExist = false;
        }
    }
}
I have already added the file type to the package.appxmanifest.
Can anyone help me? Many thanks.

Unable to func-eval using Mdbg due to "code is optimized" exception

We are using MdbgCore.dll to evaluate a property off a parameter on a thread callstack.
To do this, we are performing a func-eval.
Unfortunately, all our attempts to perform the func-eval are failing with CORDBG_E_ILLEGAL_IN_OPTIMIZED_CODE, which seems to be due to the thread being used for the func-eval not being in a GC-safe point.
This is documented here: http://blogs.msdn.com/b/jmstall/archive/2005/11/15/funceval-rules.aspx.
We tried scanning all threads in the process to find a thread that is in a GC-safe point, but they all appear to have UserState marked with USER_UNSAFE_POINT.
Documentation on the subject is very scarce, and we are pulling our hair out trying to figure out whether there is a way to get a thread to a GC-safe point so we can do the func-eval. We would consider anything that allows us to deterministically break into the process with a thread we can do the func-eval on.
Disclaimer: we are trying to evaluate a method on a class that resides in an optimized assembly, so we're not sure whether this might also be causing an issue.
The sample code follows:
if (argument.TypeName.EndsWith(
        "WorkerRequest", StringComparison.OrdinalIgnoreCase)
    && !argument.IsNull)
{
    try
    {
        // Invoke the "GetUriPath()" function to obtain the URI
        string functionName = "System.Web.HttpWorkerRequest.GetUriPath";
        MDbgFunction func = debugger.Processes.Active.ResolveFunctionNameFromScope(
            functionName,
            thread.CorThread.AppDomain
        );
        if (null == func)
        {
            throw new InvalidOperationException(
                String.Format("Could not resolve {0}", functionName));
        }

        // Setup the eval
        CorEval eval = threadForFuncEvals.CorThread.CreateEval();

        // Setup the function parameters
        List<CorValue> values = new List<CorValue>();

        // Add the worker request "this" pointer
        values.Add(
            argument.CorValue
        );

        // resume the thread being used to do the func-eval
        threadForFuncEvals.CorThread.DebugState = CorDebugThreadState.THREAD_RUN;

        // Queue the function for execution
        // EXCEPTION THROWN BELOW
        eval.CallFunction(func.CorFunction, values.ToArray());

        // BUGBUG: Should we pause all other threads to prevent them from moving?

        // Continue the process to execute the function
        if (!proc.Go().WaitOne(settings.BreakTimeout))
        {
            throw new InvalidOperationException("Timeout while evaluating function");
        }

        // get the returned string
        var result = eval.Result;
        if (result != null)
        {
            MDbgValue mv = new MDbgValue(proc, result);
            string returnedValue = mv.GetStringValue(false);
            threadInfo.Url = returnedValue;
        }
    }
    catch (Exception e)
    {
        // BUGBUG: Ignoring exception
    }
    finally
    {
        // suspend the thread again
        if (threadForFuncEvals != null)
        {
            threadForFuncEvals.CorThread.DebugState =
                CorDebugThreadState.THREAD_SUSPEND;
        }
    }
}
Microsoft / Mdbg team, can you help?
Best,
Mike
Is this something to do with JIT optimization?
In my program I turn JIT optimization off (for technical reasons; I think you can only do this with CreateProcess() and not when using Attach()).
proc = m_Debugger.CreateProcess(ProcessName, ProcessArgs, DebugModeFlag.Default,
    DebugEngineUtils.GetAssemblyRuntimeVersion(ProcessName, DefaultNetVersion));

if (proc != null) proc.CorProcess.OnCreateProcess += new Microsoft.Samples.Debugging.CorDebug.CorProcessEventHandler(CorProcess_OnCreateProcess);
if (proc != null) proc.CorProcess.OnModuleLoad += new Microsoft.Samples.Debugging.CorDebug.CorModuleEventHandler(CorProcess_OnModuleLoad);

void CorProcess_OnModuleLoad(object sender, Microsoft.Samples.Debugging.CorDebug.CorModuleEventArgs e)
{
    e.Module.JITCompilerFlags = Microsoft.Samples.Debugging.CorDebug.CorDebugJITCompilerFlags.CORDEBUG_JIT_DISABLE_OPTIMIZATION;
}

void CorProcess_OnCreateProcess(object sender, Microsoft.Samples.Debugging.CorDebug.CorProcessEventArgs e)
{
    // try to disable optimization
    ((Microsoft.Samples.Debugging.CorDebug.CorProcess)sender).DesiredNGENCompilerFlags = Microsoft.Samples.Debugging.CorDebug.CorDebugJITCompilerFlags.CORDEBUG_JIT_DISABLE_OPTIMIZATION;
}
