What is the point of putting asserts into our code? What are the benefits of assertive programming?
private void WriteMessage(string message)
{
    Debug.Assert(message != null, "message is null");
    File.WriteAllText(FILE_PATH, message);
}
For example, we could check the message variable and throw an exception here instead. Why would I use an assert here? Or is this a poor example for seeing the benefits of asserts?
They also support the philosophy of fail fast, explained in this article by Jim Shore.
Where some people would write:
/*
* This can never happen
*/
It's much more practical to write:
assert(i != -1);
I like using asserts because they are easily turned off with a simple compile time constant, or made to do something else like prepare an error report. I don't usually leave assertions turned on (at least, not in the usual way) when releasing something.
Using them has saved me from making very stupid mistakes on other people's computers: those brave souls who enjoy testing my alpha code. Using them together with tools like Valgrind helps guarantee that I catch something awful before committing it.
An important distinction to consider is what sorts of errors you would like to catch with assertions. I often use assertions to catch programming errors (i.e., calling a method with a null parameter) and a different mechanism to handle validation errors (e.g., passing in a social security number of the wrong length). For the programming error caught with the assertion, I want to fail fast. For the validation error, I want to respond less drastically because it may be normal for there to be errors in the data (e.g. a user doing data entry of some sort). In those cases the proper handling might be to report the error back to the user and continue functioning.
If there is a specified pre-condition for the method to take a non-null message parameter, then you want the program to FAIL as soon as the pre-condition doesn't hold and the source of the bug must then be fixed.
I believe assertions are more important when developing safety-critical software. And certainly you would rather use assertions when the software has been formally specified.
For an excellent discussion of assertions (and many other topics related to code construction), check out Code Complete by Steve McConnell. He devotes an entire chapter to the effective use of assertions.
I use them to verify that I have been supplied a valid dependent class. For example, with constructor dependency injection, you usually accept some sort of external class that you depend on to provide an action or service.
So you can write assert(classRef != null, "classRef cannot be null"); instead of waiting until you pass a message to classRef and getting some other exception, such as an access violation or something equally ambiguous that might not be immediately obvious from looking at the code.
It is very useful for testing our assumptions. Asserts constantly make sure that the invariants hold true. In short, they are used for the following purposes:
It allows you to implement a fail-fast system.
It reduces error propagation caused by side effects.
It prevents the system (as a kind of sanity check) from entering an inconsistent state caused by user data or by subsequent code changes.
Sometimes, I favor the use of assert_return() when we do not want to use try/catch/throw.
private void WriteMessage(string message)
{
    assert_return(message != null, "message is null"); // return when false
    File.WriteAllText(FILE_PATH, message);
}
I suggest assert_return() so that it halts the application by reporting an error in a test build. Then, in the production system, it should log an error and return from the function, saying it cannot do that.
The code is as follows:
class A;
shared_ptr<A> aPtr(new A());
//do something with aPtr.
If new throws a bad_alloc exception, what happens to the smart pointer aPtr? Do I need to do some check on aPtr, and if so, how? I also know that one of Google's C++ style rules is to never use exceptions, so how do they deal with exceptions like bad_alloc? Thank you for any replies.
If you get a bad_alloc you're pretty much screwed anyway. I'm not sure how you expect to handle the allocation failing here. Not using exceptions does not really apply in this case.
If you really want to opt out of it, you can add std::nothrow to that statement and it'll return nullptr instead of throwing a bad_alloc:
shared_ptr<A> aPtr(new (std::nothrow) A());
For more information, see this question about the design considerations involved. Additionally, see this question explaining why using std::nothrow is a bad idea.
Problem:
I have some code that is failing because an object has not been initialized. The bug is easy to fix once detected. However, what surprised me is that my elaborate exception handling didn't catch this exception. That meant the exception wasn't logged or handled, and code following the try...catch block was never executed. The try...catch block was outside of the transaction, so there was no issue there.
In this particular case, the exception was inside a batch (RunBaseBatch) job. The job handled several unrelated processing tasks. Once the exception conditions were met, the job terminated, so the other unrelated processing tasks were never called.
Does anyone know if it is possible to catch an "object not initialized" exception in Dynamics AX 2009? I read one post that said it may not be possible to catch certain exceptions in AX, however, I hope that is not the case (reference: https://community.dynamics.com/product/ax/f/33/p/16352/23700.aspx#23700).
Code example:
Here is some simplistic code that recreates the issue:
server static void main(Args args)
{
    Array arr;
    ;
    info("debug: before try...catch");
    try
    {
        // ttsbegin; // enable/disable to test with transactions
        // arr = new Array(Types::String); // Enabling this line will prevent the exception
        arr.exists(3);
        // ttscommit; // enable/disable to test with transactions
    }
    catch (Exception::Internal) // This exception handler was the Magic Sauce!!
    {
        info("debug: catch (Exception::Internal)");
    }
    catch (Exception::Error)
    {
        info("debug: catch (Exception::Error)");
    }
    catch
    {
        info("debug: catch");
    }
    info("debug: after try...catch");
}
UPDATE 2013-01-29
I am waiting to accept an answer until this question has been viewed more. Thank you for the answers so far.
I know the example I gave was simplistic. This type of bug is easily fixable when it is known. And defensive programming is always a good idea.
However, in the real world, the code where the bug occurred was very complex. The error occurred several levels deep in an overloaded method of a subclass. It occurred in a specific scenario, when an overloaded method corrupted the protected value of a member variable from the super class. That is where the bug occurred in the code, however, it didn't manifest itself until the super class tried to use the member variable again. The bug was summarily fixed when it was detected and tracked down.
Defensively, yes you could check every protected member variable, every time you use it, but that does start to impact performance, code readability, practicality, etc., which is why languages offer exception handling.
The question here, is how can these type of bugs be caught to make code more robust and bullet-proof? In most development environments (C, C++, C#, or Java for example), a try...catch at a top level could be used to catch, log, and clean up ALL unexpected exceptions. So the code would be able to continue processing with the other unrelated tasks. AX is continuing at some level, because the whole system doesn't come to a grinding halt when this bug occurs. However, the code after the catch in this job is not executing because of what appears to be a deficiency in AX/X++.
I am looking for an innovative solution or work-around, if it exists, to catch the "object not initialized" exception (really ALL exceptions) and to continue processing.
You cannot "catch" it in the traditional sense, but you can avoid it happening. Simply test if the object exists before running anything from it:
if (object)
{
    // Exists; execute statements with object here
}
else
{
    // Doesn't exist
}
This works because object evaluates to null if it is not initialized, and null is treated as false in a boolean context. If the object is initialized, it will have some value other than null, which is treated as true.
Hope that helps!
You can, but you shouldn't. Behavior like this is almost certainly a sign of bad design in your code that will inevitably lead to more problems in the future.
You need to make your code defensive against this case by making sure the object is instantiated before using it. Otherwise, you're using catch code for expected behavior, which makes no sense.
EDIT 2013/02/18
In complex scenarios like the one you're describing, it's usually very hard to get fully controlled handling. In AX, the try...catch statement is quite simplified, and in a very large range of situations it is not really needed (unlike Java, C#, etc., where it is always recommended).
This simplification is nice in almost all AX development situations, as you don't need to waste time on exception handling. Just let exceptions be raised, and the InfoLog will handle them in a simple and reliable way.
The big problem comes when you really need this control and there is no real way to force it. I'm not sure whether this is a standard issue or whether the product team expected it to work that way, but these cases always cause trouble in AX. When you need to catch some specific issue, you have to be very creative and defensive to prevent the exception, as catching it would require even more creativity...
Hope this helps :)
To elaborate a little: as stated in the post you linked to, you cannot catch an "object not initialized" error. However, you can "fix" the code by adding a simple check before attempting to call functions on a variable that you do not control (for example, if you are requesting an Array type as an argument to a function and you expect the function to be called from outside the class).
try
{
    if (arr)
        arr.exists(3);
}
The if(arr) statement is enough to skip the processing if the object has not yet been instantiated, effectively bypassing the error. However, this will obviously not throw the error further up the chain. If you really wanted, you could make it throw a different error that can be caught, but obviously that is less than ideal.
In this case, since the RunBaseBatch class may not be something you want to modify it would probably be better to make sure the object that is causing the issue is correctly defined before calling the problem method, and finding these errors in testing.
I've seen it written elsewhere on SO that while the Enterprise Library Validation Application Block is geared towards validating user inputs, Code Contracts are meant to prevent programmer errors. Would you support this opinion? Why?
Yes.
Code contracts are meant to keep a strict programming interface, which only a developer can get right or wrong; a user shouldn't really be able to mess this up.
Validation is meant to validate the data; e.g. verifying data isn't null, or matches a regex.
Code contracts throw exceptions when they are violated. Invalid user input is not an exceptional condition so validation functions should generally not throw exceptions. That's why methods like TryParse were added to the Framework (the original Framework didn't have them, and it made validation cumbersome because of all the possible exceptions).
Code contracts are used to assert things that will always be true, and if they're not true, then there's a bug in the code. That means it can only apply to conditions that are controlled by code. So, you can't use them to state "the user will never supply an empty string", because that's outside of the control of the code. The static verifier will never be able to prove that statement - how can it know what the user will do?
What you can do is make statements like "Given a user input, the method will either return a non-empty string or throw an exception".
I create class libraries, some which are used by others around the world, and now that I'm starting to use Visual Studio 2010 I'm wondering how good idea it is for me to switch to using code contracts, instead of regular old-style if-statements.
ie. instead of this:
if (fileName == null)
throw new ArgumentNullException("fileName");
use this:
Contract.Requires(fileName != null);
The reason I'm asking is that I know that the static checker is not available to me, so I'm a bit nervous about some assumptions that I make, that the compiler cannot verify. This might lead to the class library not compiling for someone that downloads it, when they have the static checker. This, coupled with the fact that I cannot even reproduce the problem, would make it tiresome to fix, and I would gather that it doesn't speak volumes to the quality of my class library if it seemingly doesn't even compile out of the box.
So I have a few questions:
Is the static checker on by default if you have access to it? Or is there a setting I need to switch on in the class library (and since I don't have the static checker, I won't)
Are my fears unwarranted? Is the above scenario a real problem?
Any advice would be welcome.
Edit: Let me clarify what I mean.
Let's say I have the following method in a class:
public void LogToFile(string fileName, string message)
{
Contracts.Requires(fileName != null);
// log to the file here
}
and then I have this code:
public void Log(string message)
{
var targetProvider = IoC.Resolve<IFileLogTargetProvider>();
var fileName = targetProvider.GetTargetFileName();
LogToFile(fileName, message);
}
Now, here, IoC kicks in, resolves some "random" class, that provides me with a filename. Let's say that for this library, there is no possible way that I can get back a class that won't give me a non-null filename, however, due to the nature of the IoC call, the static analysis is unable to verify this, and thus might assume that a possible value could be null.
Hence, the static analysis might conclude that there is a risk of the LogToFile method being called with a null argument, and thus fail to build.
I understand that I can add assumptions to the code, saying that the compiler should take it as given that the fileName I get back from that method will never be null, but if I don't have the static analyzer (VS2010 Professional), the above code would compile for me, and thus I might leave this as a sleeping bug for someone with Ultimate to find. In other words, there would be no compile-time warning that there might be a problem here, so I might release the library as-is.
So is this a real scenario and problem?
When both your LogToFile and Log methods are part of your library, it is possible that your Log method will not compile, once you turn on the static checker. This of course will also happen when you supply code to others that compile your code using the static checker. However, as far as I know, your client's static checker will not validate the internals of the assembly you ship. It will statically check their own code against the public API of your assembly. So as long as you just ship the DLL, you'd be okay.
Of course there is a chance of shipping a library that has a very annoying API for users who actually have the static checker enabled, so I think it is advisable to ship your library with the contract definitions only if you have tested the usability of the API both with and without the static checker.
Please be warned about changing existing if (cond) throw ex calls to Contract.Requires(cond) calls for public API members that you have already shipped in a previous release. Note that the Requires method throws a different exception (a RequiresViolationException, if I recall correctly) than what you'd normally throw (an ArgumentException). In that situation, use the generic Contract.Requires&lt;TException&gt; overload. This way your API interface stays unchanged.
First, the static checker is really (as I understand it) only available in the ultimate/academic editions - so unless everyone in your organization uses it they may not be warned if they are potentially violating an invariant.
Second, while the static analysis is impressive it cannot always find all paths that may lead to violation of the invariant. However, the good news here is that the Requires contract is retained at runtime - it is processed in an IL-transformation step - so the check exists at both compile time and runtime. In this way it is equivalent (but superior) to a regular if() check.
You can read more about the runtime rewriting that code contract compilation performs here, you can also read the detailed manual here.
EDIT: Based on what I can glean from the manual, I suspect the situation you describe is indeed possible. However, I thought that these would be warnings rather than compilation errors, and you can suppress them using System.Diagnostics.CodeAnalysis.SuppressMessage(). Consumers of your code who have the static verifier can also mark specific cases to be ignored, but that could certainly be inconvenient if there are a lot of them. I will try to find some time later today to put together a definitive test of your scenario (I don't have access to the static verifier at the moment).
There's an excellent blog here that is almost exclusively dedicated to code contracts which (if you haven't yet seen) may have some content that interests you.
No; the static analyzer will never prevent compilation from succeeding (unless it crashes!).
The static analyzer will warn you about unproven pre-/post-conditions, but doesn't stop compilation.
In COM when I have a well-known interface that I can't change:
interface IWellKnownInterface {
HRESULT DoStuff( IUnknown* );
};
and my implementation of IWellKnownInterface::DoStuff() can only work when the passed object implements some specific interface how do I handle this situation?
HRESULT CWellKnownInterfaceImpl::DoStuff( IUnknown* param )
{
    // this will QI for the specific interface
    ATL::CComQIPtr<ISpecificInterface> object( param );
    if( object == 0 )
    {
        // clearly the specific interface is not supported
        return E_INVALIDARG;
    }
    // proceed with implementation
}
In case the specific interface is not supported which error code should I return? Is returning E_INVALIDARG appropriate?
E_INVALIDARG is a fine choice, but the most important thing is to ensure that the precise conditions for each return code that you use are well documented.
Additionally, you could consider implementing ISupportErrorInfo and then returning rich error information via CreateErrorInfo and SetErrorInfo. This is especially useful in cases where you think callers may benefit from having a custom error message generated at the point of failure, with all of the relevant context contained therein. In your case, this might be to identify specifically which argument is invalid and which interface was unimplemented for it to be so. Even though such a message is unlikely to be of value to an end user, it could be invaluable to a developer if it shows up in a log file or the event viewer.
Yes. At least, that is what I've learned to expect a Microsoft API call to return in such a case.
I beg to differ with #nobugz. E_NOINTERFACE has the very specific meaning that your object does not support a certain interface that your client has just requested.
In general, you should only return E_NOINTERFACE from your own IUnknown::QueryInterface(). Returning E_NOINTERFACE from DoStuff() would surprise end users and tools alike. If I saw E_NOINTERFACE coming from anywhere other than QueryInterface(), my immediate thought would be "abstraction leak!".
At first sight, E_INVALIDARG looks like the best options available -- after all, you were passed an argument that doesn't work for you. But in my view there is a subtlety here, so I'm not sure if I can make a universal rule out of that.
In an ideal world, the error codes returned by a COM method are more about the Interface than about the object. Concretely, IWellKnownInterface::DoStuff() doesn't define (necessarily) that the object being passed in must implement ISpecificInterface, or it would have a different argument type. So, technically you're cheating potential users of your object by not quite fully implementing the semantics of your interface. Of course, in the real world, it's very hard to create a meaningful open system if you make that into an absolute rule and interfaces are that rigid.
So: if you can provide reduced functionality that still meets the interface semantics even if your argument doesn't implement ISpecificInterface, you should consider doing that.
Assuming that you must return an error: if this is a well-known, well-documented (well-designed?) interface, I would probably glance at the interface documentation first to see if it gives you any guidance. Lacking that, I would probably go ahead and use E_INVALIDARG.
#Phil Booth has some nice recommendations about how to provide more information to your user about the nature of the error.
Obviously, if you had the chance, you would want to change the interface to take an ISpecificInterface* parameter instead.
Here's another possibility you might or might not like better.
In Visual Basic 6, there are many objects with methods that take a VARIANT argument and look for several alternative interfaces they can use (e.g. the DataSource member of data-bound controls). That looks a lot like your situation. Those objects return DISP_E_TYPEMISMATCH, which is a well understood return value by most people.
On the other hand, I totally understand if you don't want to return a DISP_xxxx error code from non-IDispatch-based interfaces. There is a strong argument for saying that DISP_xxxx error codes are intended for VARIANT parameters only, and that a method which takes an IUnknown* should not be returning them.
I'm on the fence on this one.