How stringent should I be with Code Analysis compliance in Visual Studio?

After playing with Code Analysis on a small project I am working on, I am wondering just how strict I should be when resolving warnings to make the code compliant.
I know I can suppress warnings, but to me, suppressing a warning is, to some extent, a cop-out (no pun intended..."FxCop").
Example warning:
Do not raise exceptions in unexpected locations:
'CustomObject.Equals(object)' creates an exception of type 'ArgumentException'. Exceptions should not be raised in this type of method. If this exception instance might be raised, change this method's logic so it no longer raises an exception.
My reason for throwing it: CustomObject.Equals(object) might be asked to compare a CustomObject to a FooBarObject...which aren't even the same type. So in this instance, should I throw an exception, or just return false?
In general, should I be really anal (for want of a better word) in making my code absolutely compliant, or will I come across situations where warning suppression will become necessary?

FxCop warnings are just warnings, they don't flag invalid code. That's the job of the compiler. The rules FxCop uses were collected from years of experience writing .NET code. They represent "best practices" and in general are there to remind you of unintended consequences and the more obscure parts of .NET programming, like CAS.
Always refer back to the documentation to see why the rule exists. For CA1065 you'll see:
An Equals method should return true or false instead of throwing an exception. For example, if Equals is passed two mismatched types it should just return false instead of throwing an ArgumentException.
That exactly matches your usage, so you'll have no trouble adopting the advice. Unfortunately, the documentation is a bit short on the exact reason the rule was created; the explanation really doesn't go beyond the general "don't throw in unexpected places" guidance. The unintended consequence here is that another programmer using your class won't realize that a try/catch is needed if he doesn't want the code to fail. Feel free to put a Debug.Assert() in your Equals method instead. There are plenty of cases where you'll want to ignore the advice; CA2000, for example, is particularly prone to false warnings. Apply the [SuppressMessage] attribute if necessary so you don't have to look at the warning again.

Related

Guard clause in Ruby not always that convenient

I have mixed feelings about the use of guard clauses in Ruby.
The style suggested by RuboCop is the trailing conditional: "do_something if condition_fails".
Often I want to generate useful error messages with the guard clause, which results in long lines if I want to keep using that style, and the "if" will often be pushed off the screen (I don't like line wrapping).
The problem is that as a developer, I don't really care about the error message; I care about the condition itself (which is sometimes more explicit than the error message).
Example of a guard clause that hides the condition from the dev:
def critical_function(params)
  fail Exception.new("Useful message for the user, but not often useful message for the dev, and as you can see this line is veeeeeeerrrrrry long and just annoying because the dev is rather looking for the condition itself which is often more interesting than an error message") if not_enough_params
end
Without the guard-clause style, you see the condition right away:
def critical_function(params)
  if not_enough_params
    fail Exception.new("Useful message for the user, but not often useful message for the dev, and as you can see this line is veeeeeeerrrrrry long and just annoying because the dev is rather looking for the condition itself which is often more interesting than an error message")
  end
end
So, um, yeah, the story is that I just installed RuboCop and it started highlighting a lot of things, including those "if condition ... fail ... end" blocks, suggesting I transform them into guard clauses. I wasn't sure how to react to that.
Is there a setting to configure, or maybe a workaround, to preserve developer visibility while getting rid of complaints about not writing a guard clause? What are your suggestions for balancing visibility against guideline compliance?
This is very subjective. First of all, in both cases you are breaking another de facto convention that says a line of code should not exceed 80 characters.
All these conventions are inherited from the days when monitors were fairly small, so long lines were uncomfortable and could hide important statements, as you noticed.
Personally, I use the one-line style only when performing pre-validation, in general at the beginning of the method or when I have several short conditions in sequence.
Regardless of the style you use, you may also want to consider extracting the message into a variable so that the final code is more readable.
def critical_function(params)
  message = "Useful message for the user, but not often useful message for the dev, and as you can see this line is veeeeeeerrrrrry long and just annoying because the dev is rather looking for the condition itself which is often more interesting than an error message"
  fail Exception.new(message) if not_enough_params
end
def critical_function(params)
  message = "Useful message for the user, but not often useful message for the dev, and as you can see this line is veeeeeeerrrrrry long and just annoying because the dev is rather looking for the condition itself which is often more interesting than an error message"
  if not_enough_params
    fail Exception.new(message)
  end
end
Extracting the message also allows you to store it in a constant and/or freeze it, which in turn lets the interpreter apply some optimizations.
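For instance, a minimal sketch of that suggestion (the constant name is invented, and not_enough_params is the placeholder check from the question):
NOT_ENOUGH_PARAMS_MESSAGE =
  "Useful message for the user, explaining exactly which params are missing".freeze

def critical_function(params)
  # The long text lives in a frozen constant, so the condition stays
  # visible on its own short line.
  fail Exception.new(NOT_ENOUGH_PARAMS_MESSAGE) if not_enough_params
end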
Furthermore, you can also consider wrapping the string over several lines.
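Ruby concatenates adjacent string literals, so one way to wrap it is to split the message over several short lines with backslash continuations. A small sketch, with a shortened placeholder message:
message = "Useful message for the user, but not often useful for the dev, " \
          "split over several short lines with backslash continuations " \
          "while still producing exactly the same single string."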
Finally, speaking of conventions, I'd be more worried about following the naming conventions for methods than about forcing one particular style for the if-statement.
Ruby method names are snake_case, not camelCase:
critical_function
and not
criticalFunction
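As for the setting asked about in the question: RuboCop also accepts inline "magic comments" that silence a specific cop for a region, in addition to project-wide settings in .rubocop.yml. A hedged sketch (Style/GuardClause is the cop name in the RuboCop versions I'm aware of; message and not_enough_params are the placeholders from the example above):
def critical_function(params)
  # rubocop:disable Style/GuardClause
  if not_enough_params
    fail Exception.new(message)
  end
  # rubocop:enable Style/GuardClause
end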

Go language warnings and errors

It seems that the Go language does not have warnings at all. I've observed a few instances:
1. "declared and not used" (if a variable is declared and not used anywhere, the compiler reports an error and does not compile the program)
2. "imported and not used" (similarly, if a package is imported and not used anywhere, the compiler reports an error and does not compile the program)
Can somebody help, or share any pointers?
Go is trying to prevent this situation:
The boy is smoking and leaving smoke rings into the air. The girl gets
irritated with the smoke and says to her lover: "Can't you see the
warning written on the cigarettes packet, smoking is injurious to
health!"
The boy replies back: "Darling, I am a programmer. We don't worry
about warnings, we only worry about errors."
Basically, Go just won't let you get away with unused variables, unused imports, and other things that would normally be warnings in other languages. It helps put you in good habits.
The Go Programming Language
FAQ
Can I stop these complaints about my unused variable/import?
The presence of an unused variable may indicate a bug, while unused
imports just slow down compilation. Accumulate enough unused imports
in your code tree and things can get very slow. For these reasons, Go
allows neither.
When developing code, it's common to create these situations
temporarily and it can be annoying to have to edit them out before the
program will compile.
Some have asked for a compiler option to turn those checks off or at
least reduce them to warnings. Such an option has not been added,
though, because compiler options should not affect the semantics of
the language and because the Go compiler does not report warnings,
only errors that prevent compilation.
There are two reasons for having no warnings. First, if it's worth
complaining about, it's worth fixing in the code. (And if it's not
worth fixing, it's not worth mentioning.) Second, having the compiler
generate warnings encourages the implementation to warn about weak
cases that can make compilation noisy, masking real errors that should
be fixed.
It's easy to address the situation, though. Use the blank identifier
to let unused things persist while you're developing.
import "unused"
// This declaration marks the import as used by referencing an
// item from the package.
var _ = unused.Item // TODO: Delete before committing!
func main() {
    debugData := debug.Profile()
    _ = debugData // Used only during debugging.
    ....
}
One solution for unused imports is to use goimports, which is a fork of gofmt. It automatically adds missing imports and removes unused ones (in addition to formatting your code).
http://godoc.org/code.google.com/p/go.tools/cmd/goimports
I've configured my editor to automatically run goimports whenever I save my code. I can't imagine writing go code without it now.
From what I just read (Wikipedia):
"Go's syntax includes changes from C aimed at keeping code concise and readable."
The word "concise" is very important to the compiler. I have found out
that the syntax enforced by the compiler is no longer "\n" or whitespace
agnostic. And there are no "warning" type errors.
There are good things about Go. There are some not so good things. The
attitude of no warnings is a bit extreme, especially when developing or testing
a new package. It seems that partial development is not acceptable. Warnings are not acceptable. It is either the production version or the highway. This is a very dualistic point of view. I wonder if evolution would have resulted in "life", if that had been the constraints on nature.
I can only hope that things will change. Death seems to be very beneficial at times.
I have tried Go, and I am disappointed. At my age I don't think I will return.

Avoiding, in general, "undefined method 'some_method' for nil:NilClass" in Ruby

Ruby's duck-typing is great, but this is the one way that it bites me in the ass. I'll have some long running text-processing script or something running, and after several hours, some unexpected set of circumstances ends up causing the script to exit with a NoMethodError due to a variable becoming nil.
Now, once it happens, it's usually an easy fix, but it would be nicer if I could predict these better, or at least handle these types of errors more gracefully. Sorry for the vagueness of the question, but this type of error just happens too often to me and I wonder if there's a good way to avoid it.
Is there some best practice related to these kinds of "type errors" for Ruby?
Look up Design by Contract. It's useful in many programming paradigms, but it's particularly useful when you don't have a compiler to help you catch these sorts of errors by forbidding particular sorts of values for a parameter.
In essence, DbC allows you to state an assumption about a parameter, and then (everywhere except the one place where the contract is enforced) skip the mundane checks that guarantee the assumption holds.
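A minimal Ruby sketch of that idea, with an invented method and attribute name, enforcing the contract once at the public boundary:
def process_record(record)
  # Precondition: enforce the contract once, at the public boundary,
  # so the rest of the method can assume a non-nil record.
  raise ArgumentError, "record must not be nil" if record.nil?

  record.name.upcase
end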
What about Object.nil?
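In other words, check for nil explicitly at the point where it is plausible, rather than letting the NoMethodError surface hours later. A tiny sketch with hypothetical names (source, log, process):
line = source.fetch_next # may return nil at the end of the input
if line.nil?
  log.warn("ran out of input, skipping this record") # handle nil deliberately, here
else
  process(line)
end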

How to deal with not knowing what exceptions can be raised by a library method in Ruby?

This is somewhat of a broad question, but it is one that I continue to come across when programming in Ruby. I am from a largely C and Java background, where when I use a library function or method, I look at the documentation and see what it returns on error (usually in C) or which exceptions it can throw (in Java).
In Ruby, the situation seems completely different. Just now I need to parse some JSON I receive from a server:
data = JSON.parse(response)
Naturally, the first thing I think after writing this code is, what if the input is bad? Is parse going to return nil on error, or raise some exception, and if so, which ones?
I check the documentation (http://flori.github.com/json/doc/JSON.html#M000022) and see, simply:
"Parse the JSON string source into a Ruby data structure and return it."
This is just an example of a pattern I have run into repeatedly in Ruby. Originally, I figured it was some shortcoming of the documentation of whatever library I was working with, but now I am starting to feel this is standard practice and I am in a somewhat different mindset than Ruby programmers. Is there some convention I am unaware of?
How do developers deal with this?
(And yes I did look at the code of the library method, and can get some idea of what exceptions are raised but I cannot be 100% sure and if it is not documented I feel uncomfortable relying on it.)
EDIT: After looking at the first two answers, let me continue the JSON parsing example from above.
I suspect I should not do:
begin
  data = JSON.parse(response)
  raise "parse error" if data.nil?
rescue Exception => e
  # blahblah
end
because I can look at the code/tests and see it seems to raise a ParserError on error (returning nil seems to not be standard practice in Ruby). Would I be correct in saying the recommended practice is to do:
begin
  data = JSON.parse(response)
rescue JSON::ParserError => e
  # blahblah
end
...based upon what I learned about ParserError by looking through the code and tests?
(I also edited the example to clarify it is a response from a server that I am parsing.)
(And yes I did look at the code of the library method, and can get some idea of what exceptions are raised but I cannot be 100% sure and if it is not documented I feel uncomfortable relying on it.)
I suggest having a look at the tests, as they will show some of the "likely" scenarios and what might be raised. Don't forget that good tests are documentation, too.
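For example, a library's test suite will often contain an assertion along these lines; this is an RSpec-style sketch rather than the json gem's actual tests (which historically use test-unit):
require "json"

RSpec.describe JSON do
  it "raises JSON::ParserError on malformed input" do
    expect { JSON.parse("{oops") }.to raise_error(JSON::ParserError)
  end
end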
Should you want to discard invalid JSON data:
begin
  res = JSON.parse(string)
rescue JSON::ParserError => e
  # string was not valid
end
I guess that if no documentation is provided, you have to rely on something like this:
begin
  # code goes here
rescue
  # fail reason is in $!
end
You can never be sure what exceptions can be raised, unless the library code catches all and then wraps them. Your best bet is to assume good input from your code by sanitising what goes in and then use your own higher level exception handling to catch bad input from your inputs.
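One way to read that advice, sketched in Ruby: wrap the untrusted boundary in your own error type so callers only need to know about one exception. ResponseError and parse_response are invented names, and the rescued classes are those JSON.parse is commonly observed to raise:
require "json"

class ResponseError < StandardError; end

def parse_response(response)
  JSON.parse(response)
rescue JSON::ParserError, TypeError => e
  # Re-raise whatever the parser threw as our own, documented error class.
  raise ResponseError, "bad response from server: #{e.message}"
end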
Your question boils down to basically two questions: is there a convention or standard for finding possible exceptions, and also where is the documentation related to such a convention?
For the first question, the closest thing to a convention or standard is the location and existence of an exceptions.rb file. For libraries or gems where the source code is publicly available, you can typically find the types of exceptions in this file. (Ref here).
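As a sketch, such a file usually just declares the library's error hierarchy in one place; the module and class names below are illustrative, not taken from any particular gem:
# lib/mylib/exceptions.rb
module MyLib
  # A single base class lets callers `rescue MyLib::Error` to catch everything
  # the library raises deliberately.
  class Error < StandardError; end

  class ConnectionError < Error; end
  class ParseError < Error; end
end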
If source code is not available or easily accessed, the documentation is your next best source of information. This brings us to your second question. Unfortunately, documentation does not have a consistent format concerning potential exceptions, even in the standard library. For example, the Net::HTTP documentation doesn't make it obvious what exceptions are available, although if you dig through it you will find that all exceptions inherit from Net::HTTPExceptions.
As another example (again from the standard library documentation), JSON documentation shows an Exception class (which is indeed in an exceptions.rb file, albeit in the source at json/lib/json/add/exceptions.rb). The point here is that it's inconsistent; the documentation for the Exception class is not listed in a way similar to that of Net::HTTPException.
Moreover, in documentation for most methods there is no indication of the exception that may be raised. Ref for example parse, one of the most-used methods of the JSON module already mentioned: exceptions aren't mentioned at all.
The lack of standard and consistency is also found in core modules. Documentation for Math doesn't contain any reference to an exceptions.rb file. Same with File, and its parent IO.
And so on, and so on.
A web search will turn up lots of information on how to rescue exceptions, and even multiple types of exceptions (ref here, here, here, here, etc). However, none of these indicate an answer to your questions of what is the standard for finding exceptions that can be raised, and where is the documentation for this.
As a final note, it has been suggested here that if all else fails you can rescue StandardError. This is a less-than-ideal practice in many cases (ref this SO answer), although I'm assuming you already understand this based on your familiarity with Java and the way you've asked this question. And of course coming from a Java world, you'll need to remember to rescue StandardError and not Exception.
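To make that last point concrete, a minimal sketch of the difference, reusing the JSON.parse example from the question (response is the server response from above):
require "json"

begin
  data = JSON.parse(response)
rescue StandardError => e # also what a bare `rescue` catches
  # This catches ordinary library errors such as JSON::ParserError...
  warn "could not parse response: #{e.message}"
end

# ...whereas `rescue Exception` would additionally swallow SignalException,
# SystemExit and NoMemoryError, which you almost never want to intercept.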

Should I make sure arguments aren't null before using them in a function?

The title may not really explain what I'm trying to get at; I couldn't think of a better way to describe what I mean.
I was wondering if it is good practice to check the arguments that a function accepts for null or empty values before using them. I have this function, which just wraps some hash creation, like so.
Public Shared Function GenerateHash(ByVal FilePath As IO.FileInfo) As String
    If (FilePath Is Nothing) Then
        Throw New ArgumentNullException("FilePath")
    End If
    Dim _sha As New Security.Cryptography.MD5CryptoServiceProvider
    Dim _Hash = Convert.ToBase64String(_sha.ComputeHash(New IO.FileStream(FilePath.FullName, IO.FileMode.Open, IO.FileAccess.Read)))
    Return _Hash
End Function
As you can see, it just takes an IO.FileInfo as an argument, and at the start of the function I check to make sure that it is not Nothing.
I'm wondering: is this good practice, or should I just let it reach the actual hasher and throw the exception there because the argument is null?
Thanks.
In general, I'd suggest it's good practice to validate all of the arguments to public functions/methods before using them, and fail early rather than after executing half of the function. In this case, you're right to throw the exception.
Depending on what your method is doing, failing early could be important. If your method was altering instance data on your class, you don't want it to alter half of the data, then encounter the null and throw an exception, as your object's data might then be in an intermediate and possibly invalid state.
If you're using an OO language then I'd suggest it's essential to validate the arguments to public methods, but less important with private and protected methods. My rationale here is that you don't know what the inputs to a public method will be - any other code could create an instance of your class and call its public methods, passing in unexpected/invalid data. Private methods, however, are called from inside the class, and the class should already have validated any data passing around internally.
One of my favourite techniques in C++ was to DEBUG_ASSERT on NULL pointers. This was drilled into me by senior programmers (along with const correctness) and is one of the things I was most strict on during code reviews. We never dereferenced a pointer without first asserting it wasn't null.
A debug assert is only active for debug targets (it gets stripped in release) so you don't have the extra overhead in production to test for thousands of if's. Generally it would either throw an exception or trigger a hardware breakpoint. We even had systems that would throw up a debug console with the file/line info and an option to ignore the assert (once or indefinitely for the session). That was such a great debug and QA tool (we'd get screenshots with the assert on the testers screen and information on whether the program continued if ignored).
I suggest asserting all invariants in your code including unexpected nulls. If performance of the if's becomes a concern find a way to conditionally compile and keep them active in debug targets. Like source control, this is a technique that has saved my ass more often than it has caused me grief (the most important litmus test of any development technique).
Yes, it's good practice to validate all arguments at the beginning of a method and throw appropriate exceptions like ArgumentException, ArgumentNullException, or ArgumentOutOfRangeException.
If the method is private such that only you the programmer could pass invalid arguments, then you may choose to assert each argument is valid (Debug.Assert) instead of throw.
If NULL is an unacceptable input, throw an exception yourself, like you did in your sample, so that the message is helpful.
Another method of handling NULL inputs is just to respond with a NULL in turn. It depends on the type of function -- in the example above I would keep the exception.
If it's for an externally facing API then I would say you want to check every parameter, as the input cannot be trusted.
However, if it is only going to be used internally then the input should be able to be trusted and you can save yourself a bunch of code that's not adding value to the software.
You should check all arguments against the set of assumptions that you make in that function about their values.
As in your example, if a null argument to your function doesn't make any sense, and you're assuming that anyone using your function will know this, then being passed a null argument indicates some sort of error, and some sort of action should be taken (e.g. throwing an exception). And if you use asserts (as James Fassett got in and said before me ;-) ) they cost you nothing in a release version. (They cost you almost nothing in a debug version either.)
The same thing applies to any other assumption.
And it's going to be easier to trace the error if you generate it than if you leave it to some standard library routine to throw the exception. You will be able to provide much more useful contextual information.
It's outside the bounds of this question, but you do need to expose the assumptions that your function makes - for example, through the comment header to your function.
According to The Pragmatic Programmer by Andrew Hunt and David Thomas, it is the responsibility of the caller to make sure it gives valid input. So, you must now choose whether you consider a null input to be valid. Unless it makes specific sense to consider null to be a valid input (e.g. it is probably a good idea to consider null to be a legal input if you're testing for equality), I would consider it invalid. That way your program, when it hits incorrect input, will fail sooner. If your program is going to encounter an error condition, you want it to happen as soon as possible. In the event your function does inadvertently get passed a null, you should consider it to be a bug, and react accordingly (i.e. instead of throwing an exception, you should consider making use of an assertion that kills the program, until you are releasing the program).
Classic design by contract: If input is right, output will be right. If input is wrong, there is a bug. (if input is right but output is wrong, there is a bug. That's a gimme.)
I'll add a couple of elaborations to the excellent design-by-contract advice offered by Brian earlier...
The principles of "design by contract" require that you define what is acceptable for the caller to pass in (the valid domain of input values) and then, for any valid input, what the method/provider will do.
For an internal method, you can define NULLs as outside the domain of valid input parameters. In this case, you would immediately assert that the input parameter value is NOT NULL. The key insight in this contract specification is that any call passing in a NULL value IS A CALLER'S BUG and the error thrown by the assert statement is the proper behavior.
Now, while very well defined and parsimonious, if you're exposing the method to external/public callers, you should ask yourself: is that the contract I/we really want?
Probably not. In a public interface, you'd probably accept the NULL (as technically in the domain of inputs that the method accepts), but then decline gracefully with a return message. (More work, to meet the naturally more complex customer-facing requirement.)
In either case, what you're after is a protocol that handles all of the cases from both the perspective of the caller and the provider, not lots of scattershot tests that can make it difficult to assess the completeness or lack of completeness of the contractual condition coverage.
Most of the time, letting it just throw the exception is pretty reasonable as long as you are sure the exception won't be ignored.
If you can add something to it, however, it doesn't hurt to wrap the exception with one that is more accurate and rethrow it. Decoding "NullPointerException" is going to take a bit longer than "IllegalArgumentException("FilePath MUST be supplied")" (Or whatever).
Lately I've been working on a platform where you have to run an obfuscator before you test. Every stack trace looks like monkeys typing random crap, so I got in the habit of checking my arguments all the time.
I'd love to see a "nullable" or "nonull" modifier on variables and arguments so the compiler can check for you.
If you're writing a public API, do your caller the favor of helping them find their bugs quickly, and check for valid inputs.
If you're writing an API where the caller might be untrusted (or the caller of the caller), check for valid inputs, because it's good security.
If your APIs are only reachable by trusted callers, like "internal" in C#, then don't feel like you have to write all that extra code. It won't be useful to anyone.
