Jasmine: Why toBeUndefined and not.toBeDefined?

I'm just learning the Jasmine library, and I've noticed that Jasmine has a very limited number of built-in assertions. I've also noticed that, despite having such a limited number, two of its assertions appear to be redundant: toBeDefined/toBeUndefined.
In other words, both of these would seem to check for the same exact thing:
expect(1).toBeDefined();
expect(1).not.toBeUndefined();
Is there some reason for this, like a case where toBeDefined isn't the same as toBeUndefined? Or is this just the one assertion in Jasmine that has two perfectly equal ways of being invoked?

One might assume the same for toBeTruthy and toBeFalsy, or toBeLessThan and toBeGreaterThan (although I guess the missing assert from the last two is toEqual). In the end it comes down to readability and user preference.
To give you a more complete answer, it might be useful to take a look at the code that is invoked for these functions. The code that is executed goes through separate paths (so toBeUndefined is not simply !toBeDefined). The only real answer that makes sense is readability (or giving in to annoying feature requests). https://github.com/jasmine/jasmine/tree/main/src/core/matchers

Related

Can I ensure all tests contain an assertion in test/unit?

With test/unit, and minitest, is it possible to fail any test that doesn't contain an assertion, or would monkey-patching be required (for example, checking if the assertion count increased after each test was executed)?
Background: I shouldn't write unit tests without assertions - at a minimum, I should use assert_nothing_raised if I'm smoke testing to indicate that I'm smoke testing.
Usually I write tests that fail first, but I'm writing some regression tests. Alternatively, I could supply an incorrect expected value to see if the test is comparing the expected and actual value.
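For illustration, here is a rough, untested sketch of the monkey-patching idea (assuming minitest; the module name is made up), which flunks any test that finishes without recording a single assertion:
require "minitest/autorun"

# Hypothetical helper: fail any test whose assertion count never moved.
module FailTestsWithoutAssertions
  def after_teardown
    super
    # `assertions` is minitest's per-test assertion counter; only flunk
    # tests that otherwise passed, so real failures aren't double-reported.
    flunk "#{name} made no assertions" if assertions.zero? && passed?
  end
end

class Minitest::Test
  include FailTestsWithoutAssertions
end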
To ensure that unit tests actually verify anything, a technique called mutation testing is used.
For Ruby, you can take a look at Mutant.
As PK's link points out too, the presence of assertions in itself doesn't mean the unit test is meaningful and correct. I believe there is no automatic replacement for careful thinking and awareness.
Ensuring the tests fail first is a good practice, which should be made into a habit.
Apart from the things you mention, I often set wrong values in asserts in new tests, to check that the test really runs and fails. (Then I fix it of course :-) This is less obtrusive than editing the production code.
I don't really think that forcing the test to fail without an assert is really helpful. Having an assert in a test is not a goal in itself - the goal is to write useful tests.
The missing assert is just an indication that the test may not be useful. The interesting question is: will the test fail if something breaks? If it doesn't, it's obviously useless.
If all you're testing for is that the code doesn't crash, then assert_nothing_raised around it is just a kind of comment. But testing for "no explosions" probably indicates a weak test in itself. In most cases, it doesn't give you any useful information about your code (because "no crash != correct"), so why did you write the test in the first place? Plus I rather prefer a method that explodes properly to one that just returns a wrong result.
I've found the best regression tests come from the field: bang your app (or have your tester do it), and for each problem you find, write a test that fails. Fix it, and have the test pass.
Otherwise I'd test the behavior, not the absence of crashes. In the case that I have "empty" tests (meaning that I didn't write the test code yet), I usually put a #flunk inside to remind me.

Should a Unit-test replicate functionality or Test output?

I've run into this dilemma several times. Should my unit-tests duplicate the functionality of the method they are testing to verify its integrity? OR Should unit tests strive to test the method with numerous manually created instances of inputs and expected outputs?
I'm mainly asking the question for situations where the method you are testing is reasonably simple and its proper operation can be verified by glancing at the code for a minute.
Simplified example (in ruby):
def concat_strings(str1, str2)
  return str1 + " AND " + str2
end
Simplified functionality-replicating test for the above method:
def test_concat_strings
  10.times do
    str1 = random_string_generator
    str2 = random_string_generator
    assert_equal(str1 + " AND " + str2, concat_strings(str1, str2))
  end
end
I understand that most times the method you are testing won't be simple enough to justify doing it this way. But my question remains; is this a valid methodology in some circumstances (why or why not)?
Testing the functionality by using the same implementation doesn't test anything. If one has a bug in it, the other will as well.
But testing by comparing with an alternative implementation is a valid approach. For example, you might test an iterative (fast) method of calculating Fibonacci numbers by comparing it with a trivial recursive, yet slow, implementation of the same method.
A variation of this is using an implementation that only works for special cases; of course, then you can only use it for those cases.
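A minimal sketch of that Fibonacci idea (assuming minitest; fib_fast and fib_slow are made-up names, with the slow recursive version acting as the oracle):
require "minitest/autorun"

# Fast, iterative implementation under test.
def fib_fast(n)
  a, b = 0, 1
  n.times { a, b = b, a + b }
  a
end

# Trivial, obviously correct (but slow) reference implementation.
def fib_slow(n)
  n < 2 ? n : fib_slow(n - 1) + fib_slow(n - 2)
end

class FibonacciTest < Minitest::Test
  def test_fast_matches_slow_reference
    (0..20).each { |n| assert_equal fib_slow(n), fib_fast(n), "mismatch at n=#{n}" }
  end
end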
When choosing input values, using random values most of the time isn't very effective. I'd prefer carefully chosen values anytime. In the example you gave, null values and extremely long values which won't fit into a String when concatenated come to mind.
If you use random values, make sure you have a way to recreate the exact run with the same random values, for example by logging the seed value and having a way to set it at start time.
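For example, a small sketch of making a random test run reproducible in Ruby (TEST_SEED is a made-up environment variable name):
seed = (ENV["TEST_SEED"] || Random.new_seed).to_i
srand(seed)                   # seed the global RNG so rand/sample are repeatable
puts "Random seed: #{seed}"   # log it; re-run with TEST_SEED=<seed> to reproduce a failure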
It's a controversial stance, but I believe that unit testing using Derived Values is far superior to using arbitrary hard-coded input and output.
The issue is that as an algorithm becomes even slightly complex, the relationship between input and output becomes obscure if represented by hard-coded values. The unit test ends up being a postulate. It may work technically, but it hurts test maintainability because it leads to Obscure Tests.
Using Derived Values to test against the result establishes a much clearer relationship between test input and expected output.
The argument that this doesn't test anything is simply not true, because any test case will exercise only a part of a path through the SUT, so no single test case will reproduce the entire algorithm being tested, but the combination of tests will do so.
An additional benefit is that you can use fewer unit tests to cover the desired functionality, and even make them more communicative at the same time. The end result is terser and more maintainable unit tests.
In unit testing you should definitely manually come up with test cases (so input, output and what side-effects you are expecting - these will be expectations on your mock objects). You come up with these test cases in a way so that they cover all functionality of your class (e.g. all methods are covered, all branches of all if statements, etc.). Think about it more along the lines of creating documentation of your class by showing all possible usages.
Reimplementing the class is not a good idea, because not only you get obvious code/functionality duplication, but also it is likely that you will introduce the same bugs in this new implementation.
To test the functionality of a method, I'd use input and output pairs wherever possible. Otherwise you might be copy-pasting the functionality as well as the errors in its implementation. What are you testing then? You would be testing whether the functionality (including all of its errors) has changed over time, but you wouldn't be testing the correctness of the implementation.
Testing that the functionality hasn't changed over time might (temporarily) be useful during refactoring. But how often do you refactor such small methods?
Also, unit tests can be seen as documentation and as a specification of a method's inputs and expected outputs. Both should be as simple as possible so others can easily read and comprehend them. As soon as you introduce additional code/logic into a test, it becomes harder to read.
Your test actually looks like a fuzz test. Fuzz tests can be very useful, but in unit tests randomness should be avoided for the sake of reproducibility.
A Unit-Test should exercise your code, not something as part of the language you are using.
If the code's logic is to concatenate strings in a special way, you should be testing for that - otherwise you need to rely on your language/framework.
Finally, you should create your unit tests to fail first "with meaning". In other words, random values shouldn't be used (unless you're testing your random number generator isn't returning the same set of random values!)
Yes. It bothers me too... although I'd say that it is more prevalent with non-trivial computations. In order to avoid updating the test when the code changes, some programmers write an IsX=X test, which always succeeds irrespective of the SUT.
About duplicating functionality
You don't have to. Your test can state what the expected output is, not how you derived it.
Although in some non-trivial cases, it may make your test more readable to show how you derived the expected value - test as a spec. You shouldn't refactor away this duplication.
def doubler(x); x * 2; end

def test_doubler
  input, expected = 10, doubler(10)
  assert_equal expected, doubler(input)
end
Now if I change doubler(x) to be a tripler, the above test won't fail.
def doubler(x); x * 3; end
However this one would:
def test_doubler
  assert_equal(20, doubler(10))
end
randomness in unit tests - Don't.
Instead of random datasets, choose static, representative data points for testing and use an xUnit RowTest/TestCase to run the test with different data inputs. If n input-sets are identical for the unit, choose 1.
The test in the OP could be used as an exploratory test, or to determine additional representative input-sets. Unit tests need to be repeatable (see q#61400) - using random values defeats this objective.
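As a rough illustration of the row-test idea with plain minitest (which has no built-in RowTest, so this just generates one test method per hand-picked case; the cases are made up and assume the concat_strings method from the question is in scope):
require "minitest/autorun"

# Representative, hand-chosen cases instead of random data.
CONCAT_CASES = {
  ["foo", "bar"]    => "foo AND bar",
  ["", ""]          => " AND ",
  ["a" * 1000, "b"] => ("a" * 1000) + " AND b",
}

class ConcatStringsTest < Minitest::Test
  CONCAT_CASES.each_with_index do |((str1, str2), expected), i|
    define_method("test_concat_strings_case_#{i}") do
      assert_equal expected, concat_strings(str1, str2)
    end
  end
end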
Never use random data for input. If your test reports a failure, how are you ever going to be able to duplicate it? And don't use the same function to generate the expected result. If you have a bug in your method you're likely to put the same bug in your test. Compute the expected results by some other method.
Hard-coded values are perfectly fine, and make sure inputs are picked to represent all of the normal and edge cases. At the very least test the expected inputs as well as inputs in the wrong format or wrong size (eg: null values).
It's really quite simple -- a unit test must test whether the function works or not. That means you need to give a range of known inputs that have known outputs and test against that. There is no universal right way to do that. However, using the same algorithm for the method and the verification proves nothing but that you're adept at copy/paste.

Is checking Perl function arguments worth it?

There's a lot of buzz about MooseX::Method::Signatures and even before that, modules such as Params::Validate that are designed to type check every argument to methods or functions. I'm considering using the former for all my future Perl code, both personal and at my place of work. But I'm not sure if it's worth the effort.
I'm thinking of all the Perl code I've seen (and written) before that performs no such checking. I very rarely see a module do this:
my ($a, $b) = @_;
defined $a or croak '$a must be defined!';
!ref $a or croak '$a must be a scalar!';
...
@_ == 2 or croak "Too many arguments!";
Perhaps because it's simply too much work without some kind of helper module, but perhaps because in practice we don't send excess arguments to functions, and we don't send arrayrefs to methods that expect scalars - or if we do, we have use warnings; and we quickly hear about it - a duck typing approach.
So is Perl type checking worth the performance hit, or are its strengths predominantly shown in compiled, strongly typed languages such as C or Java?
I'm interested in answers from anyone who has experience writing Perl that uses these modules and has seen benefits (or not) from their use; if your company/project has any policies relating to type checking; and any problems with type checking and performance.
UPDATE: I read an interesting article on the subject recently, called Strong Testing vs. Strong Typing. Ignoring the slight Python bias, it essentially states that type checking can be suffocating in some instances, and even if your program passes the type checks, it's no guarantee of correctness - proper tests are the only way to be sure.
If it's important for you to check that an argument is exactly what you need, it's worth it. Performance only matters when you already have correct functioning. It doesn't matter how fast you can get a wrong answer or a core dump. :)
Now, that sounds like a stupid thing to say, but consider some cases where it isn't. Do I really care what's in @_ here?
sub looks_like_a_number { $_[0] !~ /\D/ }
sub is_a_dog { eval { $_[0]->DOES( 'Dog' ) } }
In those two examples, if the argument isn't what you expect, you are still going to get the right answer because the invalid arguments won't pass the tests. Some people see that as ugly, and I can see their point, but I also think the alternative is ugly. Who wins?
However, there are going to be times that you need guard conditions because your situation isn't so simple. The next thing you have to pass your data to might expect them to be within certain ranges or of certain types and don't fail elegantly.
When I think about guard conditions, I think through what could happen if the inputs are bad and how much I care about the failure. I have to judge that by the demands of each situation. I know that sucks as an answer, but I tend to like it better than a bondage-and-discipline approach where you have to go through all the mess even when it doesn't matter.
I dread Params::Validate because its code is often longer than my subroutine. The Moose stuff is very attractive, but you have to realize that it's a way for you to declare what you want and you still get what you could build by hand (you just don't have to see it or do it). The biggest thing I hate about Perl is the lack of optional method signatures, and that's one of the most attractive features in Perl 6 as well as Moose.
I basically concur with brian. How much you need to worry about your method's inputs depends heavily on how much you are concerned that a) someone will input bad data, and b) bad data will corrupt the purpose of the method. I would also add that there is a difference between external and internal methods. You need to be more diligent about public methods because you're making a promise to consumers of your class; conversely you can be less diligent about internal methods as you have greater (theoretical) control over the code that accesses it, and have only yourself to blame if things go wrong.
MooseX::Method::Signatures is an elegant solution to adding a simple declarative way to explain the parameters of a method. Method::Signatures::Simple and Params::Validate are nice but lack one of the features I find most appealing about Moose: the Type system. I have used MooseX::Declare and by extension MooseX::Method::Signatures for several projects and I find that the bar to writing the extra checks is so minimal it's almost seductive.
Yes, it's worth it - defensive programming is one of those things that are always worth it.
The counterargument I've seen presented to this is that checking parameters on every single function call is redundant and a waste of CPU time. This argument's supporters favor a model in which all incoming data is rigorously checked when it first enters the system, but internal methods have no parameter checks because they should only be called by code which will pass them data which has already passed the checks at the system's border, so it is assumed to still be valid.
In theory, I really like the sound of that, but I can also see how easily it can fall like a house of cards if someone uses the system (or the system needs to grow to allow use) in a way that was unforeseen when the initial validation border is established. All it takes is one external call to an internal function and all bets are off.
In practice, I'm using Moose at the moment and Moose doesn't really give you the option to bypass validation at the attribute level, plus MooseX::Declare handles and validates method parameters with less fuss than unrolling @_ by hand, so it's pretty much a moot point.
I want to mention two points here. The first is tests; the second is the performance question.
1) Tests
You mentioned that tests can do a lot and that tests are the only way to be sure that your code is correct. In general I would say this is absolutely correct. But tests themselves only solve one problem.
If you write a module, you have two problems, or let's say two different people that use your module: you as a developer, and a user that just uses your module. Tests help with the first - that your module is correct and does the right thing - but they don't help the user that just uses your module.
For the latter, I have one example. I had written a module using Moose and some other stuff, and my code always ended in a segmentation fault. So I began to debug my code and search for the problem. I spent around 4 hours finding the error. In the end the problem was that I had used Moose with the Array trait, called the "map" function, and didn't provide a subroutine - just a string or something else.
Sure, this was an absolutely stupid error of mine, but I spent a long time debugging it. In the end, a check that the argument is a subref would have cost the developer 10 seconds of time, and would have saved me (and probably others) a lot more.
I also know of other examples. I had written a REST client for an interface, completely OOP with Moose. You always got objects back, and you could change the attributes, but of course it didn't call the REST API for every change you made. Instead you change your values and in the end call an update() method that transfers the data and changes the values.
Then I had a user who wrote:
$obj->update({ foo => 'bar' })
Sure, he got an error back that update() did not work. But of course it didn't work, because the update() method doesn't accept a hashref; it only does a synchronisation of the current state of the object with the online service. The correct code would be:
$obj->foo('bar');
$obj->update();
The first call works because I never did any checking of the arguments, and I don't throw an error if someone gives more arguments than I expect. The method just starts like:
sub update {
    my ( $self ) = @_;
    ...
}
Sure, all my tests work 100% fine. But handling these errors that are not errors costs me time too. And it probably costs the user a lot more time.
So in the end: yes, tests are the only correct way to ensure that your code works correctly. But that doesn't mean that type checking is meaningless. Type checking is there to help all the non-developers (of your module) use your module correctly, and it saves you and others time finding dumb errors.
2) Performance
The short version: you don't care about performance until you have to.
That means that until your module works too slowly, performance is always fast enough and you don't need to care about it. If your module really is too slow, you need further investigation. But for that investigation you should use a profiler like Devel::NYTProf to see what is actually slow.
And I would say that in 99% of cases the slowness is not because you do type checking; it is more likely your algorithm: you do a lot of computation, call functions too often, etc. Often it helps to use a completely different solution - another, better algorithm, caching, or something else - and the performance hit turns out not to be your type checking. But even if the checking is the performance hit, then just remove it where it matters. There is no reason to drop the type checking where performance doesn't matter.
Do you think type checking matters in a case like the above, where I had written a REST client? 99% of the performance cost there is the number of requests that go to the web service and the time each request takes. Not using type checking or MooseX::Declare etc. would probably speed up absolutely nothing.
And even if you do see performance disadvantages, sometimes that is acceptable - because the speed doesn't matter, or because something gives you greater value. DBIx::Class is slower than pure SQL with DBI, but DBIx::Class gives you a lot for it.
Params::Validate works great, but of course checking args slows things down. Tests are mandatory (at least in the code I write).
Yes it's absolutely worth it, because it will help during development, maintenance, debugging, etc.
If a developer accidentally sends the wrong parameters to a method, a useful error message will be generated, instead of the error being propagated down to somewhere else.
I'm using Moose extensively for a fairly large OO project I'm working on. Moose's strict type checking has saved my bacon on a few occasions. Most importantly, it has helped avoid situations where "undef" values are incorrectly being passed to the method. In just those instances alone it saved me hours of debugging time.
The performance hit is definitely there, but it can be managed. 2 hours of using NYTProf helped me find a few Moose Attributes that I was grinding too hard and I just refactored my code and got 4x performance improvement.
Use type checking. Defensive coding is worth it.
Patrick.
Sometimes. I generally do it whenever I'm passing options via hash or hashref. In these cases it's very easy to misremember or misspell an option name, and checking with Params::Check can save a lot of troubleshooting time.
For example:
sub revise {
    my ($file, $options) = @_;
    my $tmpl = {
        test_mode       => { allow => [0,1], 'default' => 0 },
        verbosity       => { allow => qr/^\d+$/, 'default' => 1 },
        force_update    => { allow => [0,1], 'default' => 0 },
        required_fields => { 'default' => [] },
        create_backup   => { allow => [0,1], 'default' => 1 },
    };
    my $args = check($tmpl, $options, 1)
        or croak "Could not parse arguments: " . Params::Check::last_error();
    ...
}
Prior to adding these checks, I'd forget whether the names used underscores or hyphens, pass require_backup instead of create_backup, etc. And this is for code I wrote myself--if other people are going to use it, you should definitely do some sort of idiot-proofing. Params::Check makes it fairly easy to do type checking, allowed value checking, default values, required options, storing option values to other variables and more.

What is your personal approach/take on commenting?

Duplicate
What are your hard rules about commenting?
A Developer I work with had some things to say about commenting that were interesting to me (see below). What is your personal approach/take on commenting?
"I don't add comments to code unless
its a simple heading or there's a
platform-bug or a necessary
work-around that isn't obvious. Code
can change and comments may become
misleading. Code should be
self-documenting in its use of
descriptive names and its logical
organization - and its solutions
should be the cleanest/simplest way
to perform a given task. If a
programmer can't tell what a program
does by only reading the code, then
he's not ready to alter it.
Commenting tends to be a crutch for
writing something complex or
non-obvious - my goal is to always
write clean and simple code."
"I think there a few camps when it
comes to commenting, the
enterprisey-type who think they're
writing an API and some grand
code-library that will be used for
generations to come, the
craftsman-like programmer that thinks
code says what it does clearer than a
comment could, and novices that write
verbose/unclear code so as to need to
leave notes to themselves as to why
they did something."
There's a tragic flaw with the "self-documenting code" theory. Yes, reading the code will tell you exactly what it is doing. However, the code is incapable of telling you what it's supposed to be doing.
I think it's safe to say that all bugs are caused when code is not doing what it's supposed to be doing :). So if we add some key comments to provide maintainers with enough information to know what a piece of code is supposed to be doing, then we have given them the ability to fix a whole lot of bugs.
That leaves us with the question of how many comments to put in. If you put in too many comments, things become tedious to maintain and the comments will inevitably be out of date with the code. If you put in too few, then they're not particularly useful.
I've found regular comments to be most useful in the following places:
1) A brief description at the top of a .h or .cpp file for a class explaining the purpose of the class. This helps give maintainers a quick overview without having to sift through all of the code.
2) A comment block before the implementation of a non-trivial function explaining the purpose of it and detailing its expected inputs, potential outputs, and any oddities to expect when calling the function. This saves future maintainers from having to decipher entire functions to figure these things out.
Other than that, I tend to comment anything that might appear confusing or odd to someone. For example: "This array is 1 based instead of 0 based because of blah blah".
Well written, well placed comments are invaluable. Bad comments are often worse than no comments. To me, lack of any comments at all indicates laziness and/or arrogance on the part of the author of the code. No matter how obvious it is to you what the code is doing or how fantastic your code is, it's a challenging task to come into a body of code cold and figure out what the heck is going on. Well done comments can make a world of difference getting someone up to speed on existing code.
I've always liked Refactoring's take on commenting:
The reason we mention comments here is that comments often are used as a deodorant. It's surprising how often you look at thickly commented code and notice that the comments are there because the code is bad.
Comments lead us to bad code that has all the rotten whiffs we've discussed in the rest of this chapter. Our first action is to remove the bad smells by refactoring. When we're finished, we often find that the comments are superfluous.
As controversial as that is, it rings true for the code I've read. To be fair, Fowler isn't saying never to comment, but to think about the state of your code before you do.
You need documentation (in some form; not always comments) for a local understanding of the code. Code by itself tells you what it does, if you read all of it and can keep it all in mind. (More on this below.) Comments are best for informal or semiformal documentation.
Many people say comments are a code smell, replaceable by refactoring, better naming, and tests. While this is true of bad comments (which are legion), it's easy to jump to concluding it's always so, and hallelujah, no more comments. This puts all the burden of local documentation -- too much of it, I think -- on naming and tests.
Document the contract of each function and, for each type of object, what it represents and any constraints on a valid representation (technically, the abstraction function and representation invariant). Use executable, testable documentation where practical (doctests, unit tests, assertions), but also write short comments giving the gist where helpful. (Where tests take the form of examples, they're incomplete; where they're complete, precise contracts, they can be as much work to grok as the code itself.) Write top-level comments for each module and each project; these can explain conventions that keep all your other comments (and code) short. (This supports naming-as-documentation: with conventions established, and a place we can expect to find subtleties noted, we can be confident more often that the names tell all we need to know.) Longer, stylized, irritatingly redundant Javadocs have their uses, but helped generate the backlash.
(For instance, this:
Perform an n-fold frobulation.
@param n the number of times to frobulate
@param x the x-coordinate of the center of frobulation
@param y the y-coordinate of the center of frobulation
@param z the z-coordinate of the center of frobulation
could be like "Frobulate n times around the center (x,y,z)." Comments don't have to be a chore to read and write.)
I don't always do as I say here; it depends on how much I value the code and who I expect to read it. But learning how to write this way made me a better programmer even when cutting corners.
Back on the claim that we document for the sake of local understanding: what does this function do?
def is_even(n): return is_odd(n-1)
Tests if an integer is even? If is_odd() tests if an integer is odd, then yes, that works. Suppose we had this:
def is_odd(n): return is_even(n-1)
The same reasoning says this is_odd() tests if an integer is odd. Put them together, of course, and neither works, even though each works if the other does. Change it a bit and we'd have code that does work, but only for natural numbers, while still locally looking like it works for integers. In microcosm that's what understanding a codebase is like: tracing dependencies around in circles to try to reverse-engineer assumptions the author could have explained in a line or two if they'd bothered. I hate the expense of spirit thoughtless coders have put me to this way over the past couple of decades: oh, this method looks like it has the side effect of farbuttling the warpcore... always? Well, if odd crobuncles desaturate, at least; do they? Better check all the crobuncle-handling code... which will pose its own challenges to understanding. Good documentation cuts this O(n) pointer-chasing down to O(1): e.g. knowing a function's contract and the contracts of the things it explicitly uses, the function's code should make sense with no further knowledge of the system. (Here, contracts saying is_even() and is_odd() work on natural numbers would tell us that both functions need to test for n==0.)
My only real rule is that comments should explain why code is there, not what it is doing or how it is doing it. Those things can change, and if they do the comments have to be maintained. The purpose the code exists in the first place shouldn't change.
The purpose of comments is to explain the context - the reason for the code; this, the programmer cannot know from mere code inspection. For example:
strangeSingleton.MoveLeft(1.06);
badlyNamedGroup.Ignite();
Who knows what the heck this is for? But with a simple comment, all is revealed:
//when under attack, sidestep and retaliate with rocket bundles
strangeSingleton.MoveLeft(1.06);
badlyNamedGroup.Ignite();
Seriously, comments are for the why, not the how, unless the how is unintuitive.
While I agree that code should be self-readable, I still see a lot of value in adding extensive comment blocks for explaining design decisions. For example, "I did xyz instead of the common practice of abc because of this caveat..." with a URL to a bug report or something.
I try to look at it as: If I'm dead and gone and someone straight out of college has to fix a bug here, what are they going to need to know?
In general I see comments used to explain poorly written code. Most code can be written in a way that would make comments redundant. Having said that I find myself leaving comments in code where the semantics aren't intuitive, such as calling into an API that has strange or unexpected behavior etc...
I also generally subscribe to the self-documenting code idea, so I think your developer friend gives good advice, and I won't repeat that, but there are definitely many situations where comments are necessary.
A lot of times I think it boils down to how close the implementation is to the types of ordinary or easy abstractions that code-readers in the future are going to be comfortable with or more generally to what degree the code tells the entire story. This will result in more or fewer comments depending on the type of programming language and project.
So, for example if you were using some kind of C-style pointer arithmetic in an unsafe C# code block, you shouldn't expect C# programmers to easily switch from C# code reading (which is probably typically more declarative or at least less about lower-level pointer manipulation) to be able to understand what your unsafe code is doing.
Another example is when you need to do some work deriving or researching an algorithm or equation or something that is not going to end up in your code but will be necessary to understand if anyone needs to modify your code significantly. You should document this somewhere and having at least a reference directly in the relevant code section will help a lot.
I don't think it matters how many or how few comments your code contains. If your code contains comments, they have to be maintained, just like the rest of your code.
EDIT: That sounded a bit pompous, but I think that too many people forget that even the names of the variables, or the structures we use in the code, are all simply "tags" - they only have meaning to us, because our brains see a string of characters such as customerNumber and understand that it is a customer number. And while it's true that comments lack any "enforcement" by the compiler, they aren't so far removed. They are meant to convey meaning to another person, a human programmer that is reading the text of the program.
If the code is not clear without comments, first make the code a clearer statement of intent, then only add comments as needed.
Comments have their place, but primarily for cases where the code is unavoidably subtle or complex (inherent complexity is due to the nature of the problem being solved, not due to laziness or muddled thinking on the part of the programmer).
Requiring comments and "measuring productivity" in lines-of-code can lead to junk such as:
/*****
 *
 * Increase the value of variable i,
 * but only up to the value of variable j.
 *
 *****/
if (i < j) {
    ++i;
} else {
    i = j;
}
rather than the succinct (and clear to the appropriately-skilled programmer):
i = Math.min(j, i + 1);
YMMV
The vast majority of my comments are at the class level and method level, and I like to describe the higher-level view instead of just args/return value. I'm especially careful to describe any "non-linearities" in the function (limits, corner cases, etc.) that could trip up the unwary.
Typically I don't comment inside a method, except to mark "FIXME" items, or very occasionally some sort of "here be monsters" gotcha that I just can't seem to clean up, but I work very hard to avoid those. As Fowler says in Refactoring, comments tend to indicate smelly code.
Comments are part of code, just like functions, variables and everything else - and if changing the related functionality the comment must also be updated (just like function calls need changing if function arguments change).
In general, when programming you should do things once in one place only.
Therefore, if what code does is explained by clear naming, no comment is needed - and this is of course always the goal - it's the cleanest and simplest way.
However, if further explanation is needed, I will add a comment, prefixed with INFO, NOTE, and similar...
An INFO: comment is for general information if someone is unfamiliar with this area.
A NOTE: comment is to alert of a potential oddity, such as a strange business rule / implementation.
If I specifically don't want people touching code, I might add a WARNING: or similar prefix.
What I don't use, and am specifically opposed to, are changelog-style comments - whether inline or at the head of the file - these comments belong in the version control software, not the sourcecode!
I prefer to use "Hansel and Gretel" type comments; little notes in the code as to why I'm doing it this way, or why some other way isn't appropriate. The next person to visit this code will probably need this info, and more often than not, that person will be me.
As a contractor I know that some people maintaining my code will be unfamiliar with the advanced features of ADO.Net I am using. Where appropriate, I add a brief comment about the intent of my code and a URL to an MSDN page that explains in more detail.
When I was learning C# and reading other people's code, I was often frustrated by questions like, "which of the 9 meanings of the colon character does this one mean?" If you don't know the name of the feature, how do you look it up?! (Side note: this would be a good IDE feature: I select an operator or other token in the code, and right-click shows me its language part and feature name. C# needs this, VB less so.)
As for the "I don't comment my code because it is so clear and clean" crowd, I find sometimes they overestimate how clear their very clever code is. The thought that a complex algorithm is self-explanatory to someone other than the author is wishful thinking.
And I like #17 of 26's comment (emphasis added):
... reading the code will tell you exactly what it is doing. However, the code is incapable of telling you what it's supposed to be doing.
I very, very rarely comment. My theory is that if you have to comment, it's because you're not doing things the best way possible. A "work around" is about the only thing I would comment, because work-arounds often don't make sense, but there is a reason you are doing it, so you need to explain.
Comments are a symptom of sub-par code, IMO. I'm a firm believer in self-documenting code. Most of my work can be easily read, even by a layman, because of descriptive variable names, simple form, and many small, accurate methods (IOW, not having methods that do 5 different things).
Comments are part of a programmers toolbox and can be used and abused alike. It's not up to you, that other programmer, or anyone really to tell you that one tool is bad overall. There are places and times for everything, including comments.
I agree with most of what's been said here, though: code should be written so clearly that it is self-descriptive and thus comments aren't needed, but sometimes that conflicts with the best/optimal implementation, although that could probably be solved with an appropriately named method.
I agree with the self-documenting code theory; if I can't tell what a piece of code is doing simply by reading it, then it probably needs refactoring. However, there are some exceptions to this. I'll add a comment if:
I'm doing something that you don't normally see
There are major side effects or implementation details that aren't obvious, or won't be next year
I need to remember to implement something, although I prefer an exception in these cases
I'm forced to go do something else while I'm having good ideas, or a difficult time with the code - then I'll add sufficient comments to temporarily preserve my mental state
Most of the time I find that the best comment is the function or method name I am currently coding in. All other comments (except for the reasons your friend mentioned - I agree with them) feel superfluous.
So in this instance commenting feels like overkill:
/*
 * this function adds two integers
 */
int add(int x, int y)
{
    // add x to y and return it
    return x + y;
}
because the code is self-describing. There is no need to comment this kind of thing as the name of the function clearly indicates what it does and the return statement is pretty clear as well. You would be surprised how clear your code becomes when you break it down into tiny functions like this.
When programming in C, I'll use multi-line comments in header files to describe the API, eg parameters and return value of functions, configuration macros etc...
In source files, I'll stick to single-line comments which explain the purpose of non-self-evident pieces of code or to sub-section a function which can't be refactored to smaller ones in a sane way. Here's an example of my style of commenting in source files.
If you ever need more than a few lines of comments to explain what a given piece of code does, you should seriously consider if what you're doing can't be done in a better way...
I write comments that describe the purpose of a function or method and the results it returns in adequate detail. I don't write many inline code comments because I believe my function and variable naming to be adequate to understand what is going on.
I develop on a lot of legacy PHP systems that are absolutely terribly written. I wish the original developer would have left some type of comments in the code to describe what was going on in those systems. If you're going to write indecipherable or bad code that someone else will read eventually, you should comment it.
Also, if I am doing something a particular way that doesn't look right at first glance, but I know it is because the code in question is a workaround for a platform or something like that, then I'll comment with a WARNING comment.
Sometimes code does exactly what it needs to do, but is kind of complicated and wouldn't be immediately obvious the first time someone else looked at it. In this case, I'll add a short inline comment describing what the code is intended to do.
I also try to give methods and classes documentation headers, which is good for intellisense and auto-generated documentation. I actually have a bad habit of leaving 90% of my methods and classes undocumented. You don't have time to document things when you're in the middle of coding and everything is changing constantly. Then when you're done you don't feel like going back and finding all the new stuff and documenting it. It's probably good to go back every month or so and just write a bunch of documentation.
Here's my view (based on several years of doctoral research):
There's a huge difference between commenting functions (sort of a black box use, like JavaDocs), and commenting actual code for someone who will read the code ("internal commenting").
Most "well written" code shouldn't require much "internal commenting" because if it performs a lot then it should be broken into enough function calls. The functionality for each of these calls is then captured in the function name and in the function comments.
Now, function comments are indeed the problem, and in some ways your friend is right, that for most code there is no economical incentive for complete specifications the way that popular APIs are documented. The important thing here is to identify what are the "directives": directives are those information pieces that directly affect clients, and require some direct action (and are often unexpected). For example, X must be invoked before Y, don't call this from outside a UI thread, be aware that this has a certain side effect, etc. These are the things that are really important to capture.
Since most people never read full function documentations, and skim what they do read, you can actually increase the chances of awareness by capturing only the directives rather than the whole description.
I comment as much as needed - then, as much as I will need it a year later.
We add comments which provide the API reference documentation for all public classes / methods / properties / etc... This is well worth the effort because XML Documentation in C# has the nice effect of providing IntelliSense to users of these public APIs. .NET 4.0's code contracts will enable us to improve further on this practice.
As a general rule, we do not document internal implementations as we write code unless we are doing something non-obvious. The theory is that while we are writing new implementations, things are changing and comments are more likely than not to be wrong when the dust settles.
When we go back in to work on an existing piece of code, we add comments when we realize that it's taking some thought to figure out what in the heck is going on. This way, we wind up with comments where they are more likely to be correct (because the code is more stable) and where they are more likely to be useful (if I'm coming back to a piece of code today, it seems more likely that I might come back to it again tomorrow).
My approach:
Comments bridge the gap between context / real world and code. Therefore, each and every single line is commented, in correct English language.
I DO reject code that doesn't observe this rule in the strictest possible sense.
Usage of well-formatted XML comments is self-evident.
Sloppy commenting means sloppy code!
Here's how I wrote code:
if (hotel.isFull()) {
    print("We're fully booked");
} else {
    Guest guest = promptGuest();
    hotel.checkIn(guest);
}
here's a few comments that I might write for that code:
// if hotel is full, refuse checkin, otherwise
// prompt the user for the guest info, and check in the guest.
If your code reads like prose, there is no sense in writing comments that simply repeat what the code says, since the mental processing needed for reading the code and the comments would be almost equal; and if you read the comments first, you will still need to read the code as well.
On the other hand, there are situations where it is impossible or extremely difficult to make the code read like prose; that's where comments can fill the gap.

What types of coding anti-patterns do you always refactor when you cross them?

I just refactored some code that was in a different section of the class I was working on because it was a series of nested conditional operators (?:) that was made a ton clearer by a fairly simple switch statement (C#).
When will you touch code that isn't directly what you are working on to make it more clear?
I once was refactoring and came across something like this code:
string strMyString;
try
{
    strMyString = Session["MySessionVar"].ToString();
}
catch
{
    strMyString = "";
}
Resharper pointed out that the .ToString() was redundant, so I took it out. Unfortunately, that ended up breaking the code. Whenever MySessionVar was null, it wasn't causing the NullReferenceException that the code relied on to bump it down to the catch block. I know, this was some sad code. But I did learn a good lesson from it. Don't rapidly go through old code relying on a tool to help you do the refactoring - think it through yourself.
I did end up refactoring it as follows:
string strMyString = Session["MySessionVar"] as string ?? "";
Update: Since this post is being upvoted and technically doesn't contain an answer to the question, I figured I should actually answer the question. (Ok, it was bothering me to the point that I was actually dreaming about it.)
Personally I ask myself a few questions before refactoring.
1) Is the system under source control? If so, go ahead and refactor because you can always roll back if something breaks.
2) Do unit tests exist for the functionality I am altering? If so, great! Refactor. The danger here is that the existence of unit tests doesn't indicate the accuracy and scope of said unit tests. Good unit tests should pick up any breaking changes.
3) Do I thoroughly understand the code I am refactoring? If there's no source control and no tests and I don't really understand the code I am changing, that's a red flag. I'd need to get more comfortable with the code before refactoring.
In case #3 I would probably spend the time to actually track all of the code that is currently using the method I am refactoring. Depending on the scope of the code this could be easy or impossible (ie. if it's a public API). If it comes down to being a public API then you really need to understand the original intent of the code from a business perspective.
I only refactor it if tests are already in place. If not, it's usually not worth my time to write tests for and refactor presumably working code.
This is a small, minor antipattern but it so irritates me that whenever I find it, I expunge it immediately. In C (or C++ or Java)
if (p)
    return true;
else
    return false;
becomes
return p;
In Scheme,
(if p #t #f)
becomes
p
and in ML
if p then true else false
becomes
p
I see this antipattern almost exclusively in code written by undergraduate students. I am definitely not making this up!!
Whenever I come across it and I don't think changing it will cause problems (i.e. I can understand it enough to know what it does; the level of voodoo is low).
I only bother to change it if there is some other reason I'm modifying the code.
How far I'm willing to take it depends on how confident I am that I won't break anything and how extensive my own changes to the code are going to be.
This is a great situation to show off the benefits of unit tests.
If unit tests are in place, developers can bravely and aggressively refactor oddly written code they might come across. If it passes the unit tests and you've increased readability, then you've done your good deed for the day and can move on.
Without unit tests, simplifying complex code that's filled with voodoo presents a great risk of breaking the code and not even knowing you've introduced a new bug! So most developers will take the cautious route and move on.
For simple refactoring, I try to clean up deeply nested control structures and really long functions (more than one screen worth of text). However, it's not a great idea to refactor code without a good reason (especially in a big team of developers). In general, unless the refactoring will make a big improvement in the code or fix an egregious sin, I try to leave well enough alone.
Not refactoring per se, but just as a matter of general housekeeping, I generally do this stuff when I start work on a module:
Remove stupid comments
Comments that say nothing more than the function signature already says
Comments that are pure idiocy like "just what it looks like"
Changelogs at the top of the file (we have version control for a reason)
Any API docs that are clearly out-of-sync with the code
Remove commented-out chunks of code
Add version control tags like $Id$ if they are missing
Fix whitespace issues (this can be annoying to others though because your name shows up for a lot of lines in a diff even if all you did was change whitespace)
Remove whitespace at the end of the lines
Change tabs->spaces (for that is our convention where I work)
If the refactor makes the code much easier to read, the most common for me would be duplicate code, e.g. an if/else that only differs by the first/last commands.
if ($something) {
    load_data($something);
} else {
    load_data($something);
    echo "Loaded";
    do_something_else();
}
More than (arguably) three or four lines of duplicate code always makes me think about refactoring. Also, I tend to move code around a lot, extracting the code I predict to be used more frequently into a separate place - a class with its own well-defined purpose and responsibilities, or a static method of a static class (usually placed in my Utils.* namespace).
But, to answer your question: yes, there are lots of cases when making the code shorter does not necessarily mean making it well structured and readable. Using the ?? operator in C# is another example. What you also have to think about are the new features in your language of choice - e.g. LINQ can be used to do some stuff in a very elegant manner but can also make a very simple thing very unreadable and overly complex. You need to weigh these two things very carefully; in the end it all boils down to your personal taste and, mostly, experience.
Well, this is another "it depends" answer, but I am afraid it has to be.
I almost always break >2 similar conditionals into a switch, most often with regards to enums. I will also use a short early return instead of wrapping a long block in an else.
ex:
if (condition) {
    // lots of code
    // returns value
} else {
    return null;
}
becomes:
if (!condition)
    return null;
// lots of code...
// return value
Breaking out early reduces extra indents and reduces long bits of code. Also, as a general rule, I don't like methods with more than 10-15 lines of code. I like methods to have a singular purpose, even if it means creating more private methods internally.
Usually I don't refactor the code if I'm just browsing it, not actively working on it.
But sometimes ReSharper points out some stuff I just can't resist to Alt+Enter. :)
I tend to refactor very long functions and methods if I understand the set of inputs and outputs to a block.
This helps readability no end.
I would only refactor the code that I come across and am not actively working on after going through the following steps:
Speak with the author of the code (not always possible) to figure out what that piece of code does. Even if it is obvious to me as to what the piece of code is doing, it always helps to understand the rationale behind why the author may have decided to do something in a certain way. Spending a couple of minutes talking about it would not only help the original author understand your point of view, it also builds trust within the team.
Know or find out what that piece of code is doing in order to test it after re-factoring (A build process with unit tests is very helpful here. It makes the whole thing quick and easy). Run the unit tests before and after the change and ensure nothing is breaking due to your changes.
Send out a heads up to the team (if working with others) and let them know of the upcoming change so nobody is surprised when the change actually occurs
Refactoring for the sake of it is one of the roots of all evil. Please don't ever do it. (Unit tests do somewhat mitigate this).
Are vast swaths of commented-out code an antipattern? While not code specifically (it's comments), I see a lot of this kind of behaviour from people who don't understand how to use source control, and who want to keep the old code around for later so they can more easily go back if a bug was introduced. Whenever I see vast swaths of code that are commented out, I almost always remove them in their entirety.
I try to refactor based on the following factors:
Do I understand enough to know what's happening?
Can I easily revert the change if it breaks the code?
Will I have enough time to revert the change if it breaks the build?
And sometimes, if I have enough time, I refactor to learn. As in, I know my change may break the code, but I don't know where and how. So I change it anyway to find out where it breaks and why. That way I learn more about the code.
My domain has been mobile software (cell phones), where most of the code resides on my PC and won't impact others. Since I also maintain the CI build system for my company, I can run a complete product build (for all phones) on the refactored code to ensure it doesn't break anything else. But that's a personal luxury which you may not have.
I tend to refactor global constants and enumerations quite a bit if they can be deemed a low refactoring risk. Maybe it's just me, but something like a ClientAccountStatus enum should be in or close to the ClientAccount class rather than being in a GlobalConstAndEnum class.
Deletion/updating of comments which are clearly wrong or clearly pointless.
Removing them is:
safe
version control means you can find them again
improves the quality of the code to others and yourself
It is about the only 100% risk free refactoring I know.
Note that doing this to structured comments like javadoc is a different matter. Changing those is not risk free; they are very likely worth fixing or removing, but it's not the dead-certain, guaranteed-safe fix that deleting a plainly incorrect ordinary code comment is.
