I want to get the last element of a lazy but finite Seq in Raku, e.g.:
my $s = lazy gather for ^10 { take $_ };
The following don't work:
say $s[* - 1];
say $s.tail;
These ones work but don't seem too idiomatic:
say (for $s<> { $_ }).tail;
say (for $s<> { $_ })[* - 1];
What is the most idiomatic way of doing this while keeping the original Seq lazy?
What you're asking about ("get[t]ing the last element of a lazy but finite Seq … while keeping the original Seq lazy") isn't possible. I don't mean that it's not possible with Raku – I mean that, in principle, it's not possible for any language that defines "laziness" the way Raku does with, for example, the is-lazy method.
In particular, when a Seq is lazy in Raku, that "means that [the Seq's] values are computed on demand and stored for later use." Additionally, one of the defining features of a lazy iterable is that it cannot know its own length while remaining lazy – that's why calling .elems on a lazy iterable throws an error:
my $s = lazy gather for ^10 { take $_ };
say $s.is-lazy; # OUTPUT: «True»
$s.elems; # THROWS: «Cannot .elems a lazy list onto a Seq»
Now, at this point, you might reasonably be thinking "well, maybe Raku doesn't know how long $s is, but I can tell that it has exactly 10 elements in it." And you're not wrong – with that code, $s is indeed guaranteed to have 10 elements. This means that, if you want to get the tenth (last) element of $s, you can do so with $s[9]. And accessing $s's tenth element like that won't change the fact that $s is lazy ($s.is-lazy will still return True).
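A quick sketch of that claim, per the behaviour just described:

my $s = lazy gather for ^10 { take $_ };
say $s[9];      # OUTPUT: «9»
say $s.is-lazy; # OUTPUT: «True» – indexing with an index you know is valid doesn't de-lazify $s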
But, importantly, you can only do so because you know something "extra" about $s, and that extra info undoes a good chunk of the reason you might want a list to be lazy in practice.
To see what I mean, consider a very similar Seq:
my $s2 = lazy gather for ^10 { last if rand > .95; take $_ };
say $s2.is-lazy; # OUTPUT: «True»
Now, $s2 probably has 10 elements, but it might not – the only way to know is to iterate through it and find out. In turn, this means $s2[9] does not jump to the tenth element the way $s[9] did; it iterates through $s2 just like you'd need to. And, as a result, if you run $s2[9], then $s2 will no longer be lazy (i.e., $s2.is-lazy will return False).
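Sketched out (again, just illustrating the behaviour described above):

my $s2 = lazy gather for ^10 { last if rand > .95; take $_ };
say $s2[9];      # OUTPUT: «9» – or Nil, if the gather bailed out early
say $s2.is-lazy; # OUTPUT: «False» – answering the question required iterating all of $s2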
And this is, in effect, what you did in the code in your question:
my $s = lazy gather for ^10 { take $_ };
say $s.is-lazy; # OUTPUT: «True»
say (for $s<> { $_ }).tail; # OUTPUT: «9»
say $s.is-lazy; # OUTPUT: «False»
Because Raku cannot ever know that it has reached the tail of a lazy Seq, the only way it could tell you the .tail is to fully iterate $s. And that necessarily means that $s is no longer lazy.
Two complications
It's worth mentioning two adjacent topics that aren't actually related but that are close enough that they trip some people up.
First, nothing I've said about lazy iterables not knowing their length precludes some non-lazy iterables from knowing their length. Indeed, a decent number of Raku types do both the Iterator role and the PredictiveIterator role – and the main point of a PredictiveIterator is that it does know how many elements it can produce without needing to produce/iterate them. But PredictiveIterators cannot be lazy.
The second potentially confusing topic is closely related to the first: while no PredictiveIterator can be lazy (that is, none will ever have an .is-lazy method that returns True), some PredictiveIterators have behavior that is very similar to laziness – and, in fact, may even be colloquially referred to as "lazy".
I can't do a great job explaining this distinction because, quite honestly, I don't fully understand it myself. But I can give you an example: the .lines method on an IO::Handle. It's certainly the case that reading the lines of a huge file behaves a lot like dealing with a lazy iterable: most obviously, you can process each line without ever having the whole file in memory. And the docs even say that "lines are read lazily" with the .lines method.
On the other hand:
my $l = 'some-file-with-100_000-lines.txt'.IO.lines;
say $l.is-lazy; # OUTPUT: «False»
say $l.iterator ~~ PredictiveIterator; # OUTPUT: «True»
say $l.elems; # OUTPUT: «100000»
So I'm not quite sure whether it's fair to say that $l "is a lazy iterable", but if it is, it's "lazy" in a different way than $s was.
I realize that was a lot, but I hope it is helpful. If you have a more specific use case in mind for laziness (I bet it wasn't gathering the numbers from zero to nine!), I'd be happy to address that more specifically. And if anyone else can fill in some of the details with .lines and other lazy-not-lazy PredictiveIterators, I'd really appreciate it!
Drop the lazy
Lazy sequences in Raku are designed to work well as is. You don't need to emphasize they're lazy by adding an explicit lazy.
If you add an explicit lazy, Raku interprets that as a request to block operations such as .tail, because they will almost certainly render laziness moot and, if called on an infinite sequence (or even just a sufficiently large one), hang the program or run it out of memory.
So, either drop the lazy, or don't invoke operations like .tail that will be blocked if you do.
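For the code in the question, the first option is just (a sketch):

my $s = gather for ^10 { take $_ };
say $s.is-lazy; # OUTPUT: «False»
say $s.tail;    # OUTPUT: «9»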
Expanded version of my original answer
As noted by @ugexe, the idiomatic solution is to drop the lazy.
Quoting my answer to the SO About Laziness:
if a gather is asked if it's lazy, it returns False.
Aiui, something like the following applies:
Some lazy sequence producers may be actually or effectively infinite. If so, calling .tail etc on them will hang the calling program. Conversely, other lazy sequences perform fine when all their values are consumed in one go. How should Raku distinguish between these two scenarios?
A decision was made in 2015 to let value producing datatypes emphasize or deemphasize their laziness via their response to an .is-lazy call.
Returning True signals that a sequence is not only lazy but wants to be known to be lazy by consuming code that calls .is-lazy. (Not so much end-user code but instead built-in consuming features such as @-sigilled variables handling an assignment and trying to determine whether or not to assign eagerly.) Built-in consuming features take a True as a signal that they ought to block calls like .tail. If a dev knows this is overly conservative, they can add an eager (or remove an unneeded lazy).
Conversely, a datatype, or even a particular object instance, may return False to signal that it does not want to be considered lazy. This may be because the actual behaviour of a particular datatype or instance is eager, but it might instead be technically lazy yet not want a consumer to block operations such as .tail – because it knows they will not be harmful, or at least prefers that to be the default presumption. If a dev knows better (because, say, it hangs the program), or at least wants to block potentially problematic operations, they can add a lazy (or remove an unneeded eager).
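To illustrate the consuming-feature side of that, here's a sketch using one such built-in consumer, assignment to an @-sigilled variable:

my @eager = gather for ^10 { take $_ };      # gather answers False to .is-lazy, so this assigns eagerly
my @lazy  = lazy gather for ^10 { take $_ }; # .is-lazy is True, so the assignment stays lazy
say @lazy.is-lazy;                           # OUTPUT: «True»
say @lazy[3];                                # OUTPUT: «3» – reifies only as far as needed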
I think this approach works well, but the doc and error messages mentioning "lazy" may not have caught up with the shift made in 2015. So:
If you've been confused by some doc about laziness, please search for doc issues with "lazy" in them, or "laziness", and add comments to existing issues, or file a new doc issue (perhaps linking to this SO answer).
If you've been confused by a Rakudo error message mentioning laziness, please search for Rakudo issues with "lazy" in them, and tagged [LTA] (which means "Less Than Awesome"), and add comments, or file a new Rakudo issue (with an [LTA] tag, and perhaps a link to this SO answer).
Further discussion
the docs ... say “If you want to force lazy evaluation use the lazy subroutine or method. Binding to a scalar or sigilless container will also force laziness.”
Yes. Aiui this is correct.
[which] sounds like it implies "my $x := lazy gather { ... }" is the same as "my $x := gather { ... }".
No.
An explicit lazy statement prefix or method adds emphasis to laziness, and Raku interprets that to mean it ought block operations like .tail in case they hang the program.
In contrast, binding to a variable neither emphasizes nor deemphasizes laziness; it merely relays onward whatever the bound producer datatype/instance has chosen to convey via .is-lazy.
not only in connection with gather but elsewhere as well
Yes. It's about the result of .is-lazy:
my $x = (1, { .say; $_ + 1 } ... 1000);
my $y = lazy (1, { .say; $_ + 1 } ... 1000);
both act lazily ... but $x.tail is possible while $y.tail is not.
Yes.
An explicit lazy statement prefix or method forces the answer to .is-lazy to be True. This signals to a consumer that cares about the dangers of laziness that it should become cautious (eg rejecting .tail etc.).
(Conversely, an eager statement prefix or method can be used to force the answer to .is-lazy to be False, making timid consumers accept .tail etc calls.)
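For example (a sketch; the .say side effect from the earlier examples is dropped for brevity):

my $y = lazy (1, { $_ + 1 } ... 1000);
say $y.is-lazy;    # OUTPUT: «True» – so a bare $y.tail would be refused
say $y.eager.tail; # OUTPUT: «1000» – .eager reifies the sequence, and the result's .is-lazy is False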
I take from this that there are two kinds of laziness in Raku, and one has to be careful to see which one is being used where.
It's two kinds of what I'll call consumption guidance:
Don't-tail-me If an object returns True from an .is-lazy call then it is treated as if it might be infinite. Thus operations like .tail are blocked.
You-can-tail-me If an object returns False from an .is-lazy call then operations like .tail are accepted.
It's not so much that there's a need to be careful about which of these two kinds is in play, but if one wants to call operations like tail, then one may need to enable that by inserting an eager or removing a lazy, and one must take responsibility for the consequences:
If the program hangs due to use of .tail, well, DIHWIDT.
If you suddenly consume all of a lazy sequence and haven't cached it, well, maybe you should cache it.
Etc.
What I would say is that the error messages and/or doc may well need to be improved.
Suppose I have the following snippet of code:
bool flag = true;
auto myFunction = [](int a, int b, bool flag)
{
    if (flag)
    {
        // do something with a and b
    }
};
Later in the code, I call myFunction thousands of times in a loop, for the same value of flag.
Then, I have another loop that also calls myFunction thousands of times, but for a different value of flag.
My understanding is that, being a lambda function, it is an inline function and thus will be repeated wherever it is called.
My question is: will the compiler evaluate the if statement before "copying" the inline function, and thus not have to perform that check at every single iteration?
Disclaimers:
I know that this may fall under the category of micro-optimization, but I would like an answer nonetheless.
My example is silly; I could just put the if statements outside the loops. But this is just meant to be a representative example of a much more complicated case.
My use of lambda functions is inspired by the answer to this question.
Thanks!
My question is: will the compiler evaluate the if statement before "copying" the inline function, and thus not have to perform that check at every single iteration?
The language does not require it. An optimizing compiler might be able to pull that off if it knows the value of flag at compile time. However, it's hard to tell without looking at the assembly code generated by the compiler.
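If you want a guarantee rather than an optimizer's best effort, one option (a sketch, assuming C++17 and a made-up body for the lambda) is to pass the flag as a compile-time constant and branch with if constexpr, so each instantiation contains only the path it needs:

#include <iostream>
#include <type_traits>

int main() {
    // The flag arrives as std::true_type / std::false_type, so its value is part
    // of the type and the branch is resolved when the lambda is instantiated.
    auto myFunction = [](int a, int b, auto flag) {
        if constexpr (decltype(flag)::value) {
            std::cout << a + b << '\n';   // "do something with a and b"
        } else {
            std::cout << a - b << '\n';   // the other path
        }
    };

    for (int i = 0; i < 3; ++i)
        myFunction(i, 1, std::true_type{});   // first loop: flag fixed to true

    for (int i = 0; i < 3; ++i)
        myFunction(i, 1, std::false_type{});  // second loop: flag fixed to false
}

With a plain bool argument, whether the per-call check disappears depends entirely on inlining and constant propagation, which you can only confirm by inspecting the generated assembly.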
When debugging a function I usually use
library(debug)
mtrace(FunctionName)
FunctionName(...)
And that works quite well for me.
However, sometimes I am trying to debug a complex function that I don't know. In such cases, I find that inside that function there is another function that I would like to "go into" ("debug"), so as to better understand how the entire process works.
So one way of doing it would be to do:
library(debug)
mtrace(FunctionName)
FunctionName(...)
# when finding a function I want to debug inside the function, run again:
mtrace(FunctionName.SubFunction)
The question is - is there a better/smarter way to do interactive debugging (as I have described) that I might be missing?
p.s: I am aware that there were various questions asked on this subject on SO (see here). Yet I wasn't able to come across a similar question/solution to what I asked here.
Not entirely sure about the use case, but when you encounter a problem, you can call the function traceback(). That will show the path of your function call through the stack until it hit its problem. You could, if you were inclined to work your way down from the top, call debug on each of the functions given in the list before making your function call. Then you would be walking through the entire process from the beginning.
Here's an example of how you could do this in a more systematic way, by creating a function to step through it:
walk.through <- function() {
tb <- unlist(.Traceback)
if(is.null(tb)) stop("no traceback to use for debugging")
assign("debug.fun.list", matrix(unlist(strsplit(tb, "\\(")), nrow=2)[1,], envir=.GlobalEnv)
lapply(debug.fun.list, function(x) debug(get(x)))
print(paste("Now debugging functions:", paste(debug.fun.list, collapse=",")))
}
unwalk.through <- function() {
lapply(debug.fun.list, function(x) undebug(get(as.character(x))))
print(paste("Now undebugging functions:", paste(debug.fun.list, collapse=",")))
rm(list="debug.fun.list", envir=.GlobalEnv)
}
Here's a dummy example of using it:
foo <- function(x) { print(1); bar(2) }
bar <- function(x) { x + a.variable.which.does.not.exist }
foo(2)
# now step through the functions
walk.through()
foo(2)
# undebug those functions again...
unwalk.through()
foo(2)
IMO, that doesn't seem like the most sensible thing to do. It makes more sense to simply go into the function where the problem occurs (i.e. at the lowest level) and work your way backwards.
I've already outlined the logic behind this basic routine in "favorite debugging trick".
I like options(error=recover) as detailed previously on SO. Things then stop at the point of error and one can inspect.
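A sketch of that workflow, reusing the dummy foo/bar example from above (the prompt text may vary slightly by R version):

options(error = recover)

foo <- function(x) { print(1); bar(2) }
bar <- function(x) { x + a.variable.which.does.not.exist }

foo(2)
# Error in bar(2) : object 'a.variable.which.does.not.exist' not found
#
# Enter a frame number, or 0 to exit
#
# 1: foo(2)
# 2: bar(2)
#
# Selecting 2 drops you into bar()'s frame, where you can run ls(), inspect x, etc.

# Restore the default error behaviour when done:
options(error = NULL)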
(I'm the author of the 'debug' package where 'mtrace' lives)
If the definition of 'SubFunction' lives outside 'MyFunction', then you can just mtrace 'SubFunction' and don't need to mtrace 'MyFunction'. And functions run faster if they're not 'mtrace'd, so it's good to mtrace only as little as you need to. (But you probably know those things already!)
If 'SubFunction' is only defined inside 'MyFunction', one trick that might help is to use a conditional breakpoint in 'MyFunction'. You'll need to 'mtrace( MyFunction)', then run it, and when the debugging window appears, find out what line 'SubFunction' is defined on. Say it's line 17. Then the following should work:
D(n)> bp( 1, F) # don't bother showing the window for MyFunction again
D(n)> bp( 18, { mtrace( SubFunction); FALSE})
D(n)> go()
It should be clear what this does (or it will be if you try it).
The only downsides are: the need to do it again whenever you change the code of 'MyFunction', and the slowing-down that might occur through 'MyFunction' itself being mtraced.
You could also experiment with adding a 'debug.sub' argument to 'MyFunction' that defaults to FALSE. Then, in the code of 'MyFunction', add this line immediately after the definition of 'SubFunction':
if( debug.sub) mtrace( SubFunction)
That avoids any need to mtrace 'MyFunction' itself, but does require you to be able to change its code.
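In other words, something along these lines (a sketch; the function bodies are made up):

library(debug)

MyFunction <- function(x, debug.sub = FALSE) {
  SubFunction <- function(y) {
    y * 2  # the inner code you actually want to step through
  }
  if (debug.sub) mtrace(SubFunction)
  SubFunction(x) + 1
}

MyFunction(3)                    # runs normally, nothing is mtraced
MyFunction(3, debug.sub = TRUE)  # opens the debug window inside SubFunction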
Something like this (yes, this doesn't deal with some edge cases - that's not the point):
int CountDigits(int num) {
int count = 1;
while (num >= 10) {
count++;
num /= 10;
}
return count;
}
What's your opinion about this? That is, using function arguments as local variables.
Both are placed on the stack and are pretty much identical performance-wise; I'm wondering about the best-practices aspects of this.
I feel like an idiot when I add an additional and quite redundant line to that function consisting of int numCopy = num, however it does bug me.
What do you think? Should this be avoided?
As a general rule, I wouldn't use a function parameter as a local processing variable, i.e. I treat function parameters as read-only.
In my mind, intuitively understandable code is paramount for maintainability, and modifying a function parameter to use as a local processing variable tends to run counter to that goal. I have come to expect that a parameter will have the same value in the middle and bottom of a method as it does at the top. Plus, an aptly-named local processing variable may improve understandability.
Still, as @Stewart says, this rule is more or less important depending on the length and complexity of the function. For short simple functions like the one you show, simply using the parameter itself may be easier to understand than introducing a new local variable (very subjective).
Nevertheless, if I were to write something as simple as countDigits(), I'd tend to use a remainingBalance local processing variable in lieu of modifying the num parameter as part of local processing - just seems clearer to me.
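Sketched against the question's example (the local's name is just illustrative):

int CountDigits(int num) {
    int remaining = num;   // local processing variable; num stays read-only
    int count = 1;
    while (remaining >= 10) {
        count++;
        remaining /= 10;
    }
    return count;
}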
Sometimes, I will modify a local parameter at the beginning of a method to normalize the parameter:
void saveName(String name) {
name = (name != null ? name.trim() : "");
...
}
I rationalize that this is okay because:
a. it is easy to see at the top of the method,
b. the parameter maintains its original conceptual intent, and
c. the parameter is stable for the rest of the method
Then again, half the time, I'm just as apt to use a local variable anyway, just to get a couple of extra finals in there (okay, that's a bad reason, but I like final):
void saveName(final String name) {
final String normalizedName = (name != null ? name.trim() : "");
...
}
If, 99% of the time, the code leaves function parameters unmodified (i.e. mutating parameters are unintuitive or unexpected for this code base), then, during that other 1% of the time, dropping a quick comment about a mutating parameter at the top of a long/complex function could be a big boon to understandability:
int CountDigits(int num) {
// num is consumed
int count = 1;
while (num >= 10) {
count++;
num /= 10;
}
return count;
}
P.S. :-)
parameters vs arguments
http://en.wikipedia.org/wiki/Parameter_(computer_science)#Parameters_and_arguments
These two terms are sometimes loosely used interchangeably; in particular, "argument" is sometimes used in place of "parameter". Nevertheless, there is a difference. Properly, parameters appear in procedure definitions; arguments appear in procedure calls.
So,
int foo(int bar)
bar is a parameter.
int x = 5
int y = foo(x)
The value of x is the argument for the bar parameter.
It always feels a little funny to me when I do this, but that's not really a good reason to avoid it.
One reason you might potentially want to avoid it is for debugging purposes. Being able to tell the difference between "scratchpad" variables and the input to the function can be very useful when you're halfway through debugging.
I can't say it's something that comes up very often in my experience - and often you can find that it's worth introducing another variable just for the sake of having a different name, but if the code which is otherwise cleanest ends up changing the value of the variable, then so be it.
One situation where this can come up and be entirely reasonable is where you've got some value meaning "use the default" (typically a null reference in a language like Java or C#). In that case I think it's entirely reasonable to modify the value of the parameter to the "real" default value. This is particularly useful in C# 4 where you can have optional parameters, but the default value has to be a constant:
For example:
public static void WriteText(string file, string text, Encoding encoding = null)
{
// Null means "use the default" which we would document to be UTF-8
encoding = encoding ?? Encoding.UTF8;
// Rest of code here
}
About C and C++:
My opinion is that using the parameter as a local variable of the function is fine because it is a local variable already. Why then not use it as such?
I feel silly too when copying the parameter into a new local variable just to have a modifiable variable to work with.
But I think this is pretty much a personal opinion. Do it as you like. If you feel silly copying the parameter just because of this, that indicates your personality doesn't like it, and then you shouldn't do it.
If I don't need a copy of the original value, I don't declare a new variable.
IMO, mutating parameter values isn't a bad practice in general; it depends on how you're going to use them in your code.
My team coding standard recommends against this because it can get out of hand. To my mind for a function like the one you show, it doesn't hurt because everyone can see what is going on. The problem is that with time functions get longer, and they get bug fixes in them. As soon as a function is more than one screen full of code, this starts to get confusing which is why our coding standard bans it.
The compiler ought to be able to get rid of the redundant variable quite easily, so it has no efficiency impact. It is probably just between you and your code reviewer whether this is OK or not.
I would generally not change the parameter value within the function. If at some point later in the function you need to refer to the original value, you still have it. In your simple case, there is no problem, but if you add more code later, you may refer to 'num' without realizing it has been changed.
The code needs to be as self sufficient as possible. What I mean by that is you now have a dependency on what is being passed in as part of your algorithm. If another member of your team decides to change this to a pass by reference then you might have big problems.
The best practice is definitely to copy the inbound parameters if you expect them to be immutable.
I typically don't modify function parameters, unless they're pointers, in which case I might alter the value that's pointed to.
I think the best-practices of this varies by language. For example, in Perl you can localize any variable or even part of a variable to a local scope, so that changing it in that scope will not have any affect outside of it:
sub my_function
{
my ($arg1, $arg2) = @_; # get the local variables off the stack
local $arg1; # changing $arg1 here will not be visible outside this scope
$arg1++;
local $arg2->{key1}; # only the key1 portion of the hashref referenced by $arg2 is localized
$arg2->{key1}->{key2} = 'foo'; # this change is not visible outside the function
}
Occasionally I have been bitten by forgetting to localize a data structure that was passed by reference to a function, that I changed inside the function. Conversely, I have also returned a data structure as a function result that was shared among multiple systems and the caller then proceeded to change the data by mistake, affecting these other systems in a difficult-to-trace problem usually called action at a distance. The best thing to do here would be to make a clone of the data before returning it*, or make it read-only**.
* In Perl, see the function dclone() in the built-in Storable module.
** In Perl, see lock_hash() or lock_hash_ref() in the built-in Hash::Util module).
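For example, a minimal sketch of the defensive-copy approach (the data and names are made up):

use Storable qw(dclone);

sub get_config {
    my %shared = ( retries => 3, hosts => [ 'a', 'b' ] );
    # Hand back a deep copy so callers can't mutate the shared structure.
    return dclone( \%shared );
}

my $cfg = get_config();
$cfg->{retries} = 99;   # only affects the caller's copy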
This is a minor style question, but every bit of readability you add to your code counts.
So if you've got:
if (condition) then
{
// do stuff
}
else
{
// do other stuff
}
How do you decide if it's better like that, or like this:
if (!condition) then
{
// do other stuff
}
else
{
// do stuff
}
My heuristics are:
Keep the condition positive (less mental calculation when reading it)
Put the most common path into the first block
I prefer to put the most common path first, and I am a strong believer in nesting reduction so I will break, continue, or return instead of elsing whenever possible. I generally prefer to test against positive conditions, or invert [and name] negative conditions as a positive.
if (condition)
return;
DoSomething();
I have found that by drastically reducing the usage of else my code is more readable and maintainable, and when I do have to use else it's almost always an excellent candidate for a more structured switch statement.
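For instance (a made-up sketch of the kind of refactor I mean):

// Instead of a chain of if/else-if tests on the same value...
if (state == OPEN) { handleOpen(); }
else if (state == CLOSED) { handleClosed(); }
else { handleUnknown(); }

// ...a switch makes the structure (and the missing cases) explicit:
switch (state) {
    case OPEN:   handleOpen();    break;
    case CLOSED: handleClosed();  break;
    default:     handleUnknown(); break;
}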
Two (contradictory) textbook quotes:
Put the shortest clause of an if/else on top
--Allen Holub, "Enough Rope to Shoot Yourself in the Foot", p52
Put the normal case after the if rather than after the else
--Steve McConnell, "Code Complete, 2nd ed.", p356
I prefer the first one. The condition should be as simple as possible and it should be fairly obvious which is simpler out of condition and !condition
It depends on your flow. For many functions, I'll use preconditions:
bool MyFunc(variable) {
if (variable != something_i_want)
return false;
// a large block of code
// ...
return true;
}
If I need to do something each case, I'll use an if (positive_clause) {} else {} format.
If the code is to check for an error condition, I prefer to put that code first, and the "successful" code second; conceptually, this keeps a function call and its error-checking code together, which makes sense to me because they are related. For example:
if (!some_function_that_could_fail())
{
// Error handling code
}
else
{
// Success code
}
I agree with Oli on using a positive if clause when possible.
Just please never do this:
if (somePositiveCondition)
else {
//stuff
}
I used to see this a lot at one place I worked and used to wonder if one of the coders didn't understand how not works...
When I am looking at data validation, I try to make my conditions "white listing" - that is, I test for what I will accept:
if DataIsGood() then
DoMyNormalStuff
else
TakeEvasiveAction
Rather than the other way around, which tends to degenerate into:
if SomeErrorTest then
TakeSomeEvasiveAction
else if SomeOtherErrorCondition then
CorrectMoreStupidUserProblems
else if YetAnotherErrorThatNoOneThoughtOf then
DoMoreErrorHandling
else
DoMyNormalStuff
I know this isn't exactly what you're looking for, but ... A lot of developers use a "guard clause", that is, a negative "if" statement that breaks out of the method as soon as possible. At that point, there is no "else" really.
Example:
if (blah == false)
{
return; // perhaps with a message
}
// do rest of code here...
There are some hard-core C/C++/assembly guys out there who will tell you that you're destroying your CPU!!! (In many cases, processors favor the "true" branch and try to prefetch the next thing to do, so theoretically any "false" condition will flush the pipeline and run microseconds slower.)
In my opinion, we are at the point where "better" (more understandable) code wins out over microseconds of CPU time.
I think that for a single variable the not operator is simple enough and naming issues start being more relevant.
Never name a variable not_X; if need be, use a thesaurus and find an antonym. I've seen plenty of awful code like
if (not_dead) {
} else {
}
instead of the obvious
if (alive) {
} else {
}
Then you can sanely use (very readable, no need to invert the code blocks)
if (!alive) {
} else {
}
If we're talking about more variables I think the best rule is to simplify the condition. After a while projects tend to get conditions like:
if (dead || (!dead && sleeping)) {
} else {
}
Which translates to
if (dead || sleeping) {
} else {
}
Always pay attention to what conditions look like and how to simplify them.
Software is knowledge capture. You're encoding someone's knowledge of how to do something.
The software should fit what's "natural" for the problem. When in doubt, ask someone else and see what people actually say and do.
What about the situation where the "common" case is to do nothing? What then?
if( common ) {
// pass
}
else {
// great big block of exception-handling folderol
}
Or do you do this?
if( ! common ) {
// great big block of exception-handling folderol
}
The "always positive" rule isn't really what you want first. You want to look at rules more like the following.
Always natural -- it should read like English (or whatever the common language in your organization is.)
Where possible, common cases first -- so they appear common.
Where possible use positive logic; negative logic can be used where it's commonly said that way or where the common case is a do-nothing.
If one of the two paths is very short (1 to 10 lines or so) and the other is much longer, I follow the Holub rule mentioned here and put the shorter piece of code in the if. That makes it easier to see the if/else flow on one screen when reviewing the code.
If that is not possible, then I structure to make the condition as simple as possible.
For me it depends on the condition, for example:
if (!PreserveData.Checked)
{ resetfields();}
I tend to talk to myself about what I want the logic to be, and code it to the little voice in my head.
You can usually make the condition positive without switching around the if / else blocks.
Change
if (!widget.enabled()) {
// more common
} else {
// less common
}
to
if (widget.disabled()) {
// more common
} else {
// less common
}
Intel Pentium branch prediction pre-fetches instructions for the "if" case. If it instead follows the "else" branch, it has to flush the instruction pipeline, causing a stall.
If you care a lot about performance: put the most likely outcome in the 'if' clause.
Personally i write it as
if (expected)
{
//expected path
}
else
{
//fallback other odd case
}
If you have both true and false conditions then I'd opt for a positive conditional - This reduces confusion and in general I believe makes your code easier to read.
On the other hand, if you're using a language such as Perl, and particularly if your false condition is either an error condition or the most common condition, you can use the 'unless' structure, which executes the code block unless the condition is true (i.e. the opposite of if):
unless ($foo) {
$bar;
}
First of all, let's put aside situations when it is better to avoid using "else" in the first place (I hope everyone agrees that such situations do exist and determining such cases probably should be a separate topic).
So, let's assume that there must be an "else" clause.
I think that readability/comprehensibility imposes at least three key requirements or rules, which unfortunately often compete with each other:
The shorter the first block (the "if" block), the easier it is to grasp the entire "if-else" construct. When the "if" block is long enough, it becomes way too easy to overlook the existence of the "else" block.
When the "if" and "else" paths are logically asymmetric (e.g. "normal processing" vs. "error processing"), in a standalone "if-else" construct it does not really matter much which path is first and which is second. However, when there are multiple "if-else" constructs in proximity to each other (including nesting), and when all those "if-else" constructs have asymmetry of the same kind - that's when it is very important to arrange those asymmetric paths consistently.
Again, it can be "if ... normal path ... else ... abnormal path" for all, or "if ... abnormal path ... else ... normal path" for all, but it should not be a mix of these two variants.
With all other conditions equal, putting the normal path first is probably more natural for most human beings (I think it's more about psychology than aesthetics :-).
An expression that starts with a negation usually is less readable/comprehensible than an expression that doesn't.
So, we have these three competing requirements/rules, and the real question is: which of them is more important than the others? For Allen Holub the rule #1 is probably the most important one. For Steve McConnell - it is the rule #2. But I don't think that you can really choose only one of these rules as a single guideline.
I bet you've already guessed my personal priorities here (from the way I ordered the rules above :-).
My reasons are simple:
The rule #1 is unconditional and impossible to circumvent. If one of the blocks is so long that it runs off the screen - it must become the "else" block. (No, it is not a good idea to create a function/method mechanically just to decrease the number of lines in an "if" or "else" block! I am assuming that each block already has a logically justifiable minimum amount of lines.)
The rule #2 involves a lot of conditions: multiple "if-else" constructs, all having asymmetry of the same kind, etc. So it just does not apply in many cases.
Also, I often observe the following interesting phenomenon: when the rule #2 does apply and when it is used properly, it actually does not conflict with the rule #1! For example, whenever I have a bunch of "if-else" statements with "normal vs. abnormal" asymmetry, all the "abnormal" paths are shorter than "normal" ones (or vice versa). I cannot explain this phenomenon, but I think that it's just a sign of good code organization. In other words, whenever I see a situation when rules #1 and #2 are in conflict, I start looking for "code smells" and more often than not I do find some; and after refactoring - tada! no more painful choosing between rule #1 and rule #2, :-)
Finally, the rule #3 has the smallest scope and therefore is the least critical.
Also, as mentioned here by other colleagues, it is often very easy to "cheat" with this rule (for example, to write "if(disabled)..." instead of "if(!enabled)...").
I hope someone can make some sense of this opus...
As a general rule, if one is significantly larger than the other, I make the larger one the if block.
put the common path first
turn negative checking into positive checking (!full == empty)
I always keep the most likely first.
In Perl I have an extra control structure to help with that. The inverse of if.
unless ($alive) {
go_to_heaven;
} else {
say "MEDIC";
}
You should always put the most likely case first. Besides being more readable, it is faster. This also applies to switch statements.
I'm horrible when it comes to how I set up if statements. Basically, I set it up based on what exactly I'm looking for, which leads everything to be different.
if (userinput == null) {
explodeViolently();
} else {
actually do stuff;
}
or perhaps something like
if (1 + 1 == 2) {
do stuff;
} else {
explodeViolently();
}
Which section of the if/else statement actually does things for me is a bad habit of mine.
I generally put the positive result (so the method) at the start so:
if(condition)
{
doSomething();
}
else
{
System.out.println("condition not true")
}
But if the condition has to be false for the method to be used, I would do this:
if(!condition)
{
doSomething();
}
else
{
System.out.println("condition true");
}
If you must have multiple exit points, put them first and make them clear:
if TerminatingCondition1 then
Exit
if TerminatingCondition2 then
Exit
Now we can progress with the usual stuff:
if NormalThing then
DoNormalThing
else
DoAbnormalThing