Are there any performance differences between using if-else and case statements when handling multiple conditions?
Which is preferred?
Use the one that's most readable in the given context.
In some languages, like C, switch may be faster because it's usually implemented with a jump table. Modern compilers are sometimes smart enough to use one for a chain of ifs as well, though.
And anyway, it probably won't matter; micro-optimizations are (almost) never useful.
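For illustration, here is a hedged sketch (not from the answers above) of the kind of switch a C compiler can lower to a jump table; whether it actually does depends on the compiler and on how dense the case values are:

#include <stdio.h>

/* Dense, contiguous case values: a good jump-table candidate. */
const char *day_name(int day)
{
    switch (day) {
    case 0: return "Sun";
    case 1: return "Mon";
    case 2: return "Tue";
    case 3: return "Wed";
    case 4: return "Thu";
    case 5: return "Fri";
    case 6: return "Sat";
    default: return "???";
    }
}

int main(void)
{
    printf("%s\n", day_name(3)); /* prints "Wed" */
    return 0;
}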
When you have more than one if/else, I recommend switch/select. Then again, it doesn't always work. Suppose you have something like this (not recommended, for illustration only):
If a > 0 And b > 0 Then
    ' blabla
ElseIf b = -5 Then
    ' blabla2
ElseIf a = -3 And b = 6 Then
    ' blabla3
End If
Using a switch/select is NOT the way to go. However, when querying for a specific value of a single variable, like this:
Select Case a
    Case 1
        ' blabla
    Case 2
        ' blabla2
    Case 3
        ' blabla3
    Case 4
        ' blabla4
    Case Else
        ' fallback
End Select
In that case, I highly recommend it because it is more readable for other people.
Some programming languages restrict when you can use switch/case statements. For example, in many C-like languages the case values must be constant integers known at compile time.
Performance characteristics may differ between the two techniques, but you're unlikely to be able to predict in advance what they are. If performance is really critical for this code in your application, make sure you profile both approaches before deciding, as the answers may surprise you.
Normally, performance differences will be negligible, and you should therefore choose the most readable, understandable and maintainable code, as you should in pretty much any other programming situation.
case or switch statements are really just special cases of "if .. elseif..." structures that you can use when the same object is being compared to a different simple value in every branch, and that is all that is being done. The nice thing about using them is that most compilers can implement them as jump tables, so effectively the entire 200 (or however many) branch checks can be implemented as a single table indexing operation, and a jump.
For that reason, you'd want to use a case statement when you can, and when you have a fairly large number of branches.
The larger the number of "elseif"s, the more attractive a case statement is.
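To make the jump-table idea concrete, here is a rough sketch of the structure a compiler effectively builds; the handler names are invented, but the shape is real: N comparisons collapse into one bounds check plus one indexed jump.

#include <stdio.h>

static void handle_add(void)  { puts("add");  }
static void handle_sub(void)  { puts("sub");  }
static void handle_mul(void)  { puts("mul");  }
static void handle_halt(void) { puts("halt"); }

/* Hand-built jump table: opcode -> handler, the same structure a
   compiler may emit for a dense switch over opcode values 0..3. */
static void (*const handlers[4])(void) = {
    handle_add, handle_sub, handle_mul, handle_halt
};

void dispatch(unsigned opcode)
{
    if (opcode < 4)          /* the bounds check the compiler also emits */
        handlers[opcode]();  /* one indexed call instead of N comparisons */
}

int main(void)
{
    dispatch(2); /* prints "mul" */
    return 0;
}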
Case statements are generally preferred for readability and are generally faster if there is any speed difference, but this does not apply to every possible environment.
You could probably write a test that shows which is faster for your environment, but be careful with caching and compiler optimizations.
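Such a test might look roughly like the sketch below (the workload is a toy assumption; measure your real code, and run each variant more than once so caches are warm):

#include <stdio.h>
#include <time.h>

/* Accumulate results in a volatile so the compiler cannot delete
   the work as dead code. */
static volatile unsigned long sink;

static unsigned long with_switch(int x)
{
    switch (x & 3) {
    case 0: return 10;
    case 1: return 20;
    case 2: return 30;
    default: return 40;
    }
}

static unsigned long with_ifs(int x)
{
    int v = x & 3;
    if (v == 0) return 10;
    else if (v == 1) return 20;
    else if (v == 2) return 30;
    else return 40;
}

static void bench(const char *name, unsigned long (*f)(int))
{
    clock_t t0 = clock();
    for (int i = 0; i < 100000000; i++)
        sink += f(i);
    printf("%s: %.2fs\n", name, (double)(clock() - t0) / CLOCKS_PER_SEC);
}

int main(void)
{
    /* Run each variant twice: the second pass sees warm caches. */
    bench("switch ", with_switch);
    bench("if/else", with_ifs);
    bench("switch ", with_switch);
    bench("if/else", with_ifs);
    return 0;
}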
I will add to some of the answers here. This is not a performance question, and if you are really concerned about performance... write a test and see which is faster.
However, this should be a question about which is proper to use, not which is better. If you have multiple if/else statements then do yourself a favor and use a case statement. If it is a simple if/else then use an if/else. You'll thank yourself later.
I've been working on this project for quite some time and it has gotten to be about 2000 lines long. It was done in such a way that it just worked, but would be an absolute nightmare for someone (apart from me) to read. So I set out to modularise the code and make it generally easier to understand. In doing so, it is now nearly 3000 lines!
It accomplishes the same goal in the end, but I made the flow of operations more intuitive and the code easier to modify (in the previous version you would struggle to alter anything and have it still work).
So my question is this: which is better? I often hear people say that if you can do the same thing in fewer lines then it tends to be better, but the programmer-friendly aspect of it is important as well.
I might actually see whether the smaller one runs faster by timing them; that might be interesting. I'm pretty sure the second version is bigger because of the new design, not just added whitespace.
It depends.
When performance matters, some obscure code may be required.
Otherwise, more understandable code is better, especially when working with other people. After all, code is read much more often than it is written.
More code and more clear. Clarity should always come first. Code has a twofold function, for a machine to execute it and for humans to read and understand it. The former is nearly useless without the latter.
Undoubtedly, the code should be easy enough to read that it doesn't hamper future changes.
For any compiled language, things like comments, whitespace and variable names don't matter in the end, so use them as free tools to promote clarity. For bits of code that run faster but look more confusing, use this to your advantage: a comment explaining them costs nothing at run time.
Also consider how many times the piece of code is passed over at run time and, if it's a lot, how much extra time the new code structure will take. In most cases, the computer will actually run the same number of operations for two different structures of code, for example:

// Code Snippet 1: a ternary expression...
foo = (bar == true) ? 'Yes' : 'No';

// Code Snippet 2: the equivalent if/else...
if (bar == true) {
    foo = 'Yes';
} else {
    foo = 'No';
}
Hope that helped!
Clarity is important so people can validate and/or modify the code correctly, but that doesn't necessarily mean "more" code.
My feeling is that often less code is clearer.
Regardless, the comments should be clear and to-the-point, not blathering on and being ignored.
I've always heard single exit-point functions described as a bad way to code because you lose readability and efficiency. I've never heard anybody argue the other side.
I thought this had something to do with CS but this question was shot down at cstheory stackexchange.
There are different schools of thought, and it largely comes down to personal preference.
One is that it is less confusing if there is only a single exit point: you have a single path through the method and you know where to look for the exit. On the minus side, if you use indentation to represent nesting, your code can end up massively indented to the right, and it becomes very difficult to follow all the nested scopes.
Another is that you can check preconditions and exit early at the start of a method, so that you know in the body of the method that certain conditions are true, without the entire body of the method being indented 5 miles off to the right. This usually minimises the number of scopes you have to worry about, which makes code much easier to follow.
A third is that you can exit anywhere you please. This used to be more confusing in the old days, but now that we have syntax-colouring editors and compilers that detect unreachable code, it's a lot easier to deal with.
I'm squarely in the middle camp. Enforcing a single exit point is a pointless or even counterproductive restriction IMHO, while exiting at random all over a method can sometimes lead to messy, difficult-to-follow logic, where it becomes difficult to see whether a given bit of code will or won't be executed. But "gating" your method makes it possible to significantly simplify the body of the method.
My general recommendation is that return statements should, when practical, either be located before the first code that has any side-effects, or after the last code that has any side-effects. I would consider something like:
if (argument == NULL)          /* null guard */
    return ERR_NULL_ARGUMENT;

... process the non-null argument ...

if (ok)
    return 0;
else
    return ERR_NOT_OK;
clearer than:
int result;

if (argument != NULL)          /* non-null */
{
    ... process the non-null argument ...
    ... set result appropriately ...
}
else
    result = ERR_NULL_ARGUMENT;

return result;
If a certain condition should prevent a function from doing anything, I prefer to early-return out of the function at a spot above the point where the function would do anything. Once the function has undertaken actions with side-effects, though, I prefer to return from the bottom, to make clear that all side-effects must be dealt with.
With most anything, it comes down to the needs of the deliverable. In "the old days", spaghetti code with multiple return points invited memory leaks, since coders who preferred that method typically did not clean up well. There were also issues with some compilers "losing" the reference to the return variable as the stack was popped during the return, in the case of returning from a nested scope. The more general problem was one of re-entrant code, which attempts to have the calling state of a function be exactly the same as its return state. Mutators in OOP violated this, and the concept was shelved.
There are deliverables, most notably kernels, which need the speed that multiple exit points provide. These environments normally have their own memory and process management, so the risk of a leak is minimized.
Personally, I like to have a single point of exit, since I often use it to insert a breakpoint on the return statement and inspect how the code determined that solution. I could just go to the entrance and step through, which I do with extensively nested and recursive solutions. As a code reviewer, multiple returns in a function require a much deeper analysis, so if you're doing it to speed up the implementation, you're robbing Peter to pay Paul. More time will be required in code reviews, invalidating the presumption of an efficient implementation.
-- 2 cents
Please see this doc for more details: NISTIR 5459
Single entry and exit points were an original concept of structured programming, as opposed to step-by-step spaghetti coding. There is a belief that multiple exit-point functions require more code, since you have to do proper clean-up of the memory allocated for variables. Consider a scenario where a function allocates resources: exiting the function early without proper clean-up would result in resource leaks. In addition, duplicating clean-up before every exit creates a lot of redundant code.
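As a hedged illustration of that clean-up problem (the function, file name, and buffer size are invented for the sketch), a common C idiom funnels every error path through one clean-up block at the bottom:

#include <stdio.h>
#include <stdlib.h>

/* Sketch: a single exit point centralises clean-up of two resources. */
int process_file(const char *path)
{
    int rc = -1;                 /* pessimistic default */
    FILE *f = NULL;
    char *buf = NULL;

    f = fopen(path, "rb");
    if (f == NULL)
        goto cleanup;            /* early failure: nothing to free yet */

    buf = malloc(4096);
    if (buf == NULL)
        goto cleanup;            /* the file is still closed below */

    if (fread(buf, 1, 4096, f) == 0)
        goto cleanup;            /* read failed or empty file */

    rc = 0;                      /* success */

cleanup:                         /* the one exit point: all paths land here */
    free(buf);                   /* free(NULL) is a no-op */
    if (f != NULL)
        fclose(f);
    return rc;
}

int main(void)
{
    return process_file("example.bin") == 0 ? 0 : 1;
}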
I used to be an advocate of single-exit style. My reasoning came mostly from pain...
Single-exit is easier to debug.
Given the techniques and tools we have today, this is a far less reasonable position to take, as unit tests and logging can make single-exit unnecessary. That said, when you need to watch code execute in a debugger, code containing multiple exit points is much harder to understand and work with.
This became especially true when you needed to interject assignments in order to examine state (replaced with watch expressions in modern debuggers). It was also too easy to alter the control flow in ways that hid the problem or broke the execution altogether.
Single-exit methods were easier to step through in the debugger, and easier to tease apart without breaking the logic.
In my view, the advice to exit a function (or other control structure) at only one point often is oversold. Two reasons typically are given to exit at only one point:
Single-exit code is supposedly easier to read and debug. (I admit that I don't think much of this reason, but it is given. What is substantially easier to read and debug is single-entry code.)
Single-exit code links and returns more cleanly.
The second reason is subtle and has some merit, especially if the function returns a large data structure. However, I wouldn't worry about it too much, except ...
If you're a student, you want to earn top marks in your class, so do what the instructor prefers. He probably has a good reason from his perspective; at the very least, you'll learn his perspective. This has value in itself.
Good luck.
The answer is very context-dependent. If you are making a GUI and have a function which initializes APIs and opens windows at the start of your main, it will be full of calls which may throw errors, each of which would cause the instance of the program to close. If you used nested if statements and indentation, your code could quickly become very skewed to the right. Returning on an error at each stage might be better, and actually more readable, while being just as easy to debug with a few flags in the code.
If, however, you are testing different conditions and returning different values depending on the results in your method it may be much better practice to have a single exit point. I used to work on image processing scripts in MATLAB which could get very large. Multiple exit points could make the code extremely hard to follow. Switch statements were much more appropriate.
The best thing to do would be to learn as you go. If you are writing code for something try finding other people's code and seeing how they implement it. Decide which bits you like and which bits you don't.
If you feel like you need multiple exit points in a function, the function is too large and is doing too much.
I would recommend reading the chapter about functions in Robert C. Martin's book, Clean Code.
Essentially, you should try to write functions with 4 lines of code or less (a rough before/after sketch follows the notes below).
Some notes from Mike Long’s Blog:
The first rule of functions: they should be small
The second rule of functions: they should be smaller than that
Blocks within if statements, while statements, for loops, etc. should be one line long
…and that line of code will usually be a function call
There should be no more than one or maybe two levels of indentation
Functions should do one thing
Function statements should all be at the same level of abstraction
A function should have no more than 3 arguments
Output arguments are a code smell
Passing a boolean flag into a function is truly awful. You are by definition doing two things in the function.
Side effects are lies.
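As a rough sketch of what those rules push you toward (all names here are invented for illustration), compare one function that mixes levels of abstraction with the same logic teased apart:

#include <stdio.h>

/* Before: one function mixing iteration, arithmetic, and output. */
void report_before(const int *data, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += data[i];
    printf("sum=%d avg=%d\n", sum, n > 0 ? sum / n : 0);
}

/* After: each helper does one thing, at one level of abstraction. */
static int sum_of(const int *data, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += data[i];
    return sum;
}

static int average_of(const int *data, int n)
{
    return n > 0 ? sum_of(data, n) / n : 0;
}

void report_after(const int *data, int n)
{
    printf("sum=%d avg=%d\n", sum_of(data, n), average_of(data, n));
}

int main(void)
{
    int data[] = { 1, 2, 3, 4 };
    report_before(data, 4);
    report_after(data, 4);
    return 0;
}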
How can the repetition of a very short development cycle help to remove bugs from your software? What bugs is TDD most effective in catching, when implemented correctly? And why?
Thanks in advance!
TDD forces you to think from the perspective of "consuming" the code you are going to write. This point of view helps to place you (the developer) into a position where you need to think about how your API will be structured as well as how you would verify the requirements of the implementation.
TDD helps identify defects in areas like:
Requirements. Is it clear what the code will need to do? Is it possible to verify the invariants or end effects of the code? Are the success criteria defined in the requirements, or are they vague or absent?
Ease of use. Can you effectively use the code you plan on writing to achieve the kinds of things that are needed by end users, or by other code that it will integrate with in the future?
Testability. Is the code verifiable based on the accessible data or objects in the design? Is it possible to confirm that things will function as they should?
Edge cases. It's often easier to identify and respond to edge cases by recognizing their existence up front. When edge cases crop up late in the game, there's an inclination to try to "force" the existing design to accommodate them rather than rethinking the design.
Exception handling. When you start writing test cases, you begin to notice areas where you may want to be able to respond to errors or exceptional conditions. This can help you plan your exception-handling strategy: the kinds of exceptions to throw, what information to include, and so on.
TDD also helps to improve the level of coverage in tests because it brings testing to the foreground, rather than making it an "after the fact" activity. When testing happens last, it is most prone to being omitted or given short shrift due to time and budget constraints, or due to the natural drop in enthusiasm and motivation on the part of the developer.
Design "bugs": if you're generally doing TDD, you naturally end up with a testable design. In turn, that tends to reduce coupling etc - leading to a code base which is simply easier to work with.
Also, I've found TDD can make it easier to think about corner cases in certain situations - but the design benefit is more important, IMO.
The null- or zero-valued parameter case is for me the bug most differentially caught by TDD. I tend to write my tests first with this case, just as a way of flushing out the API, without regard for the value: "Oh, just toss a null in there; we'll put a real value in the next test." So my method is initially written to handle that particular edge case, and repeatedly running that test (along with all the others) throughout the red-green-refactor process keeps that edge case working right. Before using TDD, I would forget about null or zero parameters fairly frequently; now, without really trying, they're handled as a natural consequence of the way I apply TDD.
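A minimal sketch of that habit (the word_count function and its contract are invented for illustration): the NULL test is written first and keeps passing through every red-green-refactor cycle.

#include <assert.h>
#include <stdio.h>

/* Hypothetical function under test: counts space-separated words,
   returning 0 for a NULL input -- the edge case the first test forces. */
static int word_count(const char *s)
{
    if (s == NULL)
        return 0;
    int count = 0, in_word = 0;
    for (; *s != '\0'; s++) {
        if (*s != ' ' && !in_word) { in_word = 1; count++; }
        else if (*s == ' ')        { in_word = 0; }
    }
    return count;
}

int main(void)
{
    /* First test, written before the implementation: just toss a NULL in. */
    assert(word_count(NULL) == 0);

    /* Later tests with real values, added in subsequent red-green cycles. */
    assert(word_count("") == 0);
    assert(word_count("one two three") == 3);

    puts("all tests passed");
    return 0;
}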
Codebase size has a lot to do with the complexity of a software system (the larger the size, the higher the costs for maintenance and extensions). A way to measure codebase size is the simple 'lines of code (LOC)' metric (see also the blog entry 'implications of codebase-size').
I wondered how many of you out there are using this metric as part of a retrospective to create awareness (for removing unused functionality or dead code). I think creating awareness that more lines of code mean more complexity in maintenance and extension can be valuable.
I am not taking the LOC as a fine-grained metric (on method or function level), but on subcomponent or complete product level.
I find it a bit useless. Some kinds of functions, user input handling for example, are going to be a bit long-winded no matter what. I'd much rather use some form of complexity metric. Of course, you can combine the two, and/or any other metrics that take your fancy. All you need is a good tool. I use Source Monitor (with which I have no relationship other than being a satisfied user), which is free and can give you both LOC and complexity metrics.
I use SM when writing code to make me notice methods that have become too complex; I then go back and take a look at them. About half the time I say: OK, that NEEDS to be that complicated. What I'd really like is a (free) tool as good as SM which also supports a tag list of some sort that says "ignore methods X, Y & Z; they need to be complicated". But I guess that could be dangerous, which is why I have so far not suggested the feature to SM's author.
I'm thinking it could be used to reward the team when the LOC decreases (assuming they are still producing valuable software and readable code...).
Not always true. While it is usually preferable to have a low LOC, it doesn't mean the code is any less complex. In fact, it's usually more so: code that's been optimized to take the minimal number of cycles can be completely unreadable, even by the person who wrote it, a week later.
As an example from a recent project, imagine setting individual color values (RGBA) from a PNG file. You can do this in a bunch of ways, the most compact being just one line using bit shifts. This is a lot less readable and maintainable than another approach, such as using bit fields, which takes a structure definition and many more lines.
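A hedged sketch of that contrast (assuming, for illustration, a packed 32-bit RGBA value; the exact layout in a real PNG decoder may differ):

#include <stdint.h>
#include <stdio.h>

/* Compact: extract the red channel with a shift and a mask.
   One line, but the magic numbers hide the pixel layout. */
uint8_t red_compact(uint32_t rgba) { return (rgba >> 24) & 0xFF; }

/* Verbose: a bit-field struct documents the layout at the cost of
   more lines. (Bit-field ordering is implementation-defined, which
   is one reason this is a sketch, not portable production code.) */
struct pixel {
    uint32_t a : 8;
    uint32_t b : 8;
    uint32_t g : 8;
    uint32_t r : 8;
};

uint8_t red_readable(struct pixel p) { return p.r; }

int main(void)
{
    struct pixel px = { .r = 128, .g = 64, .b = 32, .a = 255 };
    printf("%u %u\n", (unsigned)red_compact(0x80402010u),
           (unsigned)red_readable(px)); /* both print 128 */
    return 0;
}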
It also depends on the tool doing the LOC calculations. Does it count lines with just a single symbol on them as code (e.g. { and } in C-style languages)? Those definitely don't make the code more complex, but they do make it more readable.
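To see how much those choices matter, here is a minimal sketch of a LOC counter whose skipping rules (blank lines and lone braces don't count) are assumptions, not any standard; pipe a source file into it, e.g. ./loc < main.c:

#include <ctype.h>
#include <stdio.h>

/* Counts lines containing something other than whitespace and lone
   braces. Whether this is the "right" rule is exactly the
   tool-dependent judgement call discussed above. */
int main(void)
{
    char line[1024];
    long loc = 0;

    while (fgets(line, sizeof line, stdin) != NULL) {
        int significant = 0;
        for (const char *p = line; *p != '\0'; p++) {
            if (!isspace((unsigned char)*p) && *p != '{' && *p != '}') {
                significant = 1;
                break;
            }
        }
        loc += significant;
    }
    printf("%ld\n", loc);
    return 0;
}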
Just my two cents.
LOCs are easy to obtain and deliver reasonable information within any non-trivial project. My first step in a new project is always counting LOCs.
Where would Erlang fall on the spectrum of conciseness between, say, Java/.net on the less concise end and Ruby/Python on the more concise end of the spectrum? I have an RSI problem so conciseness is particularly important to me for health reasons.
Conciseness as a language feature is ill-defined and probably not uniform; different languages can be more or less concise depending on the problem.
Erlang, as a functional language, can be very concise, beyond Ruby or Python. Specifically, pattern matching often replaces if statements, and recursion and list comprehensions can replace loops.
For example, Java would have something like this:

String foobar(int number) {
    if (number == 0) {
        return "foo";
    } else if (number == 1) {
        return "bar";
    }
    throw new IllegalArgumentException();
}
while Erlang code would look like:
foobar(0) -> "foo";
foobar(1) -> "bar".
The exception is inherent, because there is no clause for input other than 0 or 1. This is, of course, a problem that lends itself well to Erlang-style development.
In general, anything you can define as a transformation will match a functional language particularly well and can be written very concisely. Of course, many functional-language zealots state that any problem in programming is a transformation.
Erlang allows you to realize functionality in very few lines of code, compared to my experiences in Java and Python. Only Smalltalk or Scheme came near for me in the past. You get only a little overhead, but you typically tend to use speaking identifiers for modules, functions, variables, and atoms; they make the code more readable. And you've got lots of normal, curly, and square braces, so it depends on your keyboard layout how comfortable it will be. You should give it a try.
mue
Erlang is surprisingly concise, especially when you want to achieve performance and reliability.
Erlang is concise even when compared to Haskell:
http://thinkerlang.com/2006/01/01/haskell-vs-erlang-reloaded.html
And is surprisingly fast (and reliable) even when compared to C++:
http://www.erlang.se/euc/06/proceedings/1600Nystrom.ppt
(18x fewer SLOC is no surprise.)
Anyway, it always depends on your preferences and on the goal you want to achieve.
You have to spend some time writing code to understand Erlang's sweet spot versus all the other emerging tools: DHTs, doc stores, map-reduce frameworks, Hadoop, GPUs, Scala, ... If you try to do, say, SIMD-type apps outside the sweet spot, you'll probably end up fighting the paradigm and writing verbose code, whereas if you hit problems that need to scale servers and middleware seamlessly up and down, it flows naturally. (And the rise of Scala in its sweet spot is inevitable too, I think.)
A good thing to look up would be Tim Bray's Wide Finder experiment (distilling big Apache log files) from a couple of years ago, and how he was disappointed with Erlang.
I generally don't recommend putting much store in the Alioth shootout, given that you inevitably end up comparing really good and really bad code, but if you need to put numbers on LOC, Erlang vs. C, Ruby, whatever:
https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/erlang.html