What is the performance of the if-else statement in Python?

I am using if/else in my Python code. I am wondering how many if/else statements I can add without impacting the performance of my script.
After what number of if/else statements does the performance of my script go down?

The number of them doesn't really matter; it's what's inside them that decides performance. The time it takes to evaluate an if/else statement itself is really tiny. Plus, Python's bytecode compiler applies a few tricks to make it even faster, so you really shouldn't worry about it.
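As a rough illustration, here is a minimal timeit sketch (the chain length, values, and the classify name are arbitrary choices of mine): even when every branch of a longish if/elif chain is tested and missed, a call still takes well under a microsecond on typical hardware.
import timeit

def classify(n):
    # A deliberately long if/elif chain of cheap comparisons.
    if n == 0:
        return "zero"
    elif n == 1:
        return "one"
    elif n == 2:
        return "two"
    elif n == 3:
        return "three"
    elif n == 4:
        return "four"
    else:
        return "many"

# Worst case: n = 99 misses every test before reaching the else.
print(timeit.timeit(lambda: classify(99), number=1_000_000))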

Related

If statement vs. while loop

I read somewhere that while loops are faster than if statements.
Doesn't that make
while(something)
{
//the code
break;
}
faster and better than a normal if statement?
If true, why do people still use if statements? <-- I had to use a while loop right there :)
Thanks.
It is generally not faster in natively compiled languages like C, C++, Rust, etc., since optimizing compilers can rewrite this anyway (though a basic compiler may not, resulting in more branches). It is bug-prone due to the break and does not make the intent clear, resulting in less readable code. Put shortly: do not use this code to emulate conditions. Instead, write clean code, check that the compiler does its optimizing job correctly, and only then use micro-optimization to speed up your code, if your profiler shows it is needed and worth it. As Donald Knuth once said: premature optimization is the root of all evil.
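The same conclusion holds in an interpreted language like Python; here is a minimal sketch using the standard dis module to compare what the two forms compile to (the function names are mine):
import dis

def with_if(x):
    if x:
        print("yes")

def with_while(x):
    while x:       # emulating the if...
        print("yes")
        break      # ...with the bug-prone break

# Compare the two disassemblies: the while/break form is at best
# equivalent to the plain if, never faster.
dis.dis(with_if)
dis.dis(with_while)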

Does the order of if-conditions inside a loop influence execution speed?

In order to optimize for execution speed, it is recommended to avoid using if-conditions inside a loop. Imagine there is a case where loop unswitching is not possible, but information (or an estimate) about the frequency of the conditions is available.
Does the order of conditions in an if-else statement influence execution speed?
Assume I know (or estimate) that condition A occurs 80% of the time, B 15%, and C only 5%, and that the conditions take equal time to evaluate. Would it be better to write the loop like this, or does the order make no difference in execution time?
for (i in N){
if (A(i))
foo(i)
else if (B(i))
bar(i)
else
foobar(i)
}
Are there any best practices regarding the order of conditions? Is it language dependent? What if the time to evaluate the conditions is not equal? Would it be best to order the conditions from cheapest to most expensive in that case?
In theory, placing the most likely branch first should be fastest, because fewer conditions need to be checked on average. Note that exchanging the order of the checks is only valid if, for all inputs i, exactly one of the possible conditions is fulfilled; otherwise reordering changes the behavior.
In practice, you will probably not be able to tell the difference in performance if you are using a compiled language, because of branch prediction. A great explanation was provided here. For interpreted languages this aspect has essentially no impact, because the interpreter's own instruction-dispatch overhead dominates, so CPU pipelining effects on your branches can't kick in anyway.
If your language features a switch statement, you should probably use it, because it gives the compiler more information about the branch structure and lets it pull out a few more tricks (such as jump tables).
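For what it's worth, in CPython the chain really is evaluated top to bottom on every iteration, so most-likely-first can be measured directly. A minimal benchmark sketch, reusing the question's 80/15/5 split (the function names and branch bodies are placeholders of mine):
import random
import timeit

random.seed(0)
# ~80% "a", ~15% "b", ~5% "c", as in the question.
data = random.choices(["a", "b", "c"], weights=[80, 15, 5], k=100_000)

def most_likely_first(xs):
    total = 0
    for x in xs:
        if x == "a":
            total += 1
        elif x == "b":
            total += 2
        else:
            total += 3
    return total

def least_likely_first(xs):
    total = 0
    for x in xs:
        if x == "c":
            total += 3
        elif x == "b":
            total += 2
        else:
            total += 1
    return total

print(timeit.timeit(lambda: most_likely_first(data), number=20))
print(timeit.timeit(lambda: least_likely_first(data), number=20))
Python itself has no switch, but a dict dispatch or a match statement (3.10+) plays the same role when the conditions are simple equality tests.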

Difference between two types of iterations

This is a core question.
Please don't answer with regard to syntax or semantics;
the question is: what is the actual difference between
a WHILE loop and a FOR loop? Everything written with a for loop can be done
with a while loop, so why have two loops?
This was asked in a seminar at the University of Cambridge,
so I think we have to explain it in terms of performance overheads and worst-case complexity.
I think we have to go in terms of Floyd-Hoare logic.
As far as performance overheads go, that will depend on the compiler and language you're using.
The major difference between them is that for loops are easier to read when iterating over a collection of something, and while loops are easier to read when a specific condition or boolean flag is being evaluated.
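A trivial Python illustration of that readability split (the list contents and loop bodies are placeholders):
names = ["ada", "grace", "alan"]

# for: iterating over a collection reads naturally.
for name in names:
    print(name)

# while: looping on a condition or flag reads naturally.
attempts = 0
while attempts < 3:
    print("retrying...")
    attempts += 1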
The only difference between the two looping constructs is the ability to do pre-loop initialization and per-iteration updates in the header of the for loop. There is no performance difference between
while (condition) {
...
}
and
for ( ; condition ; ) {
...
}
constructs. The alternative is added for convenience and readability; there are no other implications.
The difference is that a for() loop condenses a number of steps into a single line. Ultimately, all answers will boil down to syntax or semantics in this way, so there can't be an answer that really pleases you.
A while() loop states only the condition under which the loop will terminate, while the for() loop can also stipulate a variable to initialize and a counter to update.
(Actually, for() loops are very versatile and a lot can happen in their declarations to make them pretty fancy, but the above statement sums up almost all of the for() loops you will ever see, even in a production environment.)
One occasionally relevant difference between the two is that a while loop can have a more complex condition imposed on it than a for loop.
The FOR loop is best employed when looping through a collection of predictable size.
This, I guess, would be considered syntactical, but I consider it to refute the notion that the for loop can do anything the while loop can, though the reverse is indeed correct.
There is obviously no need for two loops, you need only one. And indeed, many textbooks consider a language (usually named Imp) which desugars for to while with an initialization statement before the while.
That being said, if you try working out the difference in the loop invariants, and associated rules in Hoare logic for these two loops, they differ just a bit, because of the init block. Since this looks like homework, I'll let you work out the details.
So the answer is: "It makes things just a little easier, on the syntax and proof side, but it's merely cosmetic; for all intents and purposes you really only need one..."
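Python makes that desugaring concrete: its for statement can be rewritten as a while loop plus an initialization step, which is a minimal sketch of exactly the Imp-style translation (iterator protocol only, ignoring for's else clause):
items = ["a", "b", "c"]

# The for statement:
for x in items:
    print(x)

# ...desugared by hand into while plus an init block:
it = iter(items)        # the initialization the for header hides
while True:
    try:
        x = next(it)    # advance; raises StopIteration at the end
    except StopIteration:
        break
    print(x)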
So far none of the other answers point out what I think the really important part of the distinction is for your question: which is probably that they change things a little bit with the Hoare connectives...
Also, note that speculating on performance based on Hoare logic is silly. If you care about the semantics, your professor probably doesn't give a damn or want to hear about performance :-)
Everything can be done with an if-goto statement, so why have any loops at all? They are syntactic sugar to make the developer's life easier and to allow writing better-structured (smaller and more readable) code. It has nothing to do with performance.

When should I consider the performance impact of a function call?

In a recent conversation with a fellow programmer, I asserted that "if you're writing the same code more than once, it's probably a good idea to refactor that functionality such that it can be called once from each of those places."
My fellow programmer buddy instead insisted that the performance impact of making these function calls was not acceptable.
Now, I'm not looking for validation of who was right. I'm simply curious to know if there are situations or patterns where I should consider the performance impact of a function call before refactoring.
"My fellow programmer buddy instead insisted that the performance impact of making these function calls was not acceptable."
...to which the proper answer is "Prove it."
The old saw about premature optimization applies here. Anyone who isn't familiar with it needs to be educated before they do any more harm.
IMHO, if you don't have the attitude that you'd rather spend a couple of hours writing a routine that can be used from both places than ten seconds cutting and pasting code, you don't deserve to call yourself a coder.
Don't even consider the effect of calling overhead if the code isn't in a loop that's being called millions of times, in an area where the user is likely to notice the difference. Once you've met those conditions, go ahead and profile to see if your worries are justified.
Modern compilers of languages such as Java will inline certain function calls anyway. My opinion is that the design is far more important than the few instructions spent on a function call. The only situation I can think of would be writing really fine-tuned code in assembler.
You need to ask yourself several questions:
Cost of time spent on optimizing code vs cost of throwing more hardware at it.
How does this impact maintainability?
How does going in either direction impact your deadline?
Does this really call for optimization when many modern compilers will do it for you anyway? Do not try to outsmart the compiler.
And of course, which will help you sleep better at night? :)
My bet is that there was a time in which the performance cost of a call to an external method or function WAS something to be concerned with, in the same way that the lengths of variable names and such needed to be evaluated with respect to performance implications.
With the monumental increases in processor speed and memory resources in the last two decades, I propose that these concerns are no longer as pertinent as they once were.
We have been able to use long variable names without concern for some time, and the cost of a call to external code is probably negligible in most cases.
There might be exceptions. If you place a function call within a large loop, you may see some impact, depending upon the number of iterations.
I propose that in most cases you will find that refactoring code into discrete function calls will have a negligible impact. There might be occasions in which there IS an impact; however, proper TESTING of the refactoring will reveal this. In that minority of cases, your friend might be correct. Most of the rest of the time, I propose that your friend is clinging a little too closely to practices which pre-date most modern processors and storage media.
You care about function call overhead the same time you care about any other overhead: when your performance profiling tool indicates that it's a problem.
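To put a rough number on "function call overhead" in Python, here is a minimal timeit sketch (CPython does not inline, so this is close to a worst case among mainstream languages; add is a throwaway helper of mine):
import timeit

def add(a, b):
    return a + b

setup = "a, b = 3, 4"
# Same arithmetic, with and without the call.
inline = timeit.timeit("x = a + b", setup=setup, number=1_000_000)
called = timeit.timeit("x = add(a, b)", setup=setup,
                       globals=globals(), number=1_000_000)
print(f"inline: {inline:.3f}s  call: {called:.3f}s")
The gap is real but is measured in tens of nanoseconds per call, which is exactly why a profiler, not intuition, should decide whether it matters.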
For the C/C++ family:
The 'cost' of the call is not important. If it needs to be fast, you just have to make sure the compiler is able to inline it. That means that:
the body must be visible to the compiler;
the body is indeed small enough to be considered an inline candidate;
the method does not require dynamic dispatch.
There are a few ways to break this default ability, for example:
a huge instruction count already at the call site. Even with early inlining, the compiler may pop a trivial function out of line (even though that could mean more instructions and slower execution). Early inlining is the compiler's ability to inline a function early on, when it sees that the call costs more than the inline would.
recursion.
The inline keyword is more or less useless in this era, regarding its original intent. However, many compilers offer a means to restore that meaning with a compiler-specific directive. Using this directive (correctly) helps considerably, but learning how to use it correctly takes time. If in doubt, omit the directive and leave it up to the compiler.
Assuming you are using a modern compiler, there is no excuse to avoid the function, unless you're also willing to go down to assembly for this particular program.
As it stands, and if performance is crucial, you really have two choices:
1) Learn to write well-organized programs for speed. Downside: longer compile times.
2) Maintain a poorly written program.
I prefer 1, any day.
(Yes, I have spent a lot of time writing performance-critical programs.)

Do many old ColdFusion Performance admonitions still apply in CFMX 8?

I have an old standards document that has gone through several iterations and has its roots back in the ColdFusion 5 days. It contains a number of admonitions, primarily for performance, that I'm not so sure are still valid.
Do any of these still apply in ColdFusion MX 8? Do they really make that much difference in performance?
Use compare() or compareNoCase() instead of "is not" when comparing strings
Don't use evaluate() unless there is no other way to write your code
Don't use iif()
Always use struct.key or struct[key] instead of structFind(struct,key)
Don't use incrementValue()
I agree with Tomalak's thoughts on premature optimization. Compare is not as readable as "eq."
That being said there is a great article on the Adobe Developer Center about ColdFusion Performance: http://www.adobe.com/devnet/coldfusion/articles/coldfusion_performance.html
Compare()/CompareNoCase(): comparing case-insensitively is more expensive in Java, too. I'd say this still holds true.
Don't use evaluate(): Absolutely - unless there's no way around it. Most of the time, there is.
Don't use Iif(): I can't say much about this one. I don't use it anyway because the whole DE() stuff that comes with it sucks so much.
struct.key over StructFind(struct,key): I'd suspect that internally both use the same Java method to get a struct item. StructFind() is just one more function call on the stack. I've never used it, since I have no idea what benefit it would bring. I guess it's around for backwards compatibility only.
IncrementValue(): I've never used that one. I mean, it's 16 characters and does not even increment the variable in place, which would have been the only excuse for its existence.
Some of the concerns fall in the "premature optimization" corner, IMHO. Personal preference or coding style apart, I would only start to care about some of the subtleties in a heavy inner loop that bogs down the app.
For instance, if you do not need a case-insensitive string compare, it makes no sense using CompareNoCase(). But I'd say 99.9% of the time the actual performance difference is negligible. Sure you can write a loop that times 100000 iterations of different operations and you'd find they perform differently. But in real-world situations these academic differences rarely make any measurable impact.
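That kind of academic micro-benchmark is easy to reproduce; here is a hedged Python analogy rather than CFML, with casefold() standing in for the case-insensitive compare:
import timeit

setup = "a, b = 'Hello World', 'hello world'"
# A million comparisons each, case-sensitive vs. case-insensitive.
sensitive = timeit.timeit("a == b", setup=setup, number=1_000_000)
insensitive = timeit.timeit("a.casefold() == b.casefold()",
                            setup=setup, number=1_000_000)
print(f"case-sensitive: {sensitive:.3f}s  insensitive: {insensitive:.3f}s")
The case-insensitive version is measurably slower, but the absolute difference over a million iterations is a fraction of a second, which is the point: outside a heavy inner loop, it won't make a measurable impact.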
ColdFusion MX 8 is several times faster than MX 7 by all accounts. When it came out, I read many opinions that simply upgrading for the performance boost, without changing a line of code, was well worth it... It was worth it. With the gains in processing power and memory availability, generally you can do a lot more with less-optimized code.
Does this mean we should stop caring and write whatever? No. Chances are that where we take the most shortcuts, we'll have to grow the system the most.
Finding that fine line between enough engineering and not over-engineering a solution is a fine balance. There's a quote by Knuth, I believe, that says "premature optimization is the root of all evil."
For me, I try to base it on:
how much it will be used,
how expensive that will be across my expected user base,
how critical/central it is to everything,
how often I may be coming back to the code to extend it into other areas
The more these answers lean toward "probably" or "one way or another I will", the more attention I pay to it. If it needs to be readable and a small performance hit results, readability is the better way to go for the sustainability of the code.
Otherwise, I let items fight for my attention while I solve and build things of real(er) value.
The single biggest favour we can do ourselves is to use a framework with any project, no matter how small, and do the small things right from the beginning.
That way there is no sense of dread in going back to work on a system that was originally meant to be a temporary hack but never got re-factored.
