Is there any way I can debug infinite recursion errors fast? I get an infinite recursion error, and I know it happens because a base case is missing, so the function keeps calling itself until the call stack is exceeded. My problem is finding exactly where the base case is missing; doing that by just stepping in and out with the browser's developer-tools debugger would take hours. So is there any way to do it fast and jump exactly to where the base case is missing?
(Pause on exception doesn't work for the recursion.)
No, there is no way to do what you want to do. If you had an infinite while loop, you wouldn't expect the computer to magically tell you when the loop should have ended because the computer has no idea what you want. Similarly, if you have infinite recursion, there's no way for the computer to tell you where the recursion should have ended because the computer has no idea what you want.
There are some general tips for both programming and recursion which will simplify the task.
First, always try to simplify your code as much as possible. If your code is too complicated, try separating it into helper functions, document exactly what each function should do using documentation comments or docstrings, and test each function one at a time. If you're writing a basic factorial function, it's almost inconceivable that you would miss the base case if you have any practice in recursion, because the code is so short. But if you're writing something very complicated (i.e., 10+ lines), it's easy for mistakes to slip through the cracks.
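For instance, a minimal Python sketch of that factorial example, with the base case that prevents infinite recursion:

def factorial(n):
    """Return n! for a non-negative integer n."""
    if n == 0:  # base case: without this line the function recurses forever
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120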
Second, you should make sure that you have a proper notion of the "size of the input" and ensure that whenever you make a recursive call, you make the call on a smaller input. Remember that "size" must be measured by an unsigned integer (aka natural number) - you can't let size go negative, and you can't let size be a fraction. As long as you do these checks, your recursion will always terminate.
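Here is a sketch of that size discipline in Python; the assert is an illustrative self-check, not a required part of recursion:

def sum_list(items):
    """Return the sum of a list, recursing on a strictly smaller list."""
    if not items:  # base case: size 0
        return 0
    rest = items[1:]
    assert len(rest) < len(items)  # size is a natural number and it shrinks
    return items[0] + sum_list(rest)

print(sum_list([1, 2, 3, 4]))  # 10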
I am using if/else in my Python code, and I am wondering how many if/else statements I can add without impacting the performance of my script.
After what number of if/else statements does the performance of my script go down?
The number of them doesn't really matter; it's what's inside them that determines performance. The time it takes to evaluate an if/else condition is really, really tiny, and interpreters and CPUs handle branching efficiently, so you really shouldn't worry about it.
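If you want to convince yourself, here is a rough micro-benchmark sketch using the standard timeit module; the functions are illustrative, and exact numbers will vary by machine and Python version:

import timeit

def few_branches(x):
    if x == 0:
        return 0
    return x * x

def many_branches(x):
    # Several extra elif checks before the same work.
    if x == 0:
        return 0
    elif x == 1:
        return 1
    elif x == 2:
        return 4
    elif x == 3:
        return 9
    return x * x

# Both calls take the final branch; the extra comparisons add only
# nanoseconds per call.
print(timeit.timeit(lambda: few_branches(100), number=1_000_000))
print(timeit.timeit(lambda: many_branches(100), number=1_000_000))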
This is a core question. Please don't answer with regard to syntax or semantics; the question is: what is the actual difference between a WHILE loop and a FOR loop? Everything written with a for loop can be done with a while loop, so why have two loops?
This was asked in a seminar at the University of Cambridge, so I think we have to explain it in terms of performance overheads and worst-case complexity. I think we have to go in terms of Floyd-Hoare logic.
As far as performance overheads go, that will depend on the compiler and language you're using.
The major difference between them is that for loops are easier to read when iterating over a collection of something, and while loops are easier to read when a specific condition or boolean flag is being evaluated.
The only difference between the two looping constructs is the ability to do pre-loop initialization and post-loop changes in the header of the for loop. There is no performance difference between
while (condition) {
...
}
and
for ( ; condition ; ) {
...
}
constructs. The alternative is provided for convenience and readability; there are no other implications.
The difference is that a for() loop condenses a number of steps into a single line. Ultimately, all answers will boil down to syntax or semantics in this way, so there can't be an answer that really pleases you.
A while() loop states only the condition under which the loop keeps running, while the for() loop can additionally declare a loop variable and specify how it is updated on each iteration.
(Actually, for() loops are very versatile and a lot can happen in their declarations to make them pretty fancy, but the above statement sums up almost all of the for() loops you will ever see, even in a production environment.)
One occasionally relevant difference between the two is that, in some languages, a while loop can have a more complex condition imposed on it than a for loop.
The FOR loop is best employed when looping through a collection of predictable size.
This, I guess, would be considered syntactical, but I consider it to refute the notion that the for loop can do anything the while loop can, though the reverse is indeed correct.
There is obviously no need for two loops; you need only one. And indeed, many textbooks consider a language (usually named Imp) which desugars for to while with an initialization statement before the while.
That being said, if you try working out the difference in the loop invariants, and associated rules in Hoare logic for these two loops, they differ just a bit, because of the init block. Since this looks like homework, I'll let you work out the details.
So the answer is: "It makes things just a little easier on the syntax and proof side, but it's merely cosmetic; for all intents and purposes, you really only need one..."
So far none of the other answers point out what I think is the really important part of the distinction for your question: the two loops change things a little bit with the Hoare connectives...
Also, note that speculating on performance based on Hoare logic is silly. If you care about the semantics, your professor probably doesn't give a damn or want to hear about performance :-)
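As an aside, the Imp-style desugaring mentioned above has a direct analogue in Python. Roughly (ignoring the loop's else clause and a few edge cases), a for loop over an iterable behaves like a while loop preceded by an initialization statement:

items = [1, 2, 3]

for x in items:
    print(x)

# ...behaves roughly like:
it = iter(items)  # the initialization statement before the while
while True:
    try:
        x = next(it)
    except StopIteration:
        break
    print(x)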
Everything can be done with if-goto statements, so why have any loops at all? They are syntactic sugar to make developers' lives easier and to allow writing better-structured (smaller and more readable) code. It has nothing to do with performance.
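You can even see this in CPython: the standard dis module shows that a while loop compiles down to comparison and conditional-jump instructions (the exact opcodes vary by CPython version):

import dis

def count_to(n):
    i = 0
    while i < n:
        i += 1
    return i

# Prints the bytecode; look for the conditional jumps that implement the loop.
dis.dis(count_to)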
I've always heard about a single exit-point function as a bad way to code because you lose readability and efficiency. I've never heard anybody argue the other side.
I thought this had something to do with CS but this question was shot down at cstheory stackexchange.
There are different schools of thought, and it largely comes down to personal preference.
One is that it is less confusing if there is only a single exit point - you have a single path through the method and you know where to look for the exit. On the minus side, if you use indentation to represent nesting, your code ends up massively indented to the right, and it becomes very difficult to follow all the nested scopes.
Another is that you can check preconditions and exit early at the start of a method, so that you know in the body of the method that certain conditions are true, without the entire body of the method being indented 5 miles off to the right. This usually minimises the number of scopes you have to worry about, which makes code much easier to follow.
A third is that you can exit anywhere you please. This used to be more confusing in the old days, but now that we have syntax-colouring editors and compilers that detect unreachable code, it's a lot easier to deal with.
I'm squarely in the middle camp. Enforcing a single exit point is a pointless or even counterproductive restriction IMHO, while exiting at random all over a method can sometimes lead to messy, difficult-to-follow logic, where it becomes hard to see whether a given bit of code will or won't be executed. But "gating" your method makes it possible to significantly simplify its body.
My general recommendation is that return statements should, when practical, either be located before the first code that has any side-effects, or after the last code that has any side-effects. I would consider something like:
if (!argument) // null argument?
    return ERR_NULL_ARGUMENT;
... process non-null argument ...
if (ok)
    return 0;
else
    return ERR_NOT_OK;
clearer than:
int result;
if (argument) // non-null
{
    .. process non-null argument ..
    .. set result appropriately ..
}
else
    result = ERR_NULL_ARGUMENT;
return result;
If a certain condition should prevent a function from doing anything, I prefer to early-return out of the function at a spot above the point where the function would do anything. Once the function has undertaken actions with side-effects, though, I prefer to return from the bottom, to make clear that all side-effects must be dealt with.
With most anything, it comes down to the needs of the deliverable. In "the old days", spaghetti code with multiple return points invited memory leaks, since coders who preferred that style typically did not clean up well. There were also issues with some compilers "losing" the reference to the return variable as the stack was popped during the return, in the case of returning from a nested scope. The more general problem was one of re-entrant code, which attempts to have the calling state of a function be exactly the same as its return state. Mutators in OOP violated this, and the concept was shelved.
There are deliverables, most notably kernels, which need the speed that multiple exit points provide. These environments normally have their own memory and process management, so the risk of a leak is minimized.
Personally, I like to have a single point of exit, since I often use it to insert a breakpoint on the return statement and inspect how the code arrived at that solution. I could just go to the entrance and step through, which I do with extensively nested and recursive solutions. As a code reviewer, multiple returns in a function require a much deeper analysis - so if you're doing it to speed up the implementation, you're robbing Peter to pay Paul. More time will be required in code reviews, invalidating the presumption of efficient implementation.
-- 2 cents
Please see this doc for more details: NISTIR 5459
A single entry and exit point was the original concept of structured programming, as opposed to step-by-step spaghetti coding. There is a belief that multiple-exit-point functions require more code, since you have to do proper clean-up of the memory allocated for variables before each exit. Consider a scenario where a function allocates resources; getting out of the function early without proper clean-up would result in resource leaks. In addition, constructing clean-up before every exit creates a lot of redundant code.
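Here is a Python sketch of that clean-up-duplication concern; acquire_resource, release_resource, and validate are hypothetical stand-ins for real resource management:

def acquire_resource():
    return {"open": True}

def release_resource(handle):
    handle["open"] = False

def validate(data):
    return bool(data)

def process(data):
    handle = acquire_resource()
    if data is None:
        release_resource(handle)  # clean-up duplicated at this exit...
        return None
    if not validate(data):
        release_resource(handle)  # ...and at this one...
        return None
    result = [d * 2 for d in data]
    release_resource(handle)  # ...and at the normal exit
    return result

print(process([1, 2, 3]))  # [2, 4, 6]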
I used to be an advocate of single-exit style. My reasoning came mostly from pain...
Single-exit is easier to debug.
Given the techniques and tools we have today, this is a far less reasonable position to take, as unit tests and logging can make single-exit unnecessary. That said, back when you needed to watch code execute in a debugger, it was much harder to understand and work with code containing multiple exit points.
This became especially true when you needed to interject assignments in order to examine state (replaced with watch expressions in modern debuggers). It was also too easy to alter the control flow in ways that hid the problem or broke the execution altogether.
Single-exit methods were easier to step through in the debugger, and easier to tease apart without breaking the logic.
In my view, the advice to exit a function (or other control structure) at only one point often is oversold. Two reasons typically are given to exit at only one point:
Single-exit code is supposedly easier to read and debug. (I admit that I don't think much of this reason, but it is given. What is substantially easier to read and debug is single-entry code.)
Single-exit code links and returns more cleanly.
The second reason is subtle and has some merit, especially if the function returns a large data structure. However, I wouldn't worry about it too much, except ...
If you're a student and want to earn top marks in your class, do what the instructor prefers. He probably has a good reason from his perspective; so, at the very least, you'll learn his perspective. This has value in itself.
Good luck.
The answer is very context-dependent. If you are making a GUI and have a function at the start of your main that initializes APIs and opens windows, it will be full of calls that may throw errors, each of which would cause the instance of the program to close. If you used nested if statements and indentation, your code could quickly become heavily skewed to the right. Returning on an error at each stage might be better and actually more readable, while being just as easy to debug with a few flags in the code.
If, however, you are testing different conditions and returning different values depending on the results in your method it may be much better practice to have a single exit point. I used to work on image processing scripts in MATLAB which could get very large. Multiple exit points could make the code extremely hard to follow. Switch statements were much more appropriate.
The best thing to do would be to learn as you go. If you are writing code for something try finding other people's code and seeing how they implement it. Decide which bits you like and which bits you don't.
If you feel like you need multiple exit points in a function, the function is too large and is doing too much.
I would recommend reading the chapter about functions in Robert C. Martin's book, Clean Code.
Essentially, you should try to write functions with 4 lines of code or less.
Some notes from Mike Long’s Blog:
The first rule of functions: they should be small
The second rule of functions: they should be smaller than that
Blocks within if statements, while statements, for loops, etc should be one line long
…and that line of code will usually be a function call
There should be no more than one or maybe two levels of indentation
Functions should do one thing
Function statements should all be at the same level of abstraction
A function should have no more than 3 arguments
Output arguments are a code smell
Passing a boolean flag into a function is truly awful. You are by definition doing two things in the function (see the sketch after this list).
Side effects are lies.
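For illustration, a minimal Python sketch of that boolean-flag smell; all names here are made up:

def render(page, suppress_footer):
    # The flag means this function is really doing two different things.
    if suppress_footer:
        return f"<body>{page}</body>"
    return f"<body>{page}</body><footer>...</footer>"

# Splitting it yields two functions that each do one thing:

def render_without_footer(page):
    return f"<body>{page}</body>"

def render_with_footer(page):
    return render_without_footer(page) + "<footer>...</footer>"

print(render_with_footer("hello"))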
What is the name of the following method/technique? (I'll try to describe it as best I can; background on "memoization" is probably needed to understand why this technique can be very useful.)
You start some potentially lengthy asynchronous computation and realize that an identical computation has already been started but is not done yet, so you "piggyback" on the first computation. Then, when the first computation ends, it issues not one but two callbacks.
The goal is to not needlessly start a second computation because you know that there's already an identical computation running.
Note that although not entirely dissimilar, I'm not looking for the particular case of caching that "memoization" is: memoization is when you start a computation and find a cached (memoized) result of that same computation, already done, that you can reuse.
Here I'm looking for the name of the technique that is in a way a bit similar to memoization (in that it can be useful for some of the same reasons that memoization is a useful technique), except that it reuses the result of the first computation even if that computation is not done yet at the time you issue the second one.
I've always called that technique "piggybacking" but I don't know if this is correct.
I've actually used this more than once as some kind of "memoization on steroids" and it came in very handy.
I just don't know what the name of this (advanced ?) technique is.
EDIT
Damn, I wanted to comment on epatel's answer but it disappeared. epatel's answer gave me an idea, this technique could be called "lazy memoization" :)
This is just memoization of futures.
Normal "eager" memoization works like this:
f_memo(x):
    critical_section:
        if (exists answers(f,x))
            return answers(f,x)
        else
            a = f(x)
            answers(f,x) = a
            return a
Now if f(x) returns futures instead of actual results, the above code works as is. You get the piggyback effect, i.e. like this:
First thread calls f(3)
There is no stored answer for f(3), so in the critical section there's a call to f(3). f(3) is implemented as returning a future, so the 'answer' is ready immediately; 'a' in the code above is set to the future F and the future F is stored in the answers table
The future F is returned as the "result" of the call f(3), which is potentially still ongoing
Another thread calls f(3)
The future F is found from the table, and returned immediately
Now both threads have a handle to the result of the computation; when they try to read it, they block until the computation is ready. In the post, this communication mechanism was mentioned as being implemented by a callback, presumably in a context where futures are less common.
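For concreteness, here is a runnable sketch of this idea using Python's standard concurrent.futures; the names slow_square and memo_square are illustrative:

import threading
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)
memo = {}  # argument -> Future holding the (possibly ongoing) computation
memo_lock = threading.Lock()

def slow_square(x):
    time.sleep(1)  # stands in for a lengthy computation
    return x * x

def memo_square(x):
    with memo_lock:
        future = memo.get(x)
        if future is None:  # first caller starts the work
            future = executor.submit(slow_square, x)
            memo[x] = future
    return future  # later callers piggyback on the same future

# Two "callers" requesting the same computation: slow_square runs only once.
f1 = memo_square(3)
f2 = memo_square(3)
assert f1 is f2
print(f1.result(), f2.result())  # both block until the single run finishes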
Sounds like a future: http://en.wikipedia.org/wiki/Future_%28programming%29
In some contexts, I've heard this called "Request Merging".
Sounds a little like Lazy Evaluation, but not exactly...