Debugging recursion is so hard

Is there any way I can debug infinite recursion errors fast? I get an infinite recursion error, and I know it happens because a base case is missing, so the function keeps calling itself until the call stack is exceeded. My problem is finding exactly where the base case is missing, and doing that by just stepping in and out with the browser's developer-tools debugger would take hours. So is there a fast way to jump to exactly where the base case is missing?
(Pause on exceptions doesn't work for the recursion.)

No, there is no way to do what you want to do. If you had an infinite while loop, you wouldn't expect the computer to magically tell you when the loop should have ended because the computer has no idea what you want. Similarly, if you have infinite recursion, there's no way for the computer to tell you where the recursion should have ended because the computer has no idea what you want.
There are some general tips for both programming and recursion which will simplify the task.
First, always try to simplify your code as much as possible. If your code is too complicated, try separating it into helper functions, documenting exactly what each one should do using documentation comments or docstrings, and testing each function one at a time. If you're writing a basic factorial function, it's almost inconceivable that you would miss the base case if you have any practice in recursion, because the code is so short. But if you're writing something very complicated (i.e., 10+ lines), it's easy for mistakes to slip through the cracks.
Second, make sure that you have a proper notion of the "size of the input", and ensure that whenever you make a recursive call, you make it on a smaller input. Remember that "size" must be measured by an unsigned integer (a.k.a. a natural number): you can't let size go negative, and you can't let size be a fraction. As long as you do these checks, your recursion will always terminate.
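Here is a minimal sketch of how you can mechanize that check in Python (the same pattern ports directly to JavaScript); the shrinking/size_of names are invented for illustration. The moment a recursive call fails to shrink, or the size goes negative, you get an error pointing at the exact arguments, instead of a blown call stack:
import functools

def shrinking(size_of):
    # Check that each recursive call is made on a strictly smaller,
    # non-negative input. size_of maps a call's arguments to its "size".
    def decorate(f):
        sizes = []  # sizes of the calls currently on the stack
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            size = size_of(*args, **kwargs)
            if size < 0:
                raise ValueError(f"size went negative: {size}, args={args}")
            if sizes and size >= sizes[-1]:
                raise ValueError(
                    f"call did not shrink: {sizes[-1]} -> {size}, args={args}")
            sizes.append(size)
            try:
                return f(*args, **kwargs)
            finally:
                sizes.pop()
        return wrapper
    return decorate

@shrinking(lambda n: n)
def broken_factorial(n):
    return n * broken_factorial(n - 1)  # base case missing on purpose

broken_factorial(5)  # ValueError: size went negative: -1, args=(-1,)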

Related

How to analyze the computational overhead of a Processing Sketch

Please forgive me if the terms are inaccurate or this is the wrong forum.
Essentially, I write sketches in Processing, and I am struggling to find out why my code runs slow.
Sometimes a sketch runs fast and I have no idea why, other than that it has fewer lines of code. Sometimes a different sketch runs slow and I have no idea why.
I am curious if there is a way within the Processing IDE, or maybe a general tool, to determine or analyze which lines of code are causing the sketch to run slow?
Such as a way to isolate "Oh, these lines are causing it to run the slowest. Looks like it's a section of this loop. Maybe I should concentrate on improving this function rather than searching for a needle in a haystack."
Similar to how when my computer is running slow I can use task manager to take a look at which programs are running slow and then adjust. I don't just guess. Or develop an unfounded penchant for quitting one program over another.
I of course could upload my sketch, but this is an example-independent problem I am trying to get better at solving. I would like to find a way to analyze all sketches, or even code in different languages, etc.
If there is no general tool for analyzing a Processing sketch how do people go about this? Surely there must be a better method than trial and error, brute force, intuition, or otherwise. Of course those methods could yield better running code but there must be some formal analysis.
Even if you don't have anything specific to share for Processing, any suggestions on search terms, subjects, or topics would be appreciated, as I have no idea how to proceed other than the brute-force/trial-and-error method.
Thank you,
Without an example code snippet I can only provide a couple of general approaches.
The simplest thing you could do is time sections of code and add print statements (a poor man's profiler). The idea is to take a time snapshot before running a function/block of code you suspect is slow, take another one after, then print the difference. The chunks of code with the largest time difference are what you want to focus on. Here's a minimal example, assuming doSomethingInteresting() is a function that you suspect is slow:
// get the current millis since the sketch started
int before = millis();
// call the function here
doSomethingInteresting();
// get the millis again, after the function and calculate the time difference
int diff = millis() - before;
println("executed in ~" + diff + "ms");
If you use Processing's default Java mode you can use VisualVM to sample your PApplet. In your case you will want to sample CPU usage, and the nice thing about VisualVM is that you will see the results sorted by the slowest function calls (the first ones you should improve), and you can drill down to see what is your code versus what is part of the runtime or other native code.
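If you'd rather see what that sampled, sorted-by-cost output looks like without a GUI tool, here is a minimal sketch using Python's built-in cProfile (the function names are invented for the example); the report lists the most expensive calls first, much like VisualVM's sampled view:
import cProfile
import pstats

def slow_part():
    # Deliberately heavy loop: this should dominate the profile.
    return sum(i * i for i in range(2_000_000))

def fast_part():
    return sum(range(1_000))

def draw():
    slow_part()
    fast_part()

# Run once under the profiler, then print the ten most expensive
# functions sorted by cumulative time, slowest first.
cProfile.run("draw()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)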

Python Recursion Understanding Issue

I'm a freshman in CS, and I am having a little trouble understanding recursion in Python.
I have 3 questions that I want to ask, or to confirm if I am understanding the content correctly.
1: I'm not sure of the purpose of using a base case in recursion. Does the base case work to terminate the recursion somewhere in my program?
2: Suppose I have my recursive call above any other code in my program. Is the program going to run the recursion first and then run the code after it?
3: How do I trace recursion properly with a reasonable correctness rate? I personally think it's really hard to trace recursion, and I can barely find any instruction online.
Thanks for answering my questions.
Yes. The idea is that the recursive call will continue to call itself (with different inputs) until it calls the base case (or one of many base cases). The base case is usually a case of your problem where the solution is trivial and doesn't rely on any other parts of the problem. It then walks through the calls backwards, building on the answer it got for the simplest version of the question.
Yes. Interacting with recursive functions from the outside is exactly the same as interacting with any other function.
Depends on what you're writing, and how comfortable you are with debugging tools. If you're trying to track what values are getting passed back and forth between recursive calls, printing all the parameters at the start of the function and the return value at the end can take you pretty far.
The easiest way to wrap your head around this stuff is to write some of your own. Start with some basic examples (the classic is the factorial function) and ask for help if you get stuck.
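For instance, here is a minimal Python sketch of the factorial function with the tracing prints described above; the depth parameter is added purely so the output indents to mirror the call stack:
def factorial(n, depth=0):
    indent = "  " * depth
    print(f"{indent}factorial({n}) called")
    if n <= 1:  # base case: trivial answer, no further recursive calls
        print(f"{indent}factorial({n}) returns 1")
        return 1
    result = n * factorial(n - 1, depth + 1)  # recurse on a smaller input
    print(f"{indent}factorial({n}) returns {result}")
    return result

factorial(4)  # watch the calls descend to the base case, then unwind to 24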
Edit: If you're more math-oriented, you might look up mathematical induction (you will learn it anyway as part of a CS education). It's the exact same concept, just taught a little differently.

What is the advantage of a recursive algorithm over an iterative algorithm?

I am having a problem understanding one thing: when recursion involves so much space, and the time complexity of iterative and recursive algorithms is the same (unless I apply dynamic programming), why should we use recursion? Is it merely to reduce the lines of code, given that even implementing recursion requires a whole PCB to be saved when control passes from one function call to another?
Although I have seen many posts related to it, it's still not clear to me what the major advantage of implementing recursion over iteration is.
What is the major advantage of implementing recursion over iteration?
Readability - don't neglect it. If the code is readable and simple, it will take less time to write (which is very important in real life), and simpler code is also easier to maintain (since in future updates it will be easy to understand what's going on).
Now, I am not saying "make everything recursive!", but if something is significantly more readable in a recursive solution - unless in production it makes the code suffer noticeably in performance - keep it!
Performance. Yes, you heard me. Sometimes, to transform a recursive solution into an iterative one, you need more than a simple loop - you need a loop + a stack.
However, many times your stack data structure is not as efficient as the machine one, which is optimized for exactly this purpose by hundreds of engineers at Intel/AMD.
This thread discusses a specific case where a trivial conversion of an algorithm from recursive to iterative yields worse results; to outperform the machine stack you would need to invest lots of hours in optimizing your stack, and your time is a scarce resource.
Again, I am not saying "recursion is always faster!" But in some situations, it can be.
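As a concrete sketch of that loop-plus-stack point, here is the same tree sum written both ways in Python (the Node type and function names are invented for the example):
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def sum_recursive(node):
    # The machine call stack remembers where we are in the tree.
    if node is None:
        return 0
    return node.value + sum_recursive(node.left) + sum_recursive(node.right)

def sum_iterative(root):
    # Once the recursion is gone, the same traversal needs an explicit stack.
    total, stack = 0, [root]
    while stack:
        node = stack.pop()
        if node is not None:
            total += node.value
            stack.append(node.left)
            stack.append(node.right)
    return total

tree = Node(1, Node(2), Node(3, Node(4)))
assert sum_recursive(tree) == sum_iterative(tree) == 10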

Why should a function have only one exit-point? [closed]

I've always heard about a single exit-point function as a bad way to code because you lose readability and efficiency. I've never heard anybody argue the other side.
I thought this had something to do with CS but this question was shot down at cstheory stackexchange.
There are different schools of thought, and it largely comes down to personal preference.
One is that it is less confusing if there is only a single exit point - you have a single path through the method and you know where to look for the exit. On the minus side, if you use indentation to represent nesting, your code can end up massively indented to the right, and it becomes very difficult to follow all the nested scopes.
Another is that you can check preconditions and exit early at the start of a method, so that you know in the body of the method that certain conditions are true, without the entire body of the method being indented 5 miles off to the right. This usually minimises the number of scopes you have to worry about, which makes code much easier to follow.
A third is that you can exit anywhere you please. This used to be more confusing in the old days, but now that we have syntax-colouring editors and compilers that detect unreachable code, it's a lot easier to deal with.
I'm squarely in the middle camp. Enforcing a single exit point is a pointless or even counterproductive restriction IMHO, while exiting at random all over a method can sometimes lead to messy, difficult-to-follow logic, where it becomes hard to see whether a given bit of code will or won't be executed. But "gating" your method makes it possible to significantly simplify its body.
My general recommendation is that return statements should, when practical, either be located before the first code that has any side-effects, or after the last code that has any side-effects. I would consider something like:
if (!argument) // argument is null
    return ERR_NULL_ARGUMENT;
... process non-null argument
if (ok)
    return 0;
else
    return ERR_NOT_OK;
clearer than:
int result;
if (argument) // argument is non-null
{
    .. process non-null argument
    .. set result appropriately
}
else
    result = ERR_NULL_ARGUMENT;
return result;
If a certain condition should prevent a function from doing anything, I prefer to early-return out of the function at a spot above the point where the function would do anything. Once the function has undertaken actions with side-effects, though, I prefer to return from the bottom, to make clear that all side-effects must be dealt with.
With most anything, it comes down to the needs of the deliverable. In "the old days", spaghetti code with multiple return points invited memory leaks, since coders who preferred that method typically did not clean up well. There were also issues with some compilers "losing" the reference to the return variable as the stack was popped during the return, in the case of returning from a nested scope. The more general problem was one of re-entrant code, which attempts to have the calling state of a function be exactly the same as its return state. Mutators in OOP violated this, and the concept was shelved.
There are deliverables, most notably kernels, which need the speed that multiple exit points provide. These environments normally have their own memory and process management, so the risk of a leak is minimized.
Personally, I like to have a single point of exit, since I often use it to insert a breakpoint on the return statement and perform a code inspection of how the code determined that solution. I could just go to the entrance and step through, which I do with extensively nested and recursive solutions. As a code reviewer, multiple returns in a function require a much deeper analysis - so if you're doing it to speed up the implementation, you're robbing Peter to pay Paul. More time will be required in code reviews, invalidating the presumption of an efficient implementation.
-- 2 cents
Please see this doc for more details: NISTIR 5459
A single entry and exit point was an original concept of structured programming, as opposed to step-by-step spaghetti coding. There is a belief that multiple exit-point functions require more code, since you have to do proper clean-up of the memory allocated for variables. Consider a scenario where a function allocates resources: exiting the function early without proper clean-up would result in resource leaks. In addition, duplicating the clean-up before every exit would create a lot of redundant code.
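Note that in languages with try/finally (or RAII, or garbage collection) this argument loses much of its force, since one cleanup site can cover every exit. A minimal Python sketch, with an invented example function:
def first_data_line(path):
    # Return the first non-comment line of a file, or None if there isn't one.
    f = open(path)
    try:
        for line in f:
            if not line.startswith("#"):
                return line.rstrip("\n")  # early exit: finally still runs
        return None  # another exit point, same cleanup
    finally:
        f.close()  # one cleanup site covers every exit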
I used to be an advocate of single-exit style. My reasoning came mostly from pain...
Single-exit is easier to debug.
Given the techniques and tools we have today, this is a far less reasonable position to take as unit tests and logging can make single-exit unnecessary. That said, when you need to watch code execute in a debugger, it was much harder to understand and work with code containing multiple exit points.
This became especially true when you needed to interject assignments in order to examine state (replaced with watch expressions in modern debuggers). It was also too easy to alter the control flow in ways that hid the problem or broke the execution altogether.
Single-exit methods were easier to step through in the debugger, and easier to tease apart without breaking the logic.
In my view, the advice to exit a function (or other control structure) at only one point often is oversold. Two reasons typically are given to exit at only one point:
Single-exit code is supposedly easier to read and debug. (I admit that I don't think much of this reason, but it is given. What is substantially easier to read and debug is single-entry code.)
Single-exit code links and returns more cleanly.
The second reason is subtle and has some merit, especially if the function returns a large data structure. However, I wouldn't worry about it too much, except ...
If you're a student, you want to earn top marks in your class, so do what the instructor prefers. He probably has a good reason from his perspective; so, at the very least, you'll learn his perspective. This has value in itself.
Good luck.
The answer is very context-dependent. If you are making a GUI and have a function that initialises APIs and opens windows at the start of your main, it will be full of calls that may throw errors, each of which should cause that instance of the program to close. If you used nested if statements and indentation, your code could quickly become heavily skewed to the right. Returning on an error at each stage might be better, and actually more readable, while being just as easy to debug with a few flags in the code.
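A minimal Python sketch of that guard-clause style, with invented configuration checks:
import sys

def init_app(config):
    # Each setup step can fail; report the error and bail out
    # immediately instead of nesting another if level.
    if "window" not in config:
        print("missing window settings", file=sys.stderr)
        return False
    if not config.get("api_key"):
        print("missing API key", file=sys.stderr)
        return False
    if config["window"].get("width", 0) <= 0:
        print("window width must be positive", file=sys.stderr)
        return False
    # ... open the window, initialise the APIs, etc.
    return True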
If, however, you are testing different conditions and returning different values depending on the results in your method it may be much better practice to have a single exit point. I used to work on image processing scripts in MATLAB which could get very large. Multiple exit points could make the code extremely hard to follow. Switch statements were much more appropriate.
The best thing to do would be to learn as you go. If you are writing code for something try finding other people's code and seeing how they implement it. Decide which bits you like and which bits you don't.
If you feel like you need multiple exit points in a function, the function is too large and is doing too much.
I would recommend reading the chapter about functions in Robert C. Martin's book, Clean Code.
Essentially, you should try to write functions with four lines of code or fewer.
Some notes from Mike Long’s Blog:
The first rule of functions: they should be small
The second rule of functions: they should be smaller than that
Blocks within if statements, while statements, for loops, etc should be one line long
…and that line of code will usually be a function call
There should be no more than one or maybe two levels of indentation
Functions should do one thing
Function statements should all be at the same level of abstraction
A function should have no more than 3 arguments
Output arguments are a code smell
Passing a boolean flag into a function is truly awful. You are by definition doing two things in the function.
Side effects are lies.

How many lines should a function have at most?

Is there a good coding technique that specifies how many lines a function should have?
No. Lines of code is a pretty bad metric for just about anything. The exception is perhaps functions that have thousands and thousands of lines - you can be pretty sure those aren't well written.
There are however, good coding techniques that usually result in fewer lines of code per function. Things like DRY (Don't Repeat Yourself) and the Unix-philosophy ("Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface." from Wikipedia). In this case replace "programs" with "functions".
I don't think it matters. Who is to say that once a function's length passes a certain number of lines it breaks a rule?
In general, just write clean functions that are easy to use and reuse.
A function should have a well-defined purpose. That is, try to create functions that do a single thing, either by doing the thing itself or by delegating work to a number of other functions.
Most functional compilers are excellent at inlining. Thus there is no inherent price to pay for breaking up your code: the compiler usually does a good job of deciding whether a function call should really be one or whether it can just inline the code right away.
The size of the function is less relevant, though most functions in FP tend to be small, precise and to the point.
There is McCabe's metric of cyclomatic complexity, which you might read about in this Wikipedia article.
The metric measures how many tests and loops are present in a routine. A rule of thumb might be that under 10 is a manageable amount of complexity, while over 11 becomes more fault-prone.
I have seen horrendous code that had a complexity metric above 50. (It was error-prone and difficult to understand or change.) Rewriting it and breaking it down into subroutines reduced the complexity to 8.
Note that the complexity metric is usually proportional to the lines of code, but it gives you a measure of complexity rather than mere length.
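As a rough illustration of how the count works (the function is invented), each decision point adds one to the total:
def shipping_cost(weight, express, international):  # complexity starts at 1
    if weight <= 0:                                 # +1
        raise ValueError("weight must be positive")
    cost = 5.0
    if weight > 10:                                 # +1
        cost += 2.5
    if express:                                     # +1
        cost *= 2
    if international:                               # +1
        cost += 15.0
    return cost                                     # cyclomatic complexity: 5

Tools such as radon (for Python code), or the metrics built into many IDEs and linters, will compute the number for you.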
When working in Forth (or playing in Factor) I tend to continually refactor until each function is a single line! In fact, if you browse through the Factor libraries you'll see that the majority of words are one-liners and almost nothing is more than a few lines. In a language with inner functions and virtually zero cost for calls (that is, threaded code implicitly having no stack frames [only a return-pointer stack], or with aggressive inlining) there is no good reason not to refactor until each function is tiny.
From my experience, a function with a lot of lines of code (more than a few pages) is a nightmare to maintain and test. But having said that, I don't think there is a hard and fast rule for this.
I came across some VB.NET code at my previous company that had one function of 13 pages, but my record is some VB6 code I have just picked up that is approx 40 pages! Imagine trying to work out which If statement an Else belongs to when they are pages apart on the screen.
The main argument against having functions that are "too long" is that subdividing the function into smaller functions that only do small parts of the entire job improves readability (by giving those small parts actual names, and helping the reader wrap his mind around smaller pieces of behavior, especially when line 1532 can change the value of a variable on line 45).
In a functional programming language, this point is moot:
You can subdivide a function into smaller functions that are defined within the larger function's body, without thereby reducing the length of the original function.
Functions are expected to be pure, so there's no actual risk of line X changing the value read on line Y: the value of the line Y variable can be traced back up the definition list quite easily, even in loops, conditionals or recursive functions.
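For the first point, here is a small Python sketch with invented helpers defined inside the parent function:
def normalize(words):
    # The helpers are split out and named, but they live inside
    # normalize, so the outer function's line count barely shrinks.
    def strip_punct(word):
        return word.strip(".,;:!?")

    def canonical(word):
        return strip_punct(word).lower()

    return [canonical(w) for w in words if canonical(w)]

print(normalize(["Hello,", "WORLD!", "..."]))  # ['hello', 'world']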
So, I suspect the answer would be "no one really cares".
I think a long function is a red flag and deserves more scrutiny. If I came across a function that was more than a page or two long during a code review I would look for ways to break it down into smaller functions.
There are exceptions though. A long function that consists of mostly simple assignment statements, say for initialization, is probably best left intact.
My (admittedly crude) guideline is a screenful of code. I have seen code with functions going on for pages. This is emetic, to be charitable. Functions should have a single, focused purpose. If you are trying to do something complex, have a "captain" function call helpers.
Good modularization makes friends and influences people.
IMHO, the goal should be to minimize the amount of code that a programmer would have to analyze simultaneously to make sense of a program. In general, excessively-long methods will make code harder to digest because programmers will have to look at much of their code at once.
On the other hand, subdividing methods into smaller pieces will only be helpful if those smaller pieces can be analyzed separately from the code which calls them. Splitting a method into sub-methods which would only be meaningful in the context where they are called is apt to impair rather than improve legibility. Even if the method would have been over 250 lines before splitting, breaking it into ten pieces which don't make sense in isolation would simply increase the simultaneous-analysis requirement from 250 lines to 300+ (depending upon how many lines are added for method headers, the code that calls them, etc.). When deciding whether a method should be subdivided, it's far more important to consider whether the pieces make sense in isolation than to consider whether the method is "too long". Some 20-line routine might benefit from being split into two ten-line routines and a two-line routine that calls them, but some 250-line routines might benefit from being left exactly as they are.
Another point which needs to be considered, btw, is that in some cases the required behavior of a program may not be a good fit with the control structures available in the language it's written in. Most applications have large "don't-care" aspects of their behavior, and it's generally possible to assign behavior that will fit nicely with a language's available control structures, but sometimes behavioral requirements may be impossible to meet without awkward code. In some such cases, confining the awkwardness to a single method which is bloated, but which is structured around the behavioral requirements, may be better than scattering it among many smaller methods which have no clear relationship to the overall behavior.

Resources