What's the usual practice to avoid stack overflow in recursive algorithms? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I used to think iterative algorithms are always superior to recursive ones, due to the potential for stack overflow, as long as one doesn't mind the extra coding effort. Therefore, if I were to build a utility function to be used over and over again, I should always go iterative. Is that logic correct? Or is there a standard trick to avoid stack overflow in recursion, even for very large N? Assume the data structure itself is not too big to fit in RAM.

Basically you're right: with common compilers, iteration is superior in both memory consumption and performance. But do not forget there are languages, usually focused on functional programming and/or AI, that are optimized for recursion, and with them recursion is superior instead...
Anyway, when I can, I always use iteration for the reasons you mentioned. However, the recursive approach is often simpler than the iterative one, and sometimes the amount of coding required to convert to iteration is too much... In such cases you can do these:
limit heap/stack thrashing
Simply get rid of as many operands, return values, and local variables as you can, since each recursion level makes a copy/instance of them...
You can use static variables for the locals, or even global ones for the operands, but beware not to break the functionality by doing so.
You would not believe how many times I have seen an array passed by value as an operand into a recursion...
limit recursion
If you have a multi-split recursion (one recursive layer makes more than one recursive call), you can keep an internal global counter of how many recursion levels are currently active...
If you hit some threshold value, do not make recursive calls anymore... Instead, schedule them into some global array/list (a pending-work queue), and once all the started recursions stop, process this queue until it is empty. This approach is not always applicable, but nice examples for it are: flood fill, grid A* pathfinding, ...
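The deferred-call idea above can be sketched with flood fill: recurse only up to a fixed depth, and past that, push coordinates onto a pending queue that is drained in a loop. This is a minimal Python sketch, and `max_depth` and the 4-neighborhood are illustrative choices, not part of the original answer:

```python
from collections import deque

def flood_fill(grid, x, y, new_color, max_depth=64):
    """Flood fill that recurses only up to max_depth levels; deeper
    work is deferred to a queue and processed iteratively."""
    old_color = grid[y][x]
    if old_color == new_color:
        return
    pending = deque([(x, y)])  # cells whose fill was deferred

    def fill(x, y, depth):
        if not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
            return
        if grid[y][x] != old_color:
            return
        grid[y][x] = new_color
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if depth < max_depth:
                fill(nx, ny, depth + 1)      # still under the threshold
            else:
                pending.append((nx, ny))     # defer instead of recursing deeper

    while pending:
        px, py = pending.popleft()
        fill(px, py, 0)
```

Duplicate entries in the queue are harmless because `fill` re-checks the cell color before doing anything.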
increase heap/stack size
Executables have a header that tells the OS how much memory to allocate for their parts... so just find the setting in your compiler/linker/IDE and set the value to a reasonable size.
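How you raise these limits depends entirely on the toolchain. As one concrete illustration (Python-specific, and the exact sizes below are arbitrary choices), CPython exposes two relevant knobs: `sys.setrecursionlimit` for the interpreter's frame-count guard, and `threading.stack_size` for the OS stack of newly created threads:

```python
import sys
import threading

def depth(n):
    # naive recursion: one stack frame per level
    return 1 if n == 0 else 1 + depth(n - 1)

def main():
    out = []
    sys.setrecursionlimit(200_000)          # raise CPython's frame-count guard
    threading.stack_size(64 * 1024 * 1024)  # request a 64 MiB stack for new threads
    t = threading.Thread(target=lambda: out.append(depth(50_000)))
    t.start()
    t.join()
    return out[0]
```

A depth of 50,000 would overflow the default recursion limit (about 1,000); run in a thread with a larger stack, it completes and returns 50001.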
I am sure there are more techniques, but these are the ones I use...

Related

What's a good selective pressure to use in tournament selection in a genetic algorithm? [closed]

Closed 7 years ago.
What is the optimal and usual value of selective pressure in tournament selection? What percent of the best members of the current generation should propagate to the next generation?
Unfortunately, there isn't a great answer to this question. The optimal parameters will vary from problem to problem, and people use a wide range of them. Selecting the right tournament selection parameters is currently more of an art than a science. Stronger selective pressure (a larger tournament) will generally result in the population converging on a solution faster, at the cost of that solution potentially not being as good. This is called the exploration vs. exploitation tradeoff, and it underlies most algorithms for searching a large space of possible solutions - you're not going to get away from it.
I know that's not very helpful, though - you want a starting place, and that's completely reasonable. So here's the best one I know of (and I know a number of others who use it as a go-to default tournament configuration as well): a tournament size of two. Basically, this means you just keep picking random pairs of solutions, choosing the best one, and sending it to the next generation (with mutation and crossover as desired), until the next generation is the desired size. This has the nice property that any member of the population besides the absolute worst has a chance of getting to the next generation, but better ones have a better chance.
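The size-two tournament described above can be sketched as follows. This is a minimal Python illustration; the names `tournament_select` and `next_generation` are hypothetical, and mutation/crossover are omitted:

```python
import random

def tournament_select(population, fitness, k=2):
    """Pick k random distinct individuals and return the fittest."""
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)

def next_generation(population, fitness):
    """Fill the next generation by repeated size-k tournaments
    (mutation and crossover would be applied to each winner here)."""
    return [tournament_select(population, fitness) for _ in population]
```

With `k=2` the absolute worst individual can never win a tournament (it always faces someone better), but every other individual has some chance of being selected, which matches the property described above.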

How to implement a general-purpose list type? [closed]

Closed 7 years ago.
Suppose you are developing a language that is intended to be used for scripting, prototyping, as a macro language for application automation, or as an interactive calculator. It's dynamically typed, memory management is implicit, based on a garbage collection. In most cases, you do not expect users to use highly optimized algorithms or hand-picked and fine-tuned data structures. You want to provide a general-purpose list type that would have a decent performance on average. It must support all kinds of operations: iteration, random access by index, prepending and appending elements, insertion, deletion, mapping, filtering, membership testing, concatenation, splitting, reversing, sorting, cloning, extracting segments. It could be used both with small and large number of elements (but you can assume that it fits into physical memory). It's intended only for single-thread access and you need not care about thread safety. You expect users to use this general-purpose list type, no matter what is their scenario or usage pattern. Some users might want to use it as a sparse array, where most elements have some default value (e.g. 0), and only few elements have non-default values.
What implementation would you choose?
We assume that you can afford to invest significant development effort, so the solution need not be necessarily simple. For example, you could implement different ways of internal organization of data, and switch between them depending on number of elements or usage patterns. High performance is a more important goal than reducing memory consumption, so you can afford some memory overhead if it wins you performance.

implementation of recursion and loops at different levels [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I've read posts where people say certain compilers will implement recursion as loops but the hardware implements loops as recursion and vice versa. If I have a recursive function and an iterative function in my program, can someone please explain how the compiler and hardware are going to interpret each one? Please also address any performance benefits of one over the other if the choice of implementation does not clearly favor one method like using recursion for mergesort.
OK, here is a brief answer:
1) A compiler can optimize tail-recursive calls. The result is usually not a loop as such, but rather stack-frame reuse. However, I have never heard of any compiler that converts a loop into recursion, and I do not see any point in doing so: it would use additional stack space, would likely run slower, and could change the semantics (a stack overflow instead of an infinite loop).
2) I would say it is not correct to speak of hardware implementing loops, because hardware itself does not implement loops. It has instructions (conditional jumps, arithmetic operations, and so on) which are used to implement loops.
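To illustrate the tail-call point: in a tail-recursive function the recursive call is the very last operation, so an optimizing compiler can reuse the current stack frame, which is equivalent to updating the arguments and jumping back to the top. A Python sketch of both forms (note that CPython itself does not perform this optimization, so this is purely illustrative):

```python
def sum_to_rec(n, acc=0):
    # tail-recursive: the recursive call is the last thing that happens
    if n == 0:
        return acc
    return sum_to_rec(n - 1, acc + n)

def sum_to_iter(n):
    # what frame reuse effectively produces: the "call" becomes
    # an in-place update of the arguments and a jump to the top
    acc = 0
    while n != 0:
        n, acc = n - 1, acc + n
    return acc
```

Both compute the same result, but the iterative form runs in constant stack space.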

Which function in pascal is the fastest one, while, for or repeat? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem.
Closed 9 years ago.
I solved one problem in Pascal using mostly a for loop, and I didn't meet the time limit, even though the problem is solved correctly and in the best way. So is one of these other constructs faster, or did I make a mistake and solve it wrongly?
Short answer: try it out! Just make three versions of your solution, test each against a good mass of data, and record how much time each takes. If you are still not meeting the time limit, try a faster PC or review your solution.
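The "try it out" advice generalizes to any language: write each variant and time it rather than guessing. A sketch of that workflow using Python's `timeit` as a stand-in for timing the Pascal versions (the loop bodies here are purely illustrative):

```python
import timeit

def count_for(n):
    # "for"-style loop
    total = 0
    for i in range(n):
        total += i
    return total

def count_while(n):
    # "while"-style loop computing the same result
    total, i = 0, 0
    while i < n:
        total += i
        i += 1
    return total

def benchmark():
    # time each variant over many repetitions and report the results
    for fn in (count_for, count_while):
        t = timeit.timeit(lambda: fn(10_000), number=100)
        print(f"{fn.__name__}: {t:.4f}s")
```

As the answers below note, any difference between loop forms is usually dwarfed by the work done inside the loop body.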
while and repeat are not functions; they are looping control structures intrinsic to the programming language.
Neither is faster, neither is slower. Both are faster, both are slower. A general answer is not possible, and the more work you do inside the loop, the less relevant any difference, if any, becomes.
If you didn't solve your exercise, then the problem is not the choice of loop. It could be the algorithm you chose, it could be a mistake you made, or it could be that the testing machine didn't have enough processing time left for your program to finish in time.
Assuming a sensible compiler, there should be no performance difference between them.
Even if there were a difference, it would likely be negligible -- unnoticeable by humans, and easily within experimental error for computed results.
Since this is a school assignment, I'd suggest reviewing the material taught in the last week; probably something there will hint at a better way to solve the problem.
Originally, for was considered faster because it didn't allow changing the loop variable, which enabled certain kinds of optimizations, especially if the value of the loop variable wasn't used.
Nowadays, with fewer memory and time constraints on optimization, the various forms are easily transformed into each other, and the whole discussion becomes academic.
Note that most modern Pascal compilers (after Delphi) add another check to the for statement, verifying that the upper bound is greater than or equal to the lower bound. If you have to answer such a question for homework, check your compiler very thoroughly.

Challenging Problems Involving Stacks (Data Structures) [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
Does anyone know where I can get lots of challenging programming problems that involve the use of stacks? I'd like to challenge myself with super hard problems.
My favorites:
Convert a prefix expression to postfix using a stack.
Implement a stack with O(1) access to the max element.
Implement the Tower of Hanoi without recursion. :)
Take any program that is recursive by intuition and try to implement it iteratively. Every recursive solution has an iterative counterpart.
That should keep you busy for a while. :)
BTW, why don't you buy/borrow "Data Structures Using C and C++" by Langsam/Tenenbaum?
You will get sufficient problems for a week. :)
A Google search for stack data structure problems turns up lots of interesting stuff. Here are a few interesting ones (from easier to harder, in my judgment):
Check for balanced parentheses, braces, and other paired delimiters in text.
Write a postfix calculator. (So 1 3 2 4 + * - should calculate 1 - (3 * (2+4)).)
Use stacks to solve the "N queens" problem. (Place N queens on an N x N chess board so that no two queens are on the same row, column, or diagonal.)
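The postfix calculator from the list above can be sketched with an explicit stack. This is a minimal Python version in which tokenizing is simplified to whitespace-separated input:

```python
def eval_postfix(tokens):
    """Evaluate a postfix (RPN) expression given as a list of tokens."""
    stack = []
    ops = {
        '+': lambda a, b: a + b,
        '-': lambda a, b: a - b,
        '*': lambda a, b: a * b,
        '/': lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # right operand is on top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()
```

On the example above, `eval_postfix("1 3 2 4 + * -".split())` computes 1 - (3 * (2 + 4)) = -17.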
Cracking the Coding Interview, Fourth Edition: 150 Programming Interview Questions and Solutions:
http://www.amazon.com/Cracking-Coding-Interview-Fourth-Programming/dp/145157827X
Chapter 3 is dedicated to stacks and queues questions. Here is a sample question:
How would you design a stack which, in addition to push and pop, also has a function min which returns the minimum element? Push, pop and min should all operate in O(1) time.
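One standard answer to this question (shown here as a Python sketch, not taken from the book) keeps an auxiliary stack whose top always holds the minimum of everything currently in the main stack, so every operation stays O(1):

```python
class MinStack:
    """Stack with O(1) push, pop, and min, using an auxiliary stack
    that mirrors the running minimum of the main stack."""

    def __init__(self):
        self._items = []
        self._mins = []

    def push(self, x):
        self._items.append(x)
        # top of _mins is always the min of the whole stack
        self._mins.append(x if not self._mins else min(x, self._mins[-1]))

    def pop(self):
        self._mins.pop()
        return self._items.pop()

    def min(self):
        return self._mins[-1]
```

The trade-off is O(n) extra memory for the auxiliary stack; a well-known refinement stores entries in it only when the minimum actually changes.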
You could learn a stack-based language, like Forth, or a stack virtual machine, like MS's CIL VM. Either one will force you to reconsider how you implement anything you write in it to make use of stacks.
Go to the library and look at textbooks on data structures. They should all be shelved together, so you can always just find the call number for your current textbook and the others will be right nearby.
I'd expect most would have chapters specifically on stacks, with end-of-chapter problems. There will be a lot of overlap, but you should be able to get a good selection of problems of varying difficulty, along with a range of different explanations for standard problems.
