implementation of recursion and loops at different levels [closed]

I've read posts where people say certain compilers will implement recursion as loops, while the hardware implements loops as recursion, and vice versa. If I have a recursive function and an iterative function in my program, can someone please explain how the compiler and the hardware are going to treat each one? Please also address any performance benefits of one over the other when the problem does not clearly favor one method (the way mergesort favors recursion, for example).

OK, here is a brief answer:
1) A compiler can optimize tail-recursive calls, but the result is usually not a loop so much as reuse of the same stack frame (see the sketch below). I have never heard of a compiler that converts a loop into recursion, and I see no point in doing so: it would use additional stack space, would likely run slower, and could change the semantics (a stack overflow instead of an infinite loop).
2) I would say it is not really correct to speak of hardware "implementing" loops, because hardware itself does not implement loops. It has instructions (conditional jumps, arithmetic operations, and so on) which are used to implement loops.
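To make the tail-call point concrete, here is a minimal sketch in Go (chosen only for readability; note that the standard Go compiler does not perform tail-call optimization, whereas C compilers such as gcc and clang typically do at higher optimization levels). sumRec is tail recursive, and sumLoop is the iterative shape that frame reuse effectively produces:

    package main

    import "fmt"

    // sumRec is tail recursive: the recursive call is the last thing the
    // function does, so a compiler that performs tail-call optimization
    // can reuse the current stack frame instead of pushing a new one.
    func sumRec(n, acc int) int {
    	if n == 0 {
    		return acc
    	}
    	return sumRec(n-1, acc+n)
    }

    // sumLoop is the iterative shape that frame reuse effectively
    // produces: the "call" becomes a jump back to the top of the loop.
    func sumLoop(n int) int {
    	acc := 0
    	for n > 0 {
    		acc += n
    		n--
    	}
    	return acc
    }

    func main() {
    	fmt.Println(sumRec(10, 0), sumLoop(10)) // both print 55
    }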

Related

What's the usual practice to avoid stack overflow in recursive algorithms? [closed]

I used to think iterative algorithms are always superior to recursive ones because of the potential stack overflow, as long as one doesn't mind the extra coding effort. Therefore, if I were to build a utility function to be used over and over again, I should always go with iteration. Is that logic correct? Or is there any standard trick to avoid stack overflow in recursion even for very large N? Still assume the data structure itself is not too big to fit in RAM.
Basically you're right: with common compilers, iteration is superior in both memory consumption and performance. But do not forget there are languages, usually focused on functional programming and/or AI, that are optimized for recursion, and with them recursion is superior instead...
Anyway, if I can, I always use iteration for the reasons you mentioned. However, the recursive approach is often much simpler than the iterative one, and the amount of coding required to convert it to iteration can be too much... In such cases you can do the following:
Limit heap/stack thrashing
Simply get rid of as many operands, return values and local variables as you can, since each recursive call makes a copy/instance of them.
You can use static variables for the locals, or even global ones for the operands, but beware not to break the functionality by doing this.
You would not believe how many times I have seen an array passed as an operand into a recursion...
Limit recursion
If you have a multi-split recursion (one recursive layer makes more than one recursive call), you can keep an internal global counter of how many recursive calls are currently active.
If you hit some threshold value, stop making recursive calls; instead schedule the pending work into a global array/list used as a work queue, and once the pending recursive calls have finished, process this queue until it is empty (see the sketch after this list). This approach is not always applicable, but nice examples of it are flood fill and grid A* path finding.
Increase heap/stack size
Executables have a table that tells the OS how much memory to allocate for their segments, so just find the setting in your compiler/linker/IDE and set the value to a reasonable size.
I am sure there are more techniques, but these are the ones I use...
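As an illustration of the "limit recursion" idea, here is a minimal flood-fill sketch in Go that uses an explicit work queue instead of recursive calls, so the size of the filled region is bounded by heap memory rather than by the call stack (the grid layout and color encoding here are just assumptions for the example):

    package main

    import "fmt"

    type point struct{ x, y int }

    // floodFill recolors the region of oldColor connected to (x, y),
    // using an explicit work queue instead of recursive calls, so the
    // size of the region is limited by heap memory, not the call stack.
    func floodFill(grid [][]int, x, y, oldColor, newColor int) {
    	if oldColor == newColor {
    		return
    	}
    	queue := []point{{x, y}}
    	for len(queue) > 0 {
    		p := queue[0]
    		queue = queue[1:]
    		if p.y < 0 || p.y >= len(grid) || p.x < 0 || p.x >= len(grid[p.y]) {
    			continue // outside the grid
    		}
    		if grid[p.y][p.x] != oldColor {
    			continue // wrong color or already filled
    		}
    		grid[p.y][p.x] = newColor
    		// Schedule the four neighbours instead of calling ourselves.
    		queue = append(queue,
    			point{p.x + 1, p.y}, point{p.x - 1, p.y},
    			point{p.x, p.y + 1}, point{p.x, p.y - 1})
    	}
    }

    func main() {
    	grid := [][]int{
    		{0, 0, 1},
    		{0, 1, 1},
    		{1, 1, 1},
    	}
    	floodFill(grid, 0, 0, 0, 2)
    	fmt.Println(grid) // [[2 2 1] [2 1 1] [1 1 1]]
    }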

Do any computer languages not use a stack? [closed]

Do any computer languages not use a stack data structure to keep track of execution progress?
Or is the use of this data structure an emergent requirement stemming from something inherent to most computer languages or to Turing machines?
With a traditional "C-style" stack, certain language features are difficult or impossible to implement. For example, closures can't easily be implemented with a traditional stack because closures require a pointer to an old activation record to work correctly and that memory is automatically reclaimed in a C-style stack. As another example, generators and coroutines need their own memory to store local variables and relative offset information and therefore can't easily be implemented if you use a standard stack implementation.
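A minimal Go sketch of the closure case: the captured variable has to outlive the call that created it, which is exactly what a C-style stack frame cannot provide (Go handles this by moving such variables to the heap via escape analysis):

    package main

    import "fmt"

    // makeCounter returns a closure that captures the local variable n.
    // n must outlive this call, so it cannot live in a stack frame that
    // is reclaimed when makeCounter returns; the compiler moves it to
    // the heap instead.
    func makeCounter() func() int {
    	n := 0
    	return func() int {
    		n++
    		return n
    	}
    }

    func main() {
    	next := makeCounter()
    	fmt.Println(next(), next(), next()) // 1 2 3
    }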
Hope this helps!

go-lang: lack of contains method design-justification [closed]

While browsing for a contains method, I came across the following Q&A:
contains-method-for-a-slice
It is said time and again in this Q&A that the method is really trivial to implement. What I don't understand is: if it is so easy to implement, and seeing how DRY is a popular software principle and most modern languages implement said method, what sort of design reasoning could be behind the exclusion of such a simple method?
The triviality of the implementation depends on the scope of the implementation. It is trivial to implement when you know how to compare each value. Application code usually knows how to compare the types used in that application. But it is not trivial to implement in the general case for arbitrary types, and that is the situation for the language and standard library.
Figuring out if a slice contains a certain object is an O(n) operation, where n is the length of the slice. This would not change if the language provided a function to do it. If your code relies on frequently checking whether a slice contains a certain value, you should reevaluate your choice of data structures; a map is usually better in such cases. Why should the standard library include functions that encourage you to use the wrong data structure for the task at hand?
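A rough sketch of both points in Go (the names containsString, langs and so on are just illustrative): the helper is trivial once the element type is fixed, while a map-based set makes repeated membership checks O(1) on average instead of O(n):

    package main

    import "fmt"

    // containsString is trivial to write once the element type is known;
    // the hard part for a standard library is doing this for arbitrary types.
    func containsString(haystack []string, needle string) bool {
    	for _, s := range haystack {
    		if s == needle {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	langs := []string{"go", "c", "ruby"}
    	fmt.Println(containsString(langs, "ruby")) // true, found by an O(n) scan

    	// If membership is checked often, a set built on a map is the better
    	// data structure: average lookup cost is O(1), no scan at all.
    	set := make(map[string]bool, len(langs))
    	for _, s := range langs {
    		set[s] = true
    	}
    	fmt.Println(set["ruby"]) // true
    }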

Advanced Rudimentary Computing? [closed]

Let's say that my definition of 'rudimentary programming' refers to the fundamental tools employed for a computer to perform a task.
Considering programming rudiments, the learning spectrum usually looks something like this:
Variables, data types and variable memory
Arrays/Lists and their manipulation
Looping and conditionals
Functions
Classes
Multi threading/processing
Streams (hard-disk and web)
My question is, have I missed any of the major rudiments? Is there a 'next' to the spectrum that still eludes me?
I think you missed the most important one: algorithms. Understanding their complexity, knowing when to use them, why to use them and, more importantly, how to implement them.
I'm pretty sure you already know a lot about algorithms, but if you think your tool knowledge (i.e. the programming languages) is good enough, you should start focusing more on algorithms.
A great book to start with is Introduction to Algorithms by Thomas H. Cormen.

what actually is a loop? [closed]

I know that there are a couple of ways to loop in various languages: a "for" loop, a "while" loop, a "do-while" loop, and plain "loop" in Ruby, for example. And I know there are various functions in every language that are pre-written in that language, for example the .each method in Ruby (which I think is based on a "for" loop, is written entirely in Ruby, and could be replicated just using the language).
But what is the logic behind loops? Are they built from control-flow statements in assembly, or even binary? And in fact, now that I think about it, what is the origin of all programming structures in general (such as variable name/value associations, arrays, hashes, etc.; sorry if my terminology is wrong)? Can anyone recommend sources to read more on this?
Loops in general are jumps in the program flow. When you compile your code, e.g. in C, it gets "converted" to assembly, where you can see that loop structure.
You start at a certain address and do some work (e.g. an ADD), which is basically everything inside your loop. At the very end there is a jump instruction back to the address where you started. Breaking out of the loop is then done via a conditional jump inside the loop, so that you never reach the jump-back instruction and therefore do not loop again.
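One way to see this without reading assembly is to write the same loop twice in a language that still has labels and goto; the goto version is roughly the jump structure a compiler lowers the for loop to (a sketch in Go, purely for illustration):

    package main

    import "fmt"

    // sumFor adds 1..n with an ordinary for loop.
    func sumFor(n int) int {
    	total := 0
    	for i := 1; i <= n; i++ {
    		total += i
    	}
    	return total
    }

    // sumGoto spells out the jump structure the loop compiles down to:
    // a label at the top of the body, a conditional jump out of the loop,
    // and an unconditional jump back to the top.
    func sumGoto(n int) int {
    	total := 0
    	i := 1
    loop:
    	if i > n {
    		goto done // conditional exit: skip past the loop body
    	}
    	total += i
    	i++
    	goto loop // jump back to the start of the body
    done:
    	return total
    }

    func main() {
    	fmt.Println(sumFor(5), sumGoto(5)) // both print 15
    }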
