I know some algorithms take the same running time whether implemented recursively or iteratively, but I can't decide on that basis alone.
Is it possible that an algorithm will always take the same running time in both its recursive and iterative forms?
Every recursive algorithm can be reduced to an iterative algorithm (with the same running time).
http://en.wikipedia.org/wiki/Recursion_(computer_science)#Recursion_versus_iteration
Yes. If an algorithm can be rewritten to use tail recursion, it can be converted to an iterative version without extra bookkeeping, and will thus have the same execution time.
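As a minimal sketch of that conversion (factorial chosen only as an illustration, not from the answer above): in the tail-recursive form the accumulator carries all the state, so the mechanical translation just turns it into a loop variable.

```python
def fact_tail(n, acc=1):
    # Tail-recursive: the recursive call is the very last operation,
    # so no work is pending on the call stack when it returns.
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)

def fact_iter(n):
    # Mechanical conversion: the accumulator becomes a loop variable,
    # the recursive call becomes the next loop iteration.
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc
```

Both versions perform the same multiplications in the same order; only the control flow differs.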
It can be of the same time complexity, but the constant overhead will probably differ (the recursive version will probably be more expensive). In addition, recursion adds space overhead for the call stack that's not present in the iterative approach.
Related
Are there any Dynamic Programming Problems where the Bottom-Up solution has better time complexity than Top-Down (Memoization) solution?
Asymptotically, both DP and memoization give the same time complexity. At run time, however, the DP solution usually outpaces the memoized one by some constant factor.
This is because in the DP case we just look up the results of subproblems in a table, while with memoization we must make a recursive call, which then returns the result either by checking the hashtable/array (whatever you used) if it was already computed, or by computing it; the function-call overhead consumes extra CPU cycles.
In some cases the subproblem space is very large but we don't need all subproblems to get our answer; then memoization can pay off, because it solves only the unavoidable subproblems rather than all of them. But this case is rare, because we can often optimize our DP code so that it iterates only over the required subproblems instead of all of them.
Hence, generally speaking, the bottom-up (DP) solution runs faster than its corresponding memoized solution. Note: I say "runs faster" rather than "has better time complexity" because the asymptotic time complexity is the same for both.
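To make the contrast concrete, here is a sketch of both styles on Fibonacci (my own illustrative example, not one the answer gives): the top-down version pays a function call per subproblem, the bottom-up version is a bare table-filling loop.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Top-down (memoization): recursion plus a cache of solved subproblems.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_dp(n):
    # Bottom-up (DP): fill a table from small to large subproblems.
    # Plain loop and list indexing -- no call overhead, no cache lookups.
    if n < 2:
        return n
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```

Both are O(n) in the number of subproblems solved; the difference the answer describes is purely in the constant factor per subproblem.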
You might be able to make one up, but generally no.
When both kinds of solutions are available, the worst case time complexities are the same.
Bottom-up solutions are often faster in the worst case, in absolute terms (not asymptotic complexity) because memoization is a relatively expensive operation.
Top-down solutions are often faster in best or special cases, because they evaluate only the subproblems that are needed for each problem instance.
Is there any problem that can only be solved with recursion, or only with iteration? If not, can every algorithm be expressed in either form with the same complexity?
PS: I'm talking about theoretical complexity (O, theta and omega), not about the time taken when implemented on real-world systems.
Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit stack, while iteration can be replaced with tail recursion.[1]
So no: every problem that can be solved iteratively can be solved recursively and vice versa. A 1:1 conversion preserves Big-O complexity. It can, however, still be preferable to use an iterative algorithm over a recursive one for practical reasons, because you can do different things with it.
1: https://en.wikipedia.org/wiki/Recursion_(computer_science)#Recursion_versus_iteration
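A small sketch of the "explicit stack" direction mentioned above, using a toy binary tree encoded as nested tuples `(value, left, right)` (an encoding I'm assuming here for illustration):

```python
def tree_sum_rec(node):
    # Recursive version: the call stack remembers where we are in the tree.
    if node is None:
        return 0
    value, left, right = node
    return value + tree_sum_rec(left) + tree_sum_rec(right)

def tree_sum_iter(node):
    # Iterative version: an explicit stack replaces the call stack.
    total, stack = 0, [node]
    while stack:
        cur = stack.pop()
        if cur is None:
            continue
        value, left, right = cur
        total += value
        stack.append(left)
        stack.append(right)
    return total
```

Both visit every node exactly once, so the asymptotic complexity is identical; only where the pending work is stored changes.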
I am arguing with a fellow student: he is trying to convince me that a divide-and-conquer algorithm can be implemented without the use of recursion.
Is this truly the case?
Any algorithm that can be implemented with recursion can also be implemented non-recursively.
Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used.
http://en.wikipedia.org/wiki/Recursion_%28computer_science%29#Recursion_versus_iteration
There's an important thing to understand: using recursion or not is an implementation decision. Recursion is not necessary to add computing power (at least not to a Turing-complete language). Look up "tail recursion" for an easy example of how to transform a recursive function into a non-recursive one (in the case of a divide-and-conquer algorithm you can remove at most one of the recursive calls with this method).
A function/algorithm that is computable with recursion is computable also without it. What matters is if the language with which you implement the algorithm is Turing complete or not.
Let's take an example: the mergesort algorithm can also be implemented non-recursively, using a queue as an auxiliary data structure to keep track of the various merges.
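A sketch of that queue-based mergesort (my own rendering of the idea described above): each element starts as a one-element sorted run, and pairs of runs are repeatedly dequeued, merged, and enqueued until one run remains.

```python
from collections import deque
from heapq import merge  # merges already-sorted iterables

def mergesort_queue(items):
    # Non-recursive mergesort: a queue of sorted runs, merged pairwise.
    if not items:
        return []
    queue = deque([x] for x in items)  # n runs of length 1
    while len(queue) > 1:
        a = queue.popleft()
        b = queue.popleft()
        queue.append(list(merge(a, b)))  # merge two sorted runs into one
    return queue[0]
```

Each element takes part in O(log n) merges, so this keeps mergesort's O(n log n) running time without any recursion.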
I am more comfortable implementing recursive methods than iterative ones. While studying for an exam, I implemented a recursive BFS (breadth-first search) using queues, but while searching online for a recursive BFS that uses queues, I kept reading that BFS is an iterative algorithm, not a recursive one. So is there any reason to choose one over the other?
Iterative is more efficient for the computer. Recursive is more efficient for the programmer and more elegant (perhaps).
The problem with recursion is that each recursive call pushes its state/frame onto the call stack, which can quickly lead to resource exhaustion (stack overflow!) for deep recursion. But recursive solutions are often easier to code and read.
Iteration performs better because everything is done in the local frame. However, converting a recursive solution to an iterative one can reduce readability, due to the extra variables introduced to track the progression of the algorithm.
Choose whatever implementation is easiest to code and maintain. Only worry if you have a demonstrated problem.
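For reference, the standard iterative BFS the question alludes to is just a short loop over a queue; this sketch assumes the graph is given as an adjacency-list dict (an assumption of mine, not stated in the question):

```python
from collections import deque

def bfs(graph, start):
    # graph: dict mapping node -> list of neighbours (hypothetical format).
    # Returns nodes in the order BFS visits them.
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in graph.get(node, []):
            if nb not in visited:
                visited.add(nb)   # mark when enqueued, so nodes enter once
                queue.append(nb)
    return order
```

The queue already holds all the pending work, which is why BFS gains nothing from recursion: there is no call-stack state left to exploit.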
Iterative and recursive versions have the same time complexity. The difference is that recursive programs need more memory, since each recursive call pushes the program's state onto the stack, and a stack overflow may occur; but recursive code is easier to write and maintain. You can reduce the space overhead of a recursive program by using tail recursion.
Iterative implementations are usually faster. One example is the Fibonacci series: a simple loop is faster than the naive recursive solution.
More discussion here Recursion or Iteration?
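The Fibonacci example above is worth spelling out, because here the gap is asymptotic, not just constant-factor: the naive recursion recomputes subproblems and makes exponentially many calls, while the loop is linear.

```python
def fib_rec(n):
    # Naive recursion: each call branches twice, giving roughly
    # phi^n calls -- the same subproblems are recomputed over and over.
    return n if n < 2 else fib_rec(n - 1) + fib_rec(n - 2)

def fib_loop(n):
    # Simple loop: O(n) time, O(1) space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Note the unfairness of the comparison: the loop is not a 1:1 conversion of the recursion but a better algorithm. A memoized recursion would also run in O(n).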
This is a question from Introduction to Algorithms by Cormen et al, but this isn't a homework problem. Instead, it's self-study.
I have thought about it a lot and searched on Google. The answers I can think of are:
Use another algorithm.
Give it best-case inputs.
Use a better computer to run the algorithm.
But I don't think these are correct. Changing the algorithm isn't the same as making an algorithm perform better, and using a better computer may increase the speed, but the algorithm itself isn't any better. This question appears at the beginning of the book, so I think it's something simple that I'm overlooking.
So how can we modify almost any algorithm to have a good best-case running time?
You can modify any algorithm to have a best case time complexity of O(n) by adding a special case, that if the input matches this special case - return a cached hard coded answer (or some other easily obtained answer).
For example, for any sort, you can make best case O(n) by checking if the array is already sorted - and if it is, return it as it is.
Note that this does not impact the average or worst cases (assuming they are not better than O(n)); it improves only the algorithm's best-case time complexity.
Note: If the size of the input is bounded, the same optimization makes the best case O(1), because reading the input in this case is O(1).
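A sketch of the sorting example above (the pre-check is the idea from the answer; the fallback to a library sort is my own simplification):

```python
def sort_with_best_case(arr):
    # O(n) pre-check: if the input is already sorted, return it as is.
    # This makes the best case O(n) without touching the other cases.
    if all(arr[i] <= arr[i + 1] for i in range(len(arr) - 1)):
        return arr
    # Otherwise fall back to the normal O(n log n) sort.
    return sorted(arr)
```

On an already-sorted input the function does one linear scan and stops; on any other input it adds only an O(n) check before the regular sort, leaving the average and worst cases asymptotically unchanged.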
If we could introduce an instruction for that very algorithm in the computation model of the system itself, we can just solve the problem in one instruction.
But as you might already have discovered, that is a highly unrealistic approach. A generic method for modifying any algorithm to improve its best-case running time is therefore next to impossible; at most we can apply tweaks to an algorithm for common redundancies found in various problems.
You could also go the naive route and supply best-case inputs, but that isn't actually modifying the algorithm. In fact, building the algorithm into the computation model itself, besides being highly unrealistic, isn't a modification of the algorithm either.
The ways we can modify an algorithm to have a good best-case running time are:
Check whether the input already satisfies the algorithm's goal. For example, for an ascending sort, check whether the input is already in ascending order.
Modify the algorithm to output the answer and exit as soon as its goal is reached, effectively collapsing multiple nested loops into one.
We can sometimes use a randomized algorithm, one that makes random choices, to allow a probabilistic analysis and thus improve the expected running time.
I think the answer to this problem lies in the input to the algorithm, because the cases in time complexity analysis depend only on the input: how complex it is, and how much work it causes the algorithm to do. On that basis we decide whether a case is best, average, or worst.
So the input decides the running time of an algorithm in every case.
Or we can change our algorithm to improve for all cases(reducing the time complexity).
These are the ways we can achieve good best-case running time.
We can modify an algorithm to handle some special-case condition, so that if the input satisfies that condition, we output the pre-computed answer. Generally, though, the best-case running time is not a good measure of an algorithm. We need to know how the algorithm performs in the worst case.
I just reached this discussion while looking for an answer. I think the only way to make any algorithm hit its best case is to give it a fixed input instead of a varying one; with a fixed input, the cost and time complexity will always be O(1).