I've been given a question by my professor in Data Structures and Algorithms, and I'm having a hard time understanding how I should be answering something like this. As I've already moved on to analyzing algorithms, to be asked a question outside of the context of a pseudo-code algorithm is really messing me up.
In this video, this individual goes over the concept of Egyptian Multiplication, in which you keep doubling from 1 on the left-hand side (1, 2, 4, 8, …) until you have enough numbers there to sum to one of your original factors. On the right-hand side, you double your other factor in step with each left-hand doubling. You then take only the key-value pairs whose keys add up to that original factor, and sum their values to find the answer to the original question: 9 x 13.
Since I am not seeing anything at face-value, I thought I'd try to code the first part, finding the key-value pairs to have something to look at.
In Python, I came up with something like the following for finding the key-value pairs:
def egyptian_multiplication(a, b):
    result = 0
    right = 1
    left = b
    pairs = {}
    pairs[str(right)] = left
    while right < a:
        right = right * 2
        left = left * 2
        pairs[str(right)] = left
    print(pairs)

egyptian_multiplication(9, 13)
OUTPUT:
{'1': 13, '2': 26, '4': 52, '8': 104, '16': 208}
Okay, so from what I can see, we're at O(n) complexity here. But now for the second part.
From a literal perspective, you need to traverse pairs.keys() as integers to find which combination adds up to a, and then add the associated values together. I have seen this solved in one place online, without using packages, where it ended up being done recursively.
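For what it's worth, here is a rough sketch of what that second step might look like (the helper name select_and_sum and the greedy walk from the largest power of two downward are my own illustration, not something from the video):

def select_and_sum(pairs, a):
    # Pick the keys (powers of two) that add up to a and sum their values.
    # Taking each power of two from largest to smallest whenever it still fits
    # is the same as reading off the binary representation of a.
    remaining = a
    total = 0
    for key in sorted(pairs, key=int, reverse=True):
        if int(key) <= remaining:
            remaining -= int(key)
            total += pairs[key]
    return total

With the pairs from the output above, 9 = 8 + 1, so this would return 104 + 13 = 117 = 9 x 13.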
The question, however, isn't to write it out in pseudo-code and analyze it that way. The question is simply: what is the worst-case running time for Egyptian Multiplication? What are the elementary steps to solve this problem?
I'm completely thrown off by this because I'm not looking at code line by line. I need to define the elementary steps required by the algorithm, and then provide the worst-case running time. Maybe I'm thinking too hard about this when I'm supposed to just see something at face value, but I'm not seeing it, and I'm really trying my darnedest here to put this in terms that I can understand.
I have found this stack overflow question that claims they're doing Egyptian Multiplication, but it is actually detailed in the video as Russian Multiplication, so I do not believe that solves my problem.
If anyone can help me, I really am trying to understand this on my own.
I have some confusion in:
for i = 1 to n
does this pseudocode mean for(i=1; i<n; i++) or for(i=1; i<=n; i++)?
And if it means one of them, then what would the pseudocode be for the other one?
It should be understood to include an iteration where i takes the value of n.
However, there are no strict rules for pseudo code, so if there is doubt about an actual piece of pseudo code, then that code didn't achieve what it was supposed to do: unambiguously describe an algorithm without any particular programming language in mind.
Here are some of the arguments why it should be understood to include n:
When in plain English we tell someone to count from 1 "to 100", we mean that they should include 100 at the end. This is probably the strongest argument.
I know of no programming language that uses the to keyword in a for loop syntax where the value that follows to is not included in the iterations. Conversely, BASIC-like languages (such as VBA) do have that syntax, and the "to" value is used for the final iteration.
On Wikipedia some examples of pseudo code loops are given, and where the to keyword is used, the final iteration is using that value.
If the intention of this loop is to visit each value in an array of n elements (a very common case), then there is no ambiguity: for i = 1 to n tells us that the array's first element is at position 1, and hence its last element is at position n. It is not uncommon to use such 1-based indexing of arrays in pseudo code, as this is more intuitive to non-programmers. If however in such an array context you see for i = 0 to n in pseudo code, some alarms should go off. That definitely would come across as ambiguous.
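To make the distinction concrete, here is a small Python illustration of the two readings (my own sketch, not tied to any particular pseudo code convention):

n = 5

# "for i = 1 to n" read inclusively, i.e. for (i = 1; i <= n; i++)
print(list(range(1, n + 1)))   # [1, 2, 3, 4, 5]

# the exclusive reading, i.e. for (i = 1; i < n; i++)
print(list(range(1, n)))       # [1, 2, 3, 4]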
It is highly contextual and depends on the person who wrote it. (It also kind of depends on the programming language that the pseudocode is aimed to be converted to.)
If it were in an interview setting, it might be a good idea to ask if this is "inclusive of n or not". Because it could be either case.
(I know this answer might not be that helpful. Sorry about that, but it doesn't seem that this has a definitive answer.)
Is it correct to say that everywhere recursion is used a for loop could be used? And if recursion is usually slower, what is the technical reason for ever using it over for loop iteration?
And if it is always possible to convert a recursion into a for loop, is there a rule-of-thumb way to do it?
Recursion is usually much slower because all function calls must be stored in a stack to allow the return back to the caller functions. In many cases, memory has to be allocated and copied to implement scope isolation.
Some optimizations, like tail call optimization, make recursions faster but aren't always possible, and aren't implemented in all languages.
The main reasons to use recursion are
that it's more intuitive in many cases when it mimics our approach to the problem
that some data structures like trees are easier to explore using recursion (or would need stacks in any case)
Of course every recursion can be modeled as a kind of loop: that's what the CPU will ultimately do. And the recursion itself, more directly, means putting the function calls and scopes in a stack. But changing your recursive algorithm to a looping one might need a lot of work and make your code less maintainable: as with every optimization, it should only be attempted when some profiling or evidence shows it to be necessary.
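To make the "recursion is ultimately a loop plus a stack" point concrete, here is a tiny sketch (mine, not from any particular answer) of a case where the conversion from recursion to a loop is straightforward:

def factorial_recursive(n):
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # the same computation with the implicit call stack removed
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result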
Is it correct to say that everywhere recursion is used a for loop could be used?
Yes, because recursion in most CPUs is modeled with loops and a stack data structure.
And if recursion is usually slower what is the technical reason for using it?
It is not "usually slower": it's recursion that is applied incorrectly that's slower. On top of that, modern compilers are good at converting some recursions to loops without even asking.
And if it is always possible to convert a recursion into a for loop, is there a rule-of-thumb way to do it?
Write iterative programs for algorithms best understood when explained iteratively; write recursive programs for algorithms best explained recursively.
For example, searching binary trees, running quicksort, and parsing expressions in many programming languages is often explained recursively. These are best coded recursively as well. On the other hand, computing factorials and calculating Fibonacci numbers are much easier to explain in terms of iterations. Using recursion for them is like swatting flies with a sledgehammer: it is not a good idea, even when the sledgehammer does a really good job at it+.
+ I borrowed the sledgehammer analogy from Dijkstra's "Discipline of Programming".
Question:
And if recursion is usually slower what is the technical reason for ever using it over for loop iteration?
Answer:
Because some algorithms are hard to solve iteratively. Try to solve depth-first search both recursively and iteratively. You will get the idea that it is plain hard to solve DFS with iteration.
Another good thing to try: write merge sort iteratively. It will take you quite some time.
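For reference, here is a rough sketch of both versions of DFS (my own illustration; the iterative one replaces the call stack with an explicit list, and preserving things like exact visit order or post-order actions takes more care than it does recursively):

def dfs_recursive(graph, start, visited=None):
    # graph is assumed to be a dict mapping each node to a list of neighbours
    if visited is None:
        visited = set()
    visited.add(start)
    for neighbour in graph[start]:
        if neighbour not in visited:
            dfs_recursive(graph, neighbour, visited)
    return visited

def dfs_iterative(graph, start):
    # the call stack is replaced by an explicit stack of nodes to visit
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            stack.extend(graph[node])
    return visited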
Question:
Is it correct to say that everywhere recursion is used a for loop could be used?
Answer:
Yes. This thread has a very good answer for this.
Question:
And if it is always possible to convert a recursion into a for loop, is there a rule-of-thumb way to do it?
Answer:
Trust me. Try to write your own version of depth-first search iteratively. You will notice that some problems are easier to solve recursively.
Hint : Recursion is good when you are solving a problem that can be solved by divide and conquer technique.
Besides being slower, recursion can also result in stack overflow errors depending on how deep it goes.
To write an equivalent method using iteration, we must explicitly use a stack. The fact that the iterative version requires a stack for its solution indicates that the problem is difficult enough that it can benefit from recursion. As a general rule, recursion is most suitable for problems that cannot be solved with a fixed amount of memory and consequently require a stack when solved iteratively.
Having said that, recursion and iteration can produce the same outcome while following different patterns. Deciding which method works better is case by case, and the best practice is to choose based on the pattern the problem follows.
For example, to find the n-th triangular number of the triangular sequence: 1 3 6 10 15 …
Using an iterative algorithm:
//Triangular.java
import java.util.*;

class Triangular {
    public static int iterativeTriangular(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++)
            sum += i;
        return sum;
    }

    public static void main(String args[]) {
        Scanner stdin = new Scanner(System.in);
        System.out.print("Please enter a number: ");
        int n = stdin.nextInt();
        System.out.println("The " + n + "-th triangular number is: " +
                iterativeTriangular(n));
    }
}
Using a recursive algorithm:
//Triangular.java
import java.util.*;

class Triangular {
    public static int recursiveTriangular(int n) {
        if (n == 1)
            return 1;
        return recursiveTriangular(n - 1) + n;
    }

    public static void main(String args[]) {
        Scanner stdin = new Scanner(System.in);
        System.out.print("Please enter a number: ");
        int n = stdin.nextInt();
        System.out.println("The " + n + "-th triangular number is: " +
                recursiveTriangular(n));
    }
}
Yes, as said by Thanakron Tandavas,
Recursion is good when you are solving a problem that can be solved by divide and conquer technique.
For example: Towers of Hanoi
N rings in increasing size
3 poles
Rings start stacked on pole 1. The goal is to move the rings so that they are stacked on pole 3... but:
Can only move one ring at a time.
Can’t put larger ring on top of smaller.
Iterative solution is “powerful yet ugly”; recursive solution is “elegant”.
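For reference, a minimal recursive sketch in Python (the parameter names are mine):

def hanoi(n, source, spare, target):
    # print the moves that transfer n rings from source to target
    if n == 0:
        return
    hanoi(n - 1, source, target, spare)    # move n-1 rings out of the way
    print("move ring", n, "from", source, "to", target)
    hanoi(n - 1, spare, source, target)    # move them onto the largest ring

hanoi(3, "pole 1", "pole 2", "pole 3")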
I seem to remember my computer science professor say back in the day that all problems that have recursive solutions also have iterative solutions. He says that a recursive solution is usually slower, but they are frequently used when they are easier to reason about and code than iterative solutions.
However, in the case of more advanced recursive solutions, I don't believe that it will always be able to implement them using a simple for loop.
Most of the answers seem to assume that iterative = for loop. If your for loop is unrestricted (a la C, you can do whatever you want with your loop counter), then that is correct. If it's a real for loop (say as in Python or most functional languages where you cannot manually modify the loop counter), then it is not correct.
All (computable) functions can be implemented both recursively and using while loops (or conditional jumps, which are basically the same thing). If you truly restrict yourself to for loops, you will only get a subset of those functions (the primitive recursive ones, if your elementary operations are reasonable). Granted, it's a pretty large subset which happens to contain every single function you're likely to encounter in practice.
What is much more important is that a lot of functions are very easy to implement recursively and awfully hard to implement iteratively (manually managing your call stack does not count).
Recursion + memoization can lead to a more efficient solution compared with a pure iterative approach, e.g. check this:
http://jsperf.com/fibonacci-memoized-vs-iterative-for-large-n
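A small sketch of what that means in practice (this uses functools.lru_cache for the memoization and is my own illustration, not the code behind the benchmark link):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # each value is computed once and then reused, so the naive
    # exponential recursion collapses to roughly linear work
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(90))   # 2880067194370816120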
Short answer: the trade-off is that for loops are usually faster and take up less memory, while recursion is often the clearer way to express a solution. However, there are usually ways to restructure either the loop or the recursion to make it run faster.
What algorithm taught you the most about programming or a specific language feature?
We have all had those moments where all of a sudden we know, just know, we have learned an important lesson for the future based on finally understanding an algorithm written by a programmer a couple of steps up the evolutionary ladder. Whose ideas and code had the magic touch on you?
General algorithms:
Quicksort (and its average complexity analysis) shows that randomizing your input can be a good thing!;
balanced trees (AVL trees for example), a neat way to balance search/insertion costs;
Dijkstra and Ford-Fulkerson algorithms on graphs (I like the fact that the second one has many applications);
the LZ* family of compression algorithms (LZW for example), data compression sounded kind of magic to me until I discovered it (a long time ago :) );
the FFT, ubiquitous (re-used in so many other algorithms);
the simplex algorithm, ubiquitous as well.
Numerical related:
Euclid's algorithm to compute the gcd of two integers: one of the first algorithms, simple and elegant, powerful, has lots of generalizations;
fast multiplication of integers (Cooley-Tukey for example);
Newton iterations to invert / find a root, a very powerful meta-algorithm.
Number theory-related:
AGM-related algorithms (examples): leads to very simple and elegant algorithms to compute pi (and much more!), though the theory is quite profound (Gauss introduced elliptic functions and modular forms from it, so you can say that it gave birth to algebraic geometry...);
the number field sieve (for integer factorization): very complicated, but quite a nice theoretical result (this also goes for the AKS algorithm, which proved that PRIMES is in P).
I also enjoyed studying quantum computing (Shor and Deutsch-Josza algorithms for example): this teaches you to think out of the box.
As you can see, I'm a bit biased towards maths-oriented algorithms :)
"To iterate is human, to recurse divine" - quoted in 1989 at college.
P.S. Posted by Woodgnome while waiting for invite to join
Floyd-Warshall all-pairs shortest paths algorithm
procedure FloydWarshall ()
    for k := 1 to n
        for i := 1 to n
            for j := 1 to n
                path[i][j] = min ( path[i][j], path[i][k]+path[k][j] );
Here's why it's cool: when you first learn about the shortest-path problem in your graph theory course, you probably start with Dijkstra's algorithm that solves single-source shortest path. It's quite complicated at first, but then you get over it, and you fully understood it.
Then the teacher says "Now we want to solve the same problem but for ALL sources". You think to yourself, "Oh god, this is going to be a much harder problem! It's going to be at least N times more complicated than Dijkstra's algorithm!!!".
Then the teacher gives you Floyd-Warshall. And your mind explodes. Then you start to tear up at how beautifully simple the algorithm is. It's just a triply-nested loop. It only uses a simple array for its data structure.
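For anyone who wants to run it, the same three nested loops in Python (just a sketch; the adjacency-matrix representation with float('inf') for missing edges is my own choice):

def floyd_warshall(dist):
    # dist is an n x n matrix: dist[i][j] is the edge weight from i to j,
    # float('inf') where there is no edge, and 0 on the diagonal.
    # It is updated in place to the all-pairs shortest path lengths.
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist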
The most eye-opening part for me is the following realization: say you have a solution for problem A. Then you have a bigger "superproblem" B which contains problem A. The solution to problem B may in fact be simpler than the solution to problem A.
This one might sound trivial but it was a revelation for me at the time.
I was in my very first programming class (VB6) and the Prof had just taught us about random numbers, and he gave the following instructions: "Create a virtual lottery machine. Imagine a glass ball full of 100 ping pong balls marked 0 to 99. Pick them randomly and display their number until they have all been selected, no duplicates."
Everyone else wrote their program like this: pick a ball, put its number into an "already selected" list, and then pick another ball. Check to see if it's already selected; if so, pick another ball, if not, put its number on the "already selected" list, etc....
Of course by the end they were making hundreds of comparisons to find the few balls that had not already been picked. It was like throwing the balls back into the jar after selecting them. My revelation was to throw balls away after picking.
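In today's terms the idea looks something like this (my sketch; random.shuffle would do the same job in one line):

import random

balls = list(range(100))   # ping pong balls marked 0 to 99
while balls:
    # remove the ball from the jar after picking it, so it can never be re-picked
    pick = balls.pop(random.randrange(len(balls)))
    print(pick)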
I know this sounds mind-numbingly obvious but this was the moment that the "programming switch" got flipped in my head. This was the moment that programming went from trying to learn a strange foreign language to trying to figure out an enjoyable puzzle. And once I made that mental connection between programming and fun there was really no stopping me.
Huffman coding would be mine. I had originally made my own dumb version by reducing the number of bits used to encode text from 8 down to fewer, but had not thought about using a variable number of bits depending on frequency. Then I found Huffman coding described in a magazine article and it opened up lots of new possibilities.
Quicksort. It showed me that recursion can be powerful and useful.
Bresenham's line drawing algorithm got me interested in realtime graphics rendering. This can be used to render filled polygons, like triangles, for things like 3D model rendering.
Recursive Descent Parsing - I remember being very impressed how such simple code could do something so seemingly complex.
Quicksort in Haskell:
qsort [] = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)
Although I couldn't write Haskell at the time, I did understand this code and, with it, recursion and the quicksort algorithm. It just clicked, and there it was...
The iterative algorithm for Fibonacci, because for me it nailed down the fact that the most elegant code (in this case, the recursive version) is not necessarily the most efficient.
To elaborate: the "fib(10) = fib(9) + fib(8)" approach means that fib(9) will be evaluated as fib(8) + fib(7). So fib(8) (and therefore fib(7), fib(6), and so on) will be evaluated twice.
The iterative method (curr = prev1 + prev2 in a for loop) does not tree out this way, nor does it take as much memory, since it's only 3 transient variables instead of n frames on the recursion stack.
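In other words, something along these lines (my sketch):

def fib_iterative(n):
    prev2, prev1 = 0, 1            # fib(0) and fib(1)
    for _ in range(2, n + 1):
        curr = prev1 + prev2
        prev2, prev1 = prev1, curr
    return prev1 if n > 0 else 0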
I tend to strive for simple, elegant code when I'm programming, but this is the algorithm that helped me realize that this isn't the end-all-be-all for writing good software, and that ultimately the end users don't care how your code looks.
For some reason I like the Schwartzian transform
@sorted = map  { $_->[0] }
          sort { $a->[1] cmp $b->[1] }
          map  { [$_, foo($_)] }
          @unsorted;
Where foo($_) represents a compute-intensive expression that takes $_ (each item of the list in turn) and produces the corresponding value on which it is to be compared.
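For comparison, the same decorate-sort-undecorate idea in Python might look like this (my sketch, assuming the same foo function and an unsorted list; in practice sorted(unsorted, key=foo) hides the whole transform behind the key argument):

decorated = [(foo(item), item) for item in unsorted]   # decorate: compute foo once per item
decorated.sort(key=lambda pair: pair[0])               # sort on the precomputed value
sorted_items = [item for _, item in decorated]         # undecorate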
Minimax taught me that chess programs aren't smart, they can just think more moves ahead than you can.
I don't know if this qualifies as an algorithm, or just a classic hack. In either case, it helped to get me to start thinking outside the box.
Swap 2 integers without using an intermediate variable (in C++)
void InPlaceSwap (int& a, int &b) {
    a ^= b;
    b ^= a;
    a ^= b;
}
Quicksort: Until I got to college, I had never questioned whether brute force Bubble Sort was the most efficient way to sort. It just seemed intuitively obvious. But being exposed to non-obvious solutions like Quicksort taught me to look past the obvious solutions to see if something better is available.
For me it's the weak-heapsort algorithm, because it shows (1) how much a wisely chosen data structure (and the algorithms working on it) can influence performance, and (2) that fascinating things can be discovered even in old, well-known things. (That weak-heapsort is the best variant of all heap sorts was proven eight years later.)
This is a slow one :)
I learned lots about both C and computers in general by understanding Duff's Device and XOR swaps
EDIT:
@Jason Z, that's my XOR swap :) cool, isn't it?
For some reason Bubble Sort has always stood out to me. Not because it's elegant or good just because it had/has a goofy name I suppose.
The iterative algorithm for Fibonacci, because for me it nailed down the fact that the most elegant code (in this case, the recursive version) is not necessarily the most efficient.
The iterative method (curr = prev1 + prev2 in a for loop) does not tree out this way, nor does it take as much memory, since it's only 3 transient variables instead of n frames on the recursion stack.
You know that Fibonacci has a closed-form solution that allows direct computation of the result in a fixed number of steps, right? Namely, (phi^n - (1 - phi)^n) / sqrt(5). It always strikes me as somewhat remarkable that this should yield an integer, but it does.
phi is the golden ratio, of course; (1 + sqrt(5)) / 2.
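As a quick check of the formula (my sketch; with double-precision floats the rounding only stays exact up to roughly n = 70):

import math

phi = (1 + math.sqrt(5)) / 2

def fib_closed_form(n):
    # Binet's formula; rounding absorbs the tiny floating-point error
    return round((phi ** n - (1 - phi) ** n) / math.sqrt(5))

print([fib_closed_form(n) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]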
I don't have a favourite -- there are so many beautiful ones to pick from -- but one I've always found intriguing is the Bailey–Borwein–Plouffe (BBP) formula, which enables you to calculate an arbitrary digit of pi without knowledge about the preceding digits.
RSA introduced me to the world of modular arithmetic, which can be used to solve a surprising number of interesting problems!
Hasn't taught me much, but the Johnson–Trotter Algorithm never fails to blow my mind.
Binary decision diagrams, though formally a data structure rather than an algorithm, lead to elegant and minimal solutions for various sorts of (Boolean) logic problems. They were invented and developed to minimise the gate count in chip design, and can be viewed as one of the foundations of the silicon revolution. The resulting algorithms are amazingly simple.
What they taught me:
a compact representation of any problem is important; small is beautiful
a small set of constraints/reductions applied recursively can be used to accomplish this
for problems with symmetries, transformation to a canonical form should be the first step to consider
not every piece of literature gets read. Knuth found out about BDDs several years after their invention/introduction (and spent almost a year investigating them).
For me, the simple swap in Kelly & Pohl's A Book on C to demonstrate call-by-reference flipped me out when I first saw it. I looked at that, and pointers snapped into place. Verbatim. . .
void swap(int *p, int *q)
{
    int temp;

    temp = *p;
    *p = *q;
    *q = temp;
}
The Towers of Hanoi algorithm is one of the most beautiful algorithms. It shows how you can use recursion to solve a problem in a much more elegant fashion than the iterative method.
Alternatively, the recursive algorithms for the Fibonacci series and for calculating powers of a number demonstrate the reverse situation: recursion being used for its own sake rather than because it provides good value.
An algorithm that generates a list of primes by testing each number against the current list of primes, adding it if none of them divides it, and returning the list of primes at the end. Mind-bending in several ways, not the least of which being the idea of using the partially-completed output as the primary search criterion.
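A minimal sketch of that idea (mine):

def primes_up_to(limit):
    primes = []
    for candidate in range(2, limit + 1):
        # test the candidate against the primes found so far
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
    return primes

print(primes_up_to(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]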
Storing two pointers in a single word for a doubly linked list taught me the lesson that you can do very bad things in C indeed (things a conservative GC will have lots of trouble with).
The most proud I've been of a solution was writing something very similar to the DisplayTag package. It taught me a lot about code design, maintainability, and reuse. I wrote it well before DisplayTag, and it was sunk into an NDA agreement, so I couldn't open source it, but I can still speak gushingly about that one in job interviews.
Map/Reduce. Two simple concepts that fit together to make a load of data-processing tasks easier to parallelize.
Oh... and it's only the basis of massively-parallel indexing:
http://labs.google.com/papers/mapreduce.html
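A toy illustration of the two concepts (my sketch, not the MapReduce framework itself): each input is mapped independently, and the per-input results are then reduced into one answer.

from functools import reduce
from collections import Counter

documents = ["the cat sat", "the cat ran", "the dog sat"]

# map step: each document independently produces its own word counts
mapped = map(lambda doc: Counter(doc.split()), documents)

# reduce step: merge the per-document counts into one result
word_counts = reduce(lambda a, b: a + b, mapped, Counter())

print(word_counts)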
Not my favorite, but the Miller–Rabin algorithm for testing primality showed me that being right almost all the time is good enough almost all the time. (i.e. don't mistrust a probabilistic algorithm just because it has a probability of being wrong.)
@Krishna Kumar
The bitwise solution is even more fun than the recursive solution.