Information about the complexity of recursive algorithms

Does anyone know of some good sources on computing the complexity of recursive algorithms?
Somehow "recurrent equation" isn't a popular title for a web page, or so it seems; I just couldn't google up anything reasonable...

This is a complex topic that is not well documented in the free literature on the internet.
I just took a similar exam and I can point you to the handbook written by my teacher: PDF Handbook
This handbook mostly covers another tool, generating functions, which are useful for solving any kind of recurrence without worrying too much about the specific form of the recurrence.
There is a good book on the analysis of algorithms, An Introduction to the Analysis of Algorithms (amazon link) by Sedgewick and Philippe Flajolet, but you won't find it online (I had to scan parts of it).
By the way, I've searched the internet a lot, but I haven't found any complete reference with examples useful for learning the techniques.

I think you would have had more luck with "recurrence equation".

You can also check out the Master theorem.
In the analysis of algorithms, the master theorem, which is a specific case of the Akra-Bazzi theorem, provides a cookbook solution in asymptotic terms for recurrence relations of types that occur in practice. It was popularized by the canonical algorithms textbook Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein, which introduces and proves it in sections 4.3 and 4.4, respectively. Nevertheless, not all recurrence relations can be solved with the use of the master theorem.
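For recurrences of the common divide-and-conquer shape T(n) = a*T(n/b) + Theta(n^d), the simplified three-case form of the theorem can be applied almost mechanically. Here is a minimal Python sketch, just to illustrate the case analysis (the function and its output strings are my own framing, not part of the quoted text):

    import math

    def master_theorem(a, b, d):
        """Classify T(n) = a*T(n/b) + Theta(n^d) via the simplified master theorem."""
        c = math.log(a, b)                 # critical exponent log_b(a)
        if d > c:
            return f"Theta(n^{d})"         # work at the root dominates
        if math.isclose(d, c):
            return f"Theta(n^{d} log n)"   # work is balanced across the levels
        return f"Theta(n^{c:.3f})"         # work at the leaves dominates

    # Example: merge sort, T(n) = 2*T(n/2) + Theta(n), falls into the balanced case.
    print(master_theorem(2, 2, 1))         # Theta(n^1 log n)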

Related

Extended Euclidean Algorithm runs in time O(log(m)^2)

I'm interested in the justification of the following line from the Wikipedia article:
"This algorithm [extended euclidean algorithm] runs in time O(log(m)^2), assuming |a| < m, and is generally more efficient than exponentiation."
http://en.wikipedia.org/wiki/Modular_multiplicative_inverse
Why is this so? Can anyone explain it to me? I understand the algorithm and all the maths completely; I just do not see how to determine the complexity of such algorithms. Any more general hints?
Also: is log meant to be the natural logarithm (ln) or the base-2 logarithm?
The popular Introduction to Algorithms book (http://mitpress.mit.edu/books/introduction-algorithms) has a whole chapter on proving algorithm complexity (though there is much more to the topic than what is in this book). You can read it if you're generally interested in this matter.
You might also try to follow this paper's references: http://itee.uq.edu.au/~havas/cats03.pdf
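To make the counting concrete, here is a minimal Python sketch of the extended Euclidean algorithm as used for the modular inverse (my own illustration, not taken from the links above). The number of loop iterations is O(log m) by the classic Fibonacci/Lamé argument, each iteration performs a constant number of arithmetic operations on numbers no larger than m, and a careful bit-level accounting of those operations is what yields the quoted O(log(m)^2) bound; the base of the logarithm only changes the constant factor, so it does not matter inside the big-O:

    def extended_gcd(a, b):
        """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
        x0, x1, y0, y1 = 1, 0, 0, 1
        while b != 0:
            q, r = divmod(a, b)            # one division with remainder per iteration
            a, b = b, r
            x0, x1 = x1, x0 - q * x1
            y0, y1 = y1, y0 - q * y1
        return a, x0, y0

    def mod_inverse(a, m):
        """Inverse of a modulo m, assuming gcd(a, m) == 1."""
        g, x, _ = extended_gcd(a, m)
        if g != 1:
            raise ValueError("inverse does not exist")
        return x % m

    # Example: 3 * 7 == 21 == 2*10 + 1, so 7 is the inverse of 3 modulo 10.
    print(mod_inverse(3, 10))              # 7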

Minimal addition-chain exponentiation

I know it has been proven NP-complete, and that's OK. I'm currently solving it with branch and bound, where I set the initial upper limit at the number of multiplications the normal binary square/multiply algorithm would take, and it does give the right answers, but I'm not satisfied with the running time (it can take several seconds for numbers around 200). This being an NP-complete problem, I'm not expecting anything spectacular; but there are often tricks to get the actual running time under control somewhat.
Are there faster ways to do this in practice? If so, what are they?
This looks like section 4.6.3 "Evaluation of Powers" in Knuth Vol 2 Seminumerical Algorithms. This goes into considerable detail to give various approaches, which look much quicker than branch and bound but do not all provide the absolutely best solution.
Knuth states in the discussion after Theorem F that he uses backtrack search to prove that l(191) = 11, so I doubt you will find a short-cut answer for this. He defers explanation of the backtrack search to section 7.2.2, which I think is still unpublished, although there are traces of work on it at http://www-cs-faculty.stanford.edu/~uno/programs.html.
Metaheuristic algorithms will scale far better. They include tabu search, genetic algorithms, simulated annealing, ...
There are a couple of free books and free software packages out there.
I'm late to the party, but in the Handbook of Elliptic and Hyperelliptic Curve Cryptography there is a chapter, "9.2 Fixed exponent", which also discusses various kinds of addition chains.
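For reference, here is a minimal Python sketch of the binary square-and-multiply chain that the question uses as its initial upper limit. This is only the heuristic bound, not the branch-and-bound or backtrack search discussed above, and the function name is my own:

    def binary_chain(n):
        """Addition chain for n produced by the binary (square-and-multiply) method.
        len(chain) - 1 is the number of multiplications: a valid upper bound on the
        minimal addition-chain length, but usually not optimal."""
        chain = [1]
        for bit in bin(n)[3:]:                 # skip '0b' and the leading 1 bit
            chain.append(chain[-1] * 2)        # squaring: double the exponent
            if bit == '1':
                chain.append(chain[-1] + 1)    # multiply by the base: add one
        return chain

    # Example: the binary method needs 13 multiplications for 191, while Knuth's
    # l(191) = 11 (see above), so the bound is correct but not tight.
    print(len(binary_chain(191)) - 1)          # 13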

To compute the worst-case running time function of an algorithm, what are the steps to be followed?

To compute the worst-case running time function of an algorithm, what are the steps to be followed? Please can someone guide me on that? I think these steps include some mathematical proofs. If I am correct, in which areas of mathematics should I be strong? (I guess mathematical induction, functions, and sets are enough.)
Thanks
You can find good answers in the following books:
http://www.algorist.com/
"Art of computer programming", Knuth
I think mostly this takes a good understanding of the algorithm, combinatorics, and computational complexity theory - http://en.wikipedia.org/wiki/Computational_complexity_theory
To learn about computational complexity you need to know Calculus, Combinatorics, Set Theory, Summations amongst other maths topics.
A good book, though fairly theoretical, is Introduction to Algorithms by Cormen et al.
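As a minimal illustration of the kind of counting involved (my own toy example, not taken from the answers above): identify the input that forces the most work, count the basic operations it causes, and express that count asymptotically. For linear search, the worst case is a missing (or last) element:

    def linear_search(items, target):
        """Return (index of target or -1, number of comparisons made)."""
        comparisons = 0
        for i, x in enumerate(items):
            comparisons += 1
            if x == target:
                return i, comparisons
        return -1, comparisons

    # Worst case: the target is absent, so all n elements are compared.
    # The count grows linearly with n, i.e. the worst-case running time is Theta(n).
    print(linear_search([5, 3, 8, 1], 42))     # (-1, 4)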

Solutions to problems using dynamic programming or greedy methods?

What properties should a problem have so that I can decide which method to use: dynamic programming or the greedy method?
Dynamic programming problems exhibit optimal substructure. This means that the solution to the problem can be expressed as a function of solutions to subproblems that are strictly smaller.
One example of such a problem is matrix chain multiplication (see the sketch after this answer).
Greedy algorithms can be used only when a locally optimal choice leads to a totally optimal solution. This can be harder to see right away, but generally easier to implement because you only have one thing to consider (the greedy choice) instead of multiple (the solutions to all smaller subproblems).
One famous greedy algorithm is Kruskal's algorithm for finding a minimum spanning tree.
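Here is a minimal Python sketch of the matrix chain multiplication DP mentioned above, assuming dims[i] x dims[i+1] is the shape of matrix i (the standard textbook recurrence, written as my own illustration):

    def matrix_chain_order(dims):
        """Minimum number of scalar multiplications to multiply a chain of
        matrices where matrix i has shape dims[i] x dims[i + 1]."""
        n = len(dims) - 1                      # number of matrices
        # cost[i][j] = cheapest way to multiply matrices i..j (optimal substructure)
        cost = [[0] * n for _ in range(n)]
        for length in range(2, n + 1):         # solve strictly smaller chains first
            for i in range(n - length + 1):
                j = i + length - 1
                cost[i][j] = min(
                    cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                    for k in range(i, j)       # try every split point
                )
        return cost[0][n - 1]

    # Example: (10x30)(30x5)(5x60) is cheapest as (A*B)*C = 1500 + 3000 = 4500.
    print(matrix_chain_order([10, 30, 5, 60]))  # 4500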
The second edition of Cormen, Leiserson, Rivest, and Stein's algorithms book has a section (16.4) titled "Theoretical foundations for greedy methods" that discusses when the greedy method yields an optimum solution. It covers many cases of practical interest, but not all greedy algorithms that yield optimum results can be understood in terms of this theory.
I also came across a paper titled "From Dynamic Programming To Greedy Algorithms" linked here that talks about how certain greedy algorithms can be seen as refinements of dynamic programming. From a quick scan, it may be of interest to you.
There's no really strict rule to know this. As someone already said, there are some things that should turn the red light on, but in the end, only experience will be able to tell you.
We apply the greedy method when a decision can be made based on the local information available at each stage. We are sure that by following the set of decisions made at each stage, we will find the optimal solution.
However, in the dynamic programming approach we may not be sure about making a decision at one stage, so we carry a set of probable decisions, one of which may lead to a solution.
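And for the greedy side, a minimal Python sketch of Kruskal's algorithm mentioned earlier, where the locally optimal choice is simply the cheapest edge that does not close a cycle (the (weight, u, v) edge format and the tiny union-find are my own framing):

    def kruskal(num_vertices, edges):
        """Total weight of a minimum spanning tree of a connected graph.
        edges is a list of (weight, u, v) with vertices numbered 0..num_vertices-1."""
        parent = list(range(num_vertices))

        def find(v):                           # union-find with path compression
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        total = 0
        for weight, u, v in sorted(edges):     # greedy choice: cheapest edge first
            ru, rv = find(u), find(v)
            if ru != rv:                       # keep it only if it joins two components
                parent[ru] = rv
                total += weight
        return total

    # Example: triangle 0-1 (1), 1-2 (2), 0-2 (3); the MST drops the heaviest edge.
    print(kruskal(3, [(1, 0, 1), (2, 1, 2), (3, 0, 2)]))   # 3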

Is there a master list of the Big-O notation for everything?

Is there a master list of the Big-O notation for everything? Data structures, algorithms, operations performed on each, average-case, worst-case, etc.
Dictionary of Algorithms and Data Structures is a fairly comprehensive list, and includes complexity (Big-O) in the algorithms' descriptions. If you need more information, it'll be in one of the linked references, and there's always Wikipedia as a fallback.
The Cormen book is more about teaching you how to prove what Big-O would be for a given algorithm, rather than rote memorization of algorithm to its Big-O performance. The former is far more valuable than the latter, and requires an investment on your part.
Try "Introduction to Algorithms" by Cormen, Leisersen, and Rivest. If its not in there its probably not worth knowing.
In C++ the STL standard is defined in terms of the Big-O characteristics of the algorithms as well as their space requirements. That way you can switch between competing implementations of the STL and still know that your program has roughly the same runtime characteristics.
Particularly good STL implementations can even special-case lists of particular types to do better than the standard requires.
It made it easy to pick the correct iterator or list type for a particular problem, because you could easily weigh space consumption against speed.
Of course, Big-O is only a guideline, since all constants are dropped. If an algorithm runs in k*O(n), it is classified as O(n), but if k is sufficiently large it can be slower than an O(n^2) algorithm for the input sizes you actually encounter (for example, with k = 1,000,000, k*n is larger than n^2 for every n below a million).
Introduction to Algorithms, Second Edition, aka CLRS (Cormen, Leiserson, Rivest, Stein), is the closest thing I can think of.
If that fails, then try The Art of Computer Programming, by Knuth. If it's not in those, you probably need to do some real research.
To anyone who is coming to this question from Google:
http://bigocheatsheet.com/

Resources