Solving this recurrence without the master theorem - Backtracking Algorithm

I've written a backtracking algorithm and I've been asked to state its complexity.
I know the recurrence is T(n) = 2T(n-1) + 3*n_hat, where n_hat is the initial n, meaning that term doesn't decrease at each step.
The thing is, I'm getting quite lost calculating this. I believe it's around 2**n times something, but my calculations are confusing me. Can you help me please? Thanks!

Let's expand this formula repeatedly by substituting it into itself:
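Assuming n_hat stays fixed while n drops by 1 at each level (as described above), the expansion looks like this:

    T(n) = 2T(n-1) + 3*n_hat
         = 2(2T(n-2) + 3*n_hat) + 3*n_hat
         = 4T(n-2) + 3*n_hat*(2 + 1)
         = ...
         = 2^k * T(n-k) + 3*n_hat*(2^k - 1)

Stopping at k = n, with T(0) equal to some constant c, gives T(n) = c*2^n + 3*n_hat*(2^n - 1) = O(n_hat * 2^n). Since n_hat equals n at the top-level call, this is O(n * 2^n), which matches the guess that it is 2**n times something.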

Related

Pseudo-code for a recurrence relation with a number of recursive calls proportional to the time complexity

I ran into an exercise from a book on "algorithms and data structures" that is giving me some trouble.
I need to write the pseudo-code of a recursive algorithm regulated by the recurrence relation:
T(n) = T(n-1)*T(n-2) + T(n-3) + O(1) for n>10
without solving the relation.
I suspect there is no such algorithm, but I am unsure.
In my attempts to find a solution, I evaluated k = T(n-1) and called the algorithm on n-2 k times. Reasoning this way is not correct, because I would need to add the cost of estimating T(n-1) to the relation (for instance, I could estimate that cost iteratively in O(n), or call the algorithm on n-1 if the algorithm returns its own cost; the latter would add T(n-1) to the recurrence relation).
I'd be thankful if someone could give me a hint and show me where my reasoning is wrong.
In general, how should an algorithm be structured so that its number of recursive calls equals T(n-1)*T(n-2)?
Thanks

Why is the time complexity of the most inefficient failure function in the KMP algorithm O(n³)?

Oh, sorry about my explanation. I'm learning algorithms from my textbook, and right now I'm looking at the KMP algorithm. The textbook gives two ways to compute the failure function values: the most efficient one, O(n) as you said, and the most inefficient one, O(n³) as I said above. There is no code in my book for the O(n³) idea. Instead, the textbook says: "we can check all possible prefix-suffix pairs. For the prefix P[1..i] there are i-1 possible pairs, and checking each pair takes time proportional to its length, so the cost is (i-1) + (i-2) + ... + 1 = i*(i-1)/2. Summing over all i, O(n³) is trivial."
So my question is this: I can't understand the explanation in my textbook. Can you explain it?
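To make the textbook's counting concrete, here is a minimal brute-force sketch of that idea in Python (my own illustration, not the textbook's code): for every prefix length i, try every candidate border length k and compare the two substrings character by character.

    def naive_failure(P):
        # failure[i] = length of the longest proper prefix of P[:i]
        # that is also a suffix of P[:i]
        n = len(P)
        failure = [0] * (n + 1)
        for i in range(1, n + 1):            # n prefixes to handle
            for k in range(1, i):            # up to i-1 candidate border lengths
                if P[:k] == P[i - k:i]:      # this comparison costs O(k)
                    failure[i] = k           # keep the longest match found so far
        return failure

For a fixed i, the inner work is about 1 + 2 + ... + (i-1) = i*(i-1)/2 character comparisons; summing that over i = 1..n gives roughly the sum of i², which is O(n³).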

Substitution method in complexity theory

Please advise if this question should be moved to the maths forum.
I'm quite confused about how we simplify complexity theory equations.
For example, suppose we have this small Fibonacci algorithm:
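(The algorithm itself didn't survive in the post. Presumably it is the naive recursive Fibonacci; the sketch below is my assumption, not necessarily the book's exact code.)

    def fib(n):
        # assumed naive recursive Fibonacci, so that
        # T(n) = T(n-1) + T(n-2) + Theta(1)
        if n <= 1:
            return n
        return fib(n - 1) + fib(n - 2)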
And we're given the following information:
What I struggle to understand is how the formula T(n) is expanded and simplified, especially this:
What am I really missing here?
Thanks
Edit
This was taken from this book, on page 775.
Let me rephrase the claim:
There exist some a and b such that T(n) < a*F_n - b
Now start the proof with
Choose b large enough to dominate the constant term.
Now the last inequality should be clear.
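For completeness, here is how the inductive step of that substitution typically goes, assuming the recurrence is T(n) = T(n-1) + T(n-2) + c and using the identity F_n = F_{n-1} + F_{n-2}:

    T(n)  =  T(n-1) + T(n-2) + c
         <=  (a*F_{n-1} - b) + (a*F_{n-2} - b) + c     (induction hypothesis)
          =  a*F_n - b - (b - c)
         <=  a*F_n - b                                  (whenever b >= c)

That last step is exactly where "choose b large enough to dominate the constant term" comes in.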

Asymptotic Notations and forming Recurrence relations by analysing the algorithms

I went through many lectures, videos and sources regarding asymptotic notations. I understood what O, Omega and Theta are. But in algorithms, why do we always use only big-O notation, and not Theta or Omega? (I know it sounds noobish, but please help me with this.) What exactly do upper bound and lower bound mean with respect to algorithms?
My next question is: how do we find the complexity of an algorithm? Say I have an algorithm, how do I find the recurrence relation T(N) and then compute the complexity from it? How do I form these equations? Like in the case of recursive linear search, T(N) = T(N-1) + 1. How?
It would be great if someone could explain this to me as a noob so that I can understand it even better. I found some answers on StackOverflow, but they weren't convincing enough.
Thank you.
Why we use big-O so much compared to Theta and Omega: This is partly cultural, rather than technical. It is extremely common for people to say big-O when Theta would really be more appropriate. Omega doesn't get used much in practice both because we frequently are more concerned about upper bounds than lower bounds, and also because non-trivial lower bounds are often much more difficult to prove. (Trivial lower bounds are usually the kind that say "You have to look at all of the input, so the running time is at least equal to the size of the input.")
Of course, these comments about lower bounds also partly explain Theta, since Theta involves both an upper bound and a lower bound.
Coming up with a recurrence relation: There's no simple recipe that addresses all cases. Here's a description for relatively simple recursive algorithms.
Let N be the size of the initial input. Suppose there are R recursive calls in your recursive function. (Example: for mergesort, R would be 2.) Further suppose that all the recursive calls reduce the size of the initial input by the same amount, from N to M. (Example: for mergesort, M would be N/2.) And, finally, suppose that the recursive function does W work outside of the recursive calls. (Example: for mergesort, W would be N for the merge.)
Then the recurrence relation would be T(N) = R*T(M) + W. (Example: for mergesort, this would be T(N) = 2*T(N/2) + N.)
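As a sketch of how this template maps onto the recursive linear search mentioned in the question (my own code, just for illustration): there is R = 1 recursive call, the size drops from N to M = N-1, and the non-recursive work is W = O(1), so T(N) = T(N-1) + 1.

    def linear_search(arr, target, i=0):
        if i == len(arr):        # base case: constant work
            return -1
        if arr[i] == target:     # W: O(1) work outside the recursive call
            return i
        return linear_search(arr, target, i + 1)   # R = 1 call on the remaining N-1 items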
When we create an algorithm, it's always in order to be as fast as possible, and we need to consider every case. This is why we use O: we want an upper bound on the complexity, so we can be sure our algorithm will never exceed it.
To assess the complexity, you have to count the number of steps. In the equation T(n) = T(n-1) + 1, there will be n steps before computing T(0), so the complexity is linear. (I'm talking about time complexity, not space complexity.)
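Unrolling that recurrence makes the linear bound explicit:

    T(n) = T(n-1) + 1 = T(n-2) + 2 = ... = T(0) + n = O(n)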

Calculating time complexity in case of recursion algorithms?

How do you calculate time complexity in the case of recursive algorithms?
For example, T(n) = T(3n/2) + O(1) (heapsort).
Use the Master Theorem.
Anyway, your equation looks broken, since the recursive calls would have higher input values than the caller's, so your complexity would be O(infinity).
Please fix it.
The Master Theorem is the quick and short way. But since you are trying to learn the complexity of all kinds of recursive functions, I would rather suggest you learn how the recursion tree method works, which forms the foundation of the Master Theorem. This link goes on to explain it in detail. Rather than applying the Master Theorem blindly, learn this for your better understanding in the future! This link about recursion trees is a good read too.
Usually you can guess the answer and use induction to prove it.
But there is a theorem that handles a lot of situations, such as heapsort, called the Master Theorem:
http://en.wikipedia.org/wiki/Master_theorem
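For heapsort, the recurrence usually quoted for the sift-down (heapify) step is T(n) = T(2n/3) + O(1) — note 2n/3, not 3n/2, which is probably what the question meant. Plugging that into the Master Theorem:

    T(n) = T(2n/3) + O(1)
    a = 1, b = 3/2, f(n) = Theta(1) = Theta(n^0), and log base 3/2 of 1 = 0
    => case 2 applies, so T(n) = Theta(n^0 * log n) = Theta(log n)

That Theta(log n) per sift-down, times n elements, gives the familiar O(n log n) for the full heapsort.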
