Substitution method in complexity theory - algorithm

Please advise if this question should be moved to the maths forum.
I'm quite confused about how we simplify complexity-theory equations.
For example, suppose we have this small Fibonacci algorithm:
And we're given the following information:
What I struggle to understand is how the formula T(n) is expanded and simplified, especially this:
What am I really missing here?
Thanks
Edit
This was taken from this book, on page 775.

Let me rephrase the claim:
There exist some constants a and b such that T(n) < aF_n - b
Now start the proof with
Choose b large enough to dominate the constant term.
Now the last inequality should be clear.
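To make the substitution step concrete, here is a sketch of the inductive step, assuming (as is standard for this exercise) that the recurrence is T(n) = T(n-1) + T(n-2) + Θ(1), with the Θ(1) cost written as a constant c, and using the Fibonacci identity F_n = F_{n-1} + F_{n-2}:

```latex
\begin{aligned}
T(n) &= T(n-1) + T(n-2) + c \\
     &\le \bigl(aF_{n-1} - b\bigr) + \bigl(aF_{n-2} - b\bigr) + c
        && \text{(inductive hypothesis)} \\
     &= a\bigl(F_{n-1} + F_{n-2}\bigr) - 2b + c \\
     &= aF_n - b - (b - c)
        && \text{(since } F_n = F_{n-1} + F_{n-2}\text{)} \\
     &\le aF_n - b
        && \text{(choosing } b \ge c\text{).}
\end{aligned}
```

Picking b ≥ c is exactly the "choose b large enough to dominate the constant term" step: the leftover -(b - c) is then nonpositive and can be dropped.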

Related

What is the growth of this program? ChatGPT is wrong, right?

This is definitely a stupid, simple question, but for some reason it is bugging me. It is not a homework question, insofar as I am not solving a homework question for my own work: I am a mathematician who is helping a friend with some of the math homework for his CS degree. I am pretty certain I have the correct solution, but ChatGPT gives a different solution, and my friend wants to follow ChatGPT's solution when I'm pretty sure it is wrong.
The pseudo code is simple
l=0
while n >= 1 do
for j=1, n, do
l = l+2
od
n = n/3
od
ChatGPT has said that this is O(n^2 log(n)), and that just doesn't make sense to me. Its explanation is also kind of nonsense. But my friend seems to be leaning toward ChatGPT, which I think is wrong. I believe the correct bound is O(n log(n)), and I've supplied the following reasoning.
The operation:
while n >= 1 do
n = n/3
od
grows like O(log(n)). Adding a for loop, as the one I used, makes the cost look like:
Sum_{j=1}^{log(n)} O(n/3^j)
This entire sum of operations just becomes O(n log(n))...
I'm pretty confident ChatGPT is wrong in this scenario, but I'd appreciate someone more well versed in this kind of stuff. I don't translate code to O notation at all in my work. If I'm wrong that's fine, but I'd appreciate a well thought out answer. For god's sakes, this program has me checking my sanity!
Update
I realize I may have spoken too soon to say whether O(n * log n) is the right answer. It's certainly a lot closer to a right answer than O(n² * log n).
Original answer
You are right. ChatGPT is wrong. Your logic is sound. You've got an O(n) operation nested inside an O(log n) operation, which comes out to O(n * log n).
This should not come as a surprise: ChatGPT is designed to write prose, and has a consistent track record of making a very convincing case for "facts" and solutions that are simply wrong. Your friend should get out of the habit of relying on ChatGPT for their homework answers for this reason, even if they're ignoring the ethics of the matter.
This question was answered by @moreON, but I will present the answer properly, as I see it. Thank you @StriplingWarrior, you definitely quenched my fears that I was so far off. I knew O(n log(n)) was at least an upper bound, but the correct answer is O(n), which you identified before me, after @moreON's comments.
The program f which takes n and gives l is given as:
l = 0
while n >=1 do
for k=1, k <= n, do
l = l+2
od
n = n/3
od
The time it takes to run this program, if we treat each single operation as O(1), looks exactly like the sum:
sum_{i=0}^{log_3(n)} n/3^i
when we consider the growth in n. This comes about from the while loop running about log_3(n) times: on the i-th pass, n has already been divided by 3 i times, so the inner for loop runs about n/3^i times. We can then bound the growth in CPU time to evaluate f on n by comparing against the full geometric series:
sum_{i=0}^{log_3(n)} n/3^i <= n * sum_{i=0}^{infinity} 1/3^i = (3/2) n = O(n)
As a mathematician I should've seen this. But even if I had, I'd doubt that it's that easy, or as simple as that: just a bound on a relatively simple geometric sum. I thought at best we could rough-bound it with O(n log n), but we can prove that the solution to our problem is:
f(n) = O(n)
I want to thank @moreON and @StriplingWarrior. You guys were right... I apologize for being a jackass.
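As an empirical sanity check, here is a small Python sketch (my own translation of the pseudocode, assuming integer division for n = n/3) that counts how many times the inner statement executes. The count stays within a small constant multiple of n, consistent with O(n):

```python
def count_inner_iterations(n):
    """Count executions of `l = l + 2` for the given n.

    Translates the pseudocode: the inner for-loop runs n times,
    then n is divided by 3, until n drops below 1.
    """
    count = 0
    while n >= 1:
        count += n      # inner loop: n iterations on this pass
        n //= 3         # integer division (an assumption of this sketch)
    return count

# For n = 3^k the total is 3^k + 3^(k-1) + ... + 1 = (3^(k+1) - 1) / 2,
# i.e. just under 1.5 * n.
```

For example, count_inner_iterations(27) returns 40 = (3^4 - 1)/2, well under 2 * 27, and the ratio to n never exceeds 3/2 (plus a rounding term).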

Problem in finding the big-O ordering from lower complexity to higher complexity

I'm a mechanical engineering student studying data structures on Coursera. I got an assignment to order the time complexities from lower to higher, given below in the image. My answer was marked wrong and I'd like to know where I made the mistake. The problem statement is attached here. I thought it was correct, but I'm a beginner at this, so please help me. Thank you in advance.
As the growth of sqrt(n) is faster than log(n), 3 should come first and then 7. To see this, suppose n = 2^(2k); then log(n) = 2k while sqrt(n) = 2^k. Note that you have done well on the other cases.
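A quick numerical check (a Python sketch of my own, not part of the assignment) makes the gap visible at the suggested points n = 2^(2k):

```python
import math

# Compare log2(n) against sqrt(n) at n = 2^(2k): log2(n) = 2k, sqrt(n) = 2^k.
for k in [3, 5, 10]:
    n = 2 ** (2 * k)
    print(f"n = {n:>8}: log2(n) = {math.log2(n):>4.0f}, sqrt(n) = {math.sqrt(n):>6.0f}")
```

Already at k = 5 (n = 1024), sqrt(n) = 32 while log2(n) = 10, and the gap only widens as n grows.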

Can Someone Verify My Answers on Asymptotic Analysis?

This is for a data structures and algorithms course. I am confident in all of them but part d, and I am not sure how to approach e. I know part e is the partial sum of the harmonic series, and our professor told us it is bounded between ln(n) + 1/n and ln(n) + 1, since there is no closed-form representation for the sum of the harmonic series. But I am still not sure how to tell which function has the faster or slower growth rate in order to classify them. If someone could review my answers and help me understand part e, I would appreciate it. Thank you.
The question: https://imgur.com/a/mzi0LL9
My answers: https://imgur.com/a/yxV6pim
Any function of the form is going to dominate a series like that.
We can factor out the constant to see that a bit more easily, and such a general harmonic series is bounded above by log.
So obviously we can ignore the 200 in big-O. In lieu of a proof, since it seems one isn't required, you can think about the intuition behind it: as n grows, the summation keeps adding smaller and smaller terms, yet it keeps growing, even as 1/n becomes practically zero.
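The professor's bounds can also be checked numerically. Here is a Python sketch (my own, for intuition only) comparing H_n = 1 + 1/2 + ... + 1/n against ln(n) + 1/n and ln(n) + 1:

```python
import math

def harmonic(n):
    """Partial sum of the harmonic series: H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

for n in [10, 1000, 100000]:
    h = harmonic(n)
    lower, upper = math.log(n) + 1.0 / n, math.log(n) + 1.0
    assert lower <= h <= upper   # the bounds quoted by the professor
    print(f"n = {n:>6}: {lower:.4f} <= H_n = {h:.4f} <= {upper:.4f}")
```

Since H_n is squeezed between ln(n) + 1/n and ln(n) + 1, it grows like ln(n), i.e. Θ(log n); this is what lets you place the summation relative to the other functions.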

Why is the time complexity of the most inefficient failure function in the KMP algorithm O(n³)? - edited

Oh, sorry about my explanation. Actually, I'm learning algorithms from my textbook, and right now I'm looking at the KMP algorithm. In the textbook there are two ways to compute the failure function values. One is the most efficient one, as you said, O(n); the other is the most inefficient one, as I said above, O(n³). However, there is no code in my book for the O(n³) idea. Instead, the textbook says: "We can check all possible prefix-suffix pairs. For a prefix P[1..i] there are i-1 possible pairs, and checking each takes time proportional to its length, so the cost is (i-1) + (i-2) + ... + 1 = i*(i-1)/2. So, over all i, O(n³) is trivial."
So my question is this: I can't understand the explanation in my textbook. Can you explain it?
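Here is my reading of the textbook's brute-force idea as a Python sketch (the book gives no code, so the details are my assumption): for each prefix of the pattern, try every proper border length k and compare prefix against suffix character by character. Three nested levels of up-to-O(n) work give O(n³):

```python
def naive_failure(p):
    """Brute-force KMP failure function.

    f[i-1] = length of the longest proper prefix of p[0:i] that is
    also a suffix of p[0:i].  For each of the n prefixes (outer loop)
    we try up to i-1 border lengths (middle loop), and each slice
    comparison costs O(k) (the inner level, hidden in ==), hence
    O(n^3) in total.
    """
    n = len(p)
    f = [0] * n
    for i in range(1, n + 1):
        best = 0
        for k in range(1, i):                 # candidate border length
            if p[:k] == p[i - k:i]:           # O(k) character comparison
                best = k
        f[i - 1] = best
    return f
```

For the classic pattern "ababaca" this returns [0, 0, 1, 2, 3, 0, 1], the same table the O(n) KMP preprocessing would produce; only the running time differs.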

Solving this recurrence without the master theorem. Backtracking Algorithm

I've made a backtracking algorithm.
I've been asked to say what is the complexity of this Algo.
I know that the recurrence is T(n) = 2T(n-1) + 3*n_hat, where n_hat is the initial n, meaning it doesn't decrease at each step.
The thing is that I'm getting quite lost calculating this. I believe it's around 2^n times something, but my calculations are a bit confusing. Can you help me, please? Thanks!
Let's expand this formula repeatedly by substituting into itself:
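A sketch of that expansion (writing the 3*n_hat term as c·n̂, and assuming T(0) = O(1)):

```latex
\begin{aligned}
T(n) &= 2T(n-1) + c\hat{n} \\
     &= 2\bigl(2T(n-2) + c\hat{n}\bigr) + c\hat{n} = 4T(n-2) + 3c\hat{n} \\
     &= 8T(n-3) + 7c\hat{n} \\
     &\;\;\vdots \\
     &= 2^k\,T(n-k) + (2^k - 1)\,c\hat{n}.
\end{aligned}
```

Taking k = n gives T(n) = 2^n T(0) + (2^n - 1)c·n̂ = O(n̂·2^n); since n̂ is the initial n, the backtracking algorithm runs in O(n·2^n).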
