Master theorem with f(n) = n lg(n)

I am working on Problem 4-3 from Introduction to Algorithms, 3rd Edition, where I am asked to find asymptotic upper and lower bounds for T(n):
T(n) = 4T(n/3) + n lg(n)
I have browsed online for the solution and the solution says:
By the master theorem, we get T(n) ∈ Θ(n^(log_3 4))
I believe that the solution assumes that n^(log_3 4) is asymptotically larger than n lg(n)? But why is this true? I would be grateful if someone could help me understand!

In layman's terms:
We need to compare the growth of n*log(n) with n^1.25 (log_3(4) ≈ 1.26, so n^1.25 is a slight underestimate).
Divide both functions by n:
log(n) vs n^(1/4)
Both are increasing, so compare their derivatives (up to constant factors):
n^(-1) vs n^(-3/4)
For large n the second derivative is clearly larger, so the second function eventually grows faster.
The plots of the two functions intersect, and past that point the power function stays larger: n^ε eventually dominates log(n) for any ε > 0, so n^p dominates n*log(n) for any p > 1.
This is exactly the condition for case 1 of the master theorem: n lg(n) = O(n^(log_3 4 - ε)) for some ε > 0, hence T(n) = Θ(n^(log_3 4)).
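If you want to convince yourself numerically, here is a minimal Python sketch (a sanity check, not a proof; the sample values of n are arbitrary). Note that n*lg(n) is actually larger for small n and only gets overtaken once n is fairly large, which is exactly the intersection described above:

    # Compare n*lg(n) with n^(log_3 4) for growing n.
    import math

    for n in [10, 10**3, 10**6, 10**9, 10**12]:
        n_lg_n = n * math.log2(n)
        n_pow = n ** math.log(4, 3)   # exponent log_3(4) ~ 1.26
        print(f"n = {n:>14}: n*lg(n) = {n_lg_n:.3e}   n^(log_3 4) = {n_pow:.3e}")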

Related

Big-O notation, understanding

I saw in one of the videos (https://www.youtube.com/watch?v=A03oI0znAoc&t=470s) that if f(n) = 2n + 3, then its big-O is O(n).
Now my question is: if I am a developer and I am given O(n) as the upper bound of f(n), how do I know what the exact value of the upper bound is? Because in 2n + 3 we drop the 2 (as it is a constant factor) and the 3 (because it is also a constant). So if I look at my function at n = 1, I can't say g(n) = n is an upper bound there:
g(1) = 1 cannot be an upper bound for f(1) = 5. I find this hard to understand.
I know this is a partial (and probably wrong) answer.
From Wikipedia,
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
In your example,
f(n) = 2n + 3 has the same growth rate as g(n) = n.
If you plot the two functions, you will see that both grow linearly; as n -> infinity, the ratio between them stays bounded by a constant, which is all that big-O cares about.
In big-O notation, the value of f(n) = 2n + 3 at n = 1 means nothing; you need to look at the trend, not at discrete values.
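To make the "trend, not discrete values" point concrete, here is a minimal Python sketch of what f(n) = O(n) formally means; the constants c = 3 and n0 = 3 are just one hypothetical choice that happens to work:

    # f(n) = 2n + 3 is O(n) because constants c and n0 exist with
    # f(n) <= c*n for every n >= n0.  The bound is allowed to fail for
    # small n (it does at n = 1 and n = 2), which is exactly why looking
    # only at n = 1 tells you nothing.
    def f(n):
        return 2 * n + 3

    c, n0 = 3, 3                      # hypothetical witnesses for the definition
    assert all(f(n) <= c * n for n in range(n0, 10_000))
    print("2n + 3 <= 3*n holds for every tested n >= 3")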
As a developer, you will treat big-O as a first indication when deciding which algorithm to use. If you have an algorithm which is, say, O(n^2), you will try to find out whether there is another one which is, say, O(n). If the problem is inherently O(n^2), then the big-O notation will not provide further help and you will need other criteria for your decision. However, if the problem is not inherently O(n^2) but O(n), you should discard any algorithm that happens to be O(n^2) and find an O(n) one.
So the big-O notation helps you classify the problem and then look for an algorithm whose complexity has the same big-O. If you are lucky enough to find two or more algorithms with this complexity, you will need to compare them using different criteria.

When calculating big-O, where does the upper bound g(n) come from?

For example, I have f(N) = 5N + 3 from a program, and I want to know the big-O of this function. We take the highest-order term and say it is O(N).
Is it correct to find the big-O of any program simply by dropping lower-order terms and constants?
If we get O(N) just by looking at the complexity function 5N + 3, then what is the purpose of the formula F(N) <= C * G(N)?
I got to know that this formula is just for comparing two functions. My question is:
In the formula F(N) <= C * G(N), I have F(N) = 5N + 3, but what is this upper bound G(N)? Where does it come from? Where do we take it from?
I have studied many books and many posts, but I am still confused.
Q: Is it correct to find the big-O of any program simply by dropping lower-order terms and constants?
Yes, most people who have at least some experience with examining time complexities use this method.
Q: If we get O(N) just by looking at the complexity function 5N + 3, then what is the purpose of the formula F(N) <= C * G(N)?
To formally prove that you correctly estimated the big-O of a certain algorithm. Imagine that you have F(N) = 5N^2 + 10 and (incorrectly) conclude that the big-O complexity for this example is O(N). Using the formula you can quickly see that this is not true, because there is no constant C such that 5N^2 + 10 <= C * N holds for all large N: the inequality would imply C >= 5N + 10/N, and no matter how large a constant C you choose, there is always an N larger than it, so the inequality cannot hold.
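A quick way to see the "no such constant exists" argument numerically (just an illustration, not a proof; the sample values of N are arbitrary) is to print the smallest C that would be needed at each N:

    # For F(N) = 5N^2 + 10, the inequality F(N) <= C*N forces
    # C >= 5N + 10/N, and that requirement keeps growing with N,
    # so no single constant C can work for all large N.
    for n in [10, 100, 1_000, 1_000_000]:
        needed_c = 5 * n + 10 / n
        print(f"N = {n:>9}: would need C >= {needed_c:,.2f}")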
Q: In the formula F(N) <= C * G(N), I have F(N) = 5N + 3, but what is this upper bound G(N)? Where does it come from? Where do we take it from?
It comes from examining F(N), specifically from finding its highest-order term. You need some math background to estimate which function grows faster than another; there are several classes of complexities - constant, logarithmic, polynomial, exponential, and so on. In most cases it is easy to spot the highest-order term of a function. If you are not sure, you can always plot the functions or formally prove that one grows faster than the other. For example, if F(N) = log(N^3) + sqrt(N), it may not be clear at first glance which is the highest-order term, but if you calculate or plot log(N^3) and sqrt(N) for N = 1, 10 and 1000, it quickly becomes clear that sqrt(N) grows faster, so the big-O for this function is O(sqrt(N)).
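If you prefer computing to plotting, a short Python sketch like the one below (the sample values of N are arbitrary, and the logarithm base does not affect the conclusion) makes the dominance of sqrt(N) obvious:

    # Tabulate the two terms of F(N) = log(N^3) + sqrt(N) side by side.
    import math

    for n in [10, 1_000, 1_000_000, 10**9]:
        print(f"N = {n:>10}: log(N^3) = {math.log(n**3):8.1f}   sqrt(N) = {math.sqrt(n):10.1f}")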

How to estimate the time complexity for insertion sort by measuring the gradient of a log-log plot?

This is the graph I am expected to analyze. I have to find the gradient (slope) and from that deduce the time complexity.
I have found that the slope is equal to 1.91. If that is correct, what should I do next?
The quotient of the logarithms is approximately 2. What does that mean once we remove the logarithms?
log(T(n)) / log(n) = 2
log(T(n)) = 2 * log(n)
log(T(n)) = log(n²)
T(n) = n²
T(n) denotes the algorithm's time complexity. Of course, we are talking in asymptotic terms, i.e. using big-O notation we say that
T(n) ∈ O(n²).
You measured the value 2 for large inputs and you are assuming it will remain the same even for all bigger ones.
You can read more at a page by one of the tutors at the University of Toronto. It uses basic calculus to explain how it works. Still, the idea behind all this is that logarithms turn constant exponents into multiplicative constants and multiplicative constants into additive ones: log(c * n^k) = k * log(n) + log(c).
Also regarding interpretation of the plot, a similar question popped up here on Stack Overflow recently: Log-log plot/graph of algorithm time complexity
But note that this is really just an estimation of time complexity. You cannot prove time complexity of an algorithm by just running it on a finite set of inputs. This method can give you a good guess on what to try to prove using analysis of the algorithm, though.
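For completeness, here is a rough Python sketch of how you could produce that slope estimate yourself instead of reading it off the graph. It assumes random integer inputs, wall-clock timing and NumPy being available; the input sizes are arbitrary and the estimate will fluctuate a bit from run to run:

    import random
    import time

    import numpy as np


    def insertion_sort(a):
        """Plain insertion sort, sorting the list in place."""
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key


    sizes = [500, 1_000, 2_000, 4_000, 8_000]
    times = []
    for n in sizes:
        data = [random.randint(0, n) for _ in range(n)]
        start = time.perf_counter()
        insertion_sort(data)
        times.append(time.perf_counter() - start)

    # Fit a straight line to (log n, log T(n)); the slope estimates the exponent.
    slope, intercept = np.polyfit(np.log(sizes), np.log(times), 1)
    print(f"estimated exponent: {slope:.2f}")   # expect something close to 2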

Asymptotic Notations and forming Recurrence relations by analysing the algorithms

I went through many lectures, videos and other sources on asymptotic notations. I understand what O, Omega and Theta are. But in algorithm analysis, why do we almost always use big-O notation and not Theta or Omega (I know it sounds noobish, but please help me with this)? What exactly are these upper and lower bounds with respect to algorithms?
My next question is: how do we find the complexity of an algorithm? Say I have an algorithm - how do I find the recurrence relation T(N) and then compute the complexity from it? How do I form these equations? For example, in the case of linear search done recursively, T(n) = T(n-1) + 1. How?
It would be great if someone could explain this to me as if I were a complete beginner, so that I can understand it even better. I found some answers on Stack Overflow, but they were not convincing enough.
Thank you.
Why we use big-O so much compared to Theta and Omega: This is partly cultural, rather than technical. It is extremely common for people to say big-O when Theta would really be more appropriate. Omega doesn't get used much in practice both because we frequently are more concerned about upper bounds than lower bounds, and also because non-trivial lower bounds are often much more difficult to prove. (Trivial lower bounds are usually the kind that say "You have to look at all of the input, so the running time is at least equal to the size of the input.")
Of course, these comments about lower bounds also partly explain Theta, since Theta involves both an upper bound and a lower bound.
Coming up with a recurrence relation: There's no simple recipe that addresses all cases. Here's a description for relatively simple recursive algorithms.
Let N be the size of the initial input. Suppose there are R recursive calls in your recursive function. (Example: for mergesort, R would be 2.) Further suppose that all the recursive calls reduce the size of the initial input by the same amount, from N to M. (Example: for mergesort, M would be N/2.) And, finally, suppose that the recursive function does W work outside of the recursive calls. (Example: for mergesort, W would be N for the merge.)
Then the recurrence relation would be T(N) = R*T(M) + W. (Example: for mergesort, this would be T(N) = 2*T(N/2) + N.)
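To tie the recipe to real code, here is a sketch of mergesort in Python with the quantities from the recipe marked in comments (R = 2 recursive calls, each on input of size M = N/2, plus W = Theta(N) work in the merge), which is exactly what produces T(N) = 2*T(N/2) + N:

    def merge_sort(a):
        if len(a) <= 1:                      # base case: T(1) = O(1)
            return a
        mid = len(a) // 2
        left = merge_sort(a[:mid])           # first recursive call, size ~N/2
        right = merge_sort(a[mid:])          # second recursive call, size ~N/2
        return merge(left, right)            # W = Theta(N) work outside the calls


    def merge(left, right):
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged


    print(merge_sort([5, 2, 9, 1, 7]))       # [1, 2, 5, 7, 9]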
When we design an algorithm, we usually want it to be as fast as possible, and we need to consider every case. This is why we use O: we want an upper bound on the complexity, so we can be sure the algorithm will never exceed it.
To assess the complexity, you have to count the number of steps. In the recurrence T(n) = T(n-1) + 1, there will be n steps before we reach T(0), so the complexity is linear. (I'm talking about time complexity, not space complexity.)
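For the linear-search recurrence mentioned in the question, a recursive version might look like the sketch below (the function name and the index argument are just illustrative choices): each call does constant work and then hands off a problem that is one element smaller, which is where T(n) = T(n-1) + 1 comes from.

    def linear_search(a, target, i=0):
        if i == len(a):          # T(0): nothing left to search
            return -1
        if a[i] == target:       # constant work per call (the "+ 1")
            return i
        return linear_search(a, target, i + 1)   # recurse on n-1 elements


    print(linear_search([4, 8, 15, 16, 23, 42], 16))   # 3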

Ordering a list of complexities (Big O)

Given a list of complexities:
How do you order them by their big-O growth rates?
I think the answer is below?
The question now is: how does log(n!) become n log(n)? Also, I don't know if I placed n! and (n-1)! correctly. Is it possible for c^n to be bigger than n!, for example when c > n?
In general, how do I visualize such big-O problems? It took me quite a long time to do this compared to coding. Any resources - videos, MIT OpenCourseWare material, something with explanations?
You might want to see how the functions grow. A quick plot from Wolfram Alpha is a good starting point.
In general, n^n grows much faster than c^n for all n greater than some n_0 (because n will overtake c at some point, even if c is extremely large). log grows much slower than quadratic or exponential functions, and n log(n) is only slightly faster than linear.
For O(log(n!)) = O(n log n), the relevant tool is Stirling's approximation. It boils down to seeing that n! <= n^n, since n! = n*(n-1)*(n-2)*...*2*1 while n^n = n*n*n*...*n, so n^n is an upper bound. A matching lower bound (up to constant factors in the logarithm, e.g. log(n!) >= (n/2)*log(n/2)) can be proven as well, but you don't need it for the big-O direction.
Since log(n^n) = n log n by the log rules, O(log(n!)) = O(log(n^n)) = O(n log n).
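If you want to see both claims numerically, here is a short Python sketch (only an illustration: math.lgamma(n + 1) is used to get log(n!) without overflow, c = 1000 is an arbitrary large constant, and the sample sizes are arbitrary):

    import math

    # Stirling in action: log(n!) / (n*log n) creeps toward 1 as n grows.
    for n in [10, 100, 1_000, 10_000]:
        ratio = math.lgamma(n + 1) / (n * math.log(n))
        print(f"n = {n:>6}: log(n!)/(n*log n) = {ratio:.3f}")

    # n! versus c^n for a big constant c: the difference of logs is negative
    # at first (c^n is larger) but turns positive once n is big enough,
    # i.e. n! eventually overtakes c^n.
    c = 1_000.0
    for n in [100, 1_000, 10_000]:
        diff = math.lgamma(n + 1) - n * math.log(c)
        print(f"n = {n:>6}: log(n!) - log(c^n) = {diff:,.1f}")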

Resources