Order functions according to order of growth? - time

So, I have a problem in which I'm supposed to put the following functions in order according to their order of growth:
2n log n
0.000001n^3
1983n^2 + n
n + 456
3^n + 5n
and I don't know how to go about doing this. I'm supposed to organize them in a fashion like 2n log n <= n + 456 <= etc.
I'm not looking for the answer; I just need some advice on how exactly to do this. I know the order-of-growth table, but it doesn't help me here.

If you know the order of growth table, that's mostly it!
The other big thing to know is that coefficients and lower order terms don't matter. For example,
3n^2 ~ n^2 > 1000n ~ n
If that's difficult to understand, imagine what happens as n becomes extremely large. n^2 grows quickly as n increases, while 1000n only increases by 1000 each time n increases by 1. By the time n = 1000, n^2 = 1000n, and as n increases even further, n^2 overtakes 1000n.
So, for any function like n^2 + 1000n, split it into the terms that are added together (multiplication is different: n*log(n) is not the same as n or log(n)). You can ignore everything but the largest term, in this case n^2. Once you do this and strip away the coefficients, just use the order-of-growth table to get your solution!
https://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions
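To make this concrete, here is a minimal Python sketch (the helper trend is invented for illustration; this is a numerical sanity check, not a proof): watch the ratio f(n)/g(n) as n grows. If it levels off at a constant, f and g are in the same growth class; if it heads to 0 or infinity, one strictly dominates the other.

import math

# Heuristic check of growth classes via the ratio f(n)/g(n).
def trend(f, g, ns=(10**2, 10**4, 10**6, 10**8)):
    return [f(n) / g(n) for n in ns]

# Same class: 1983n^2 + n vs n^2 -> the ratio settles near 1983,
# so the coefficient and the lower-order term n don't matter.
print(trend(lambda n: 1983 * n**2 + n, lambda n: n**2))

# Different classes: 2n log n vs n^2 -> the ratio heads to 0,
# so 2n log n grows strictly more slowly than n^2.
print(trend(lambda n: 2 * n * math.log(n), lambda n: n**2))

# Tiny coefficient, bigger exponent: 0.000001n^3 vs 1983n^2 -> the
# ratio still climbs toward infinity, just later than you might expect.
print(trend(lambda n: 0.000001 * n**3, lambda n: 1983 * n**2))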

Related

Time complexity of nested loop with two different parameters

How do I work out the time complexity of this algorithm, both O() and Ω()?
This nested loop is different from the common nested-loop analysis, because m and n are independent, with |A| = n ≥ |B| = m.
What I am thinking is that each iteration runs in O(1) time.
For the number of iterations (lines 3 and 4), it should be
m + (m − 1) + ... + 1 + (n - m) = O(m^2 + n)
Therefore, the running time of this algorithm should be O(m^2 + n), and likewise for Ω.
However, the solution tells me it should be O(mn) and Ω(mn). I cannot figure out how to get that answer.
A quite terse explanation from Can Berk Güder:
O denotes an upper bound, but this bound might or might not be tight.
Ω denotes a lower bound, but this bound might or might not be tight.
In your case, we don't consider the specific number of iterations each loop may perform; purely mathematically, it's O(mn) and Ω(mn), since there is an outer loop bounded by n and an inner loop bounded by m.
And a more straightforward definition from the wiki perhaps sheds more light on your puzzle:
Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size).
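The question's code isn't reproduced here, so the following is a hypothetical reconstruction (the sets A and B and the helper count_iterations are assumptions, not the original algorithm): an outer loop over A with |A| = n and an inner loop over B with |B| = m, doing O(1) work per inner iteration.

def count_iterations(n, m):
    A = range(n)          # |A| = n
    B = range(m)          # |B| = m, with n >= m
    count = 0
    for a in A:           # executes n times
        for b in B:       # executes m times for each a
            count += 1    # O(1) body
    return count

# The body runs exactly n * m times, which is where O(mn) and
# Omega(mn) come from when the loop bounds are independent.
print(count_iterations(1000, 10))  # 10000 == 1000 * 10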

While calculating big-O, where does the upper bound G(N) come from?

For example, I have f(N) = 5N + 3 from a program, and I want to know the big-O of this function. We say the highest-order term gives O(N).
Is it a correct method to find the big-O of any program by dropping lower-order terms and constants?
If we get O(N) simply by looking at the complexity function 5N + 3, then what is the purpose of the formula F(N) <= C * G(N)?
I got to know that this formula is just for comparing two functions. My question is:
In the formula F(N) <= C * G(N), I have F(N) = 5N + 3, but what is this upper bound G(N)? Where does it come from? Where do we take it from?
I have studied many books and many posts, but I am still confused.
Q: Is it a correct method to find the big-O of any program by dropping lower-order terms and constants?
Yes, most people with at least some experience examining time complexities use this method.
Q: If we get O(N) simply by looking at the complexity function 5N + 3, then what is the purpose of the formula F(N) <= C * G(N)?
To formally prove that you estimated the big-O of a certain algorithm correctly. Imagine that you have F(N) = 5N^2 + 10 and (incorrectly) conclude that the big-O complexity of this example is O(N). Using the formula you can quickly see that this is not true, because there is no constant C such that 5N^2 + 10 <= C * N holds for large values of N: the inequality would imply C >= 5N + 10/N, and no matter how large a constant C you choose, there is always an N larger than it, so the inequality cannot hold.
Q: In the formula F(N) <= C * G(N), I have F(N) = 5N + 3, but what is this upper bound G(N)? Where does it come from? Where do we take it from?
It comes from examining F(N), specifically from finding its highest-order term. You need some math knowledge to estimate which function grows faster than another; there are several classes of complexity: constant, logarithmic, polynomial, exponential, and so on. In most cases, though, it is easy to find the highest-order term of a function, and if you are not sure, you can always plot the functions or formally prove that one grows faster than the other. For example, if F(N) = log(N^3) + sqrt(N), it may not be clear at first glance which is the highest-order term, but if you calculate or plot log(N^3) and sqrt(N) for N = 1, 10, and 1000, it is immediately clear that sqrt(N) grows faster, so the big-O for this function is O(sqrt(N)).
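As a concrete illustration of the formula (a minimal sketch; the witness constants C = 6 and N0 = 3 are chosen here for illustration, not taken from the question), you can check that F(N) = 5N + 3 satisfies F(N) <= C * G(N) with G(N) = N:

# 5N + 3 <= 6N holds exactly when N >= 3, so C = 6 and N0 = 3
# witness that 5N + 3 is O(N).
F = lambda N: 5 * N + 3
G = lambda N: N
C, N0 = 6, 3

# Spot-check the inequality F(N) <= C * G(N) for a range of N >= N0.
assert all(F(N) <= C * G(N) for N in range(N0, 10**6))
print("5N + 3 <= C*N holds for all checked N >= N0, so 5N + 3 is O(N)")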

Omitting the slower-growing term from big-O notation

I am currently learning about big-O notation, but there is a concept that confuses me. For 8N^2 + 4N + 3 the complexity class is N^2, because that is the fastest-growing term, and for 5N the complexity class is N.
Is it then correct to say that the complexity class of N log N is N, since N grows faster than log N?
The problem I'm trying to solve is this: configuration A consists of a fast algorithm that takes 5N log N operations to sort a list, on a computer that runs 10^6 operations per second, and configuration B consists of a slow algorithm that takes N^2 operations to sort a list, on a computer that runs 10^9 operations per second. For smaller arrays configuration A is faster, but for larger arrays configuration B is better. For what size of array does this transition occur?
What I thought was that if I equated the expressions for the time each configuration takes, I could get an N for the transition point. However, that yields the equation N^2/10^9 = 5N log N/10^6, which simplifies to N/5000 = log N, which has no closed-form solution.
Thank you
In mathematics, the definition of f = O(g), for two real-valued functions defined on the reals, is that f(n)/g(n) is bounded as n approaches infinity. In other words, there exists a constant A such that, for all sufficiently large n, f(n)/g(n) < A.
In your first example, (8n^2 + 4n + 3)/n^2 = 8 + 4/n + 3/n^2, which is bounded as n approaches infinity (by 15, for example), so 8n^2 + 4n + 3 is O(n^2). On the other hand, n log(n)/n = log(n), which approaches infinity as n approaches infinity, so n log(n) is not O(n). It is, however, O(n^2), because n log(n)/n^2 = log(n)/n, which is bounded (it approaches zero near infinity).
As for your actual problem, remember that if you can't solve an equation symbolically, you can always solve it numerically. The existence of a solution is clear.
Let's suppose that the base of your logarithm is b, so we are to compare
5N * log(b, N)
with
N^2
5N * log(b, N) = log(b, N^(5N))
N^2 = N^2 * log(b, b) = log(b, b^(N^2))
So we compare
N ^ (5N) with b^(N^2)
Let's analyze the ratio N^(5N) / b^(N^2) relative to 1. You will observe that beyond a certain point it is smaller than 1.
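To see that numerically, compare the logarithms of the two sides instead of the astronomically large values themselves (a minimal sketch; the base b = 2 is an assumption):

import math

b = 2  # assumed base
# log of the ratio N^(5N) / b^(N^2): positive means the ratio is > 1.
for N in (10, 20, 30, 50):
    print(N, 5 * N * math.log(N) - N**2 * math.log(b))
# The sign flips from + to - once N is large enough (around N = 23
# for b = 2), so b^(N^2) eventually dominates N^(5N).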
Q: Is it correct to say that the complexity class of N log N is N?
A: No. Here is why we can ignore smaller terms:
Consider N^2 + 1000000N.
For small values of N, the second term is the only one that matters, but as N grows, that ceases to be true. Consider the ratio 1000000N / N^2, which shows the relative size of the two terms. It reduces to 1000000/N, which approaches zero as N approaches infinity. The second term therefore has less and less importance as N grows, literally approaching zero.
It is not just "smaller"; it is irrelevant for sufficiently large N.
That is not true for multiplicands: n log n is always significantly bigger than n, by a margin that continues to increase.
Then is it correct to say that the complexity class of N log N is N, since N grows faster than log N?
Nope, because N and log(N) are multiplied together and log(N) isn't constant.
N/5000 = log N
Roughly 55,000 (taking the natural logarithm).
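A quick numerical check of that figure (a minimal sketch; it assumes the natural logarithm, which is what makes the estimate come out near 55,000):

import math

# Solve N / 5000 = ln(N) by bisection. On [10**4, 10**6] the function
# N/5000 - ln(N) goes from negative to positive, so the crossover we
# care about lies inside this bracket.
f = lambda N: N / 5000 - math.log(N)
lo, hi = 10**4, 10**6
assert f(lo) < 0 < f(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print(round(lo))  # about 54,500 -- i.e. roughly 55,000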
Then is it correct to say that the complexity class of N log N is N, since N grows faster than log N?
No. When you omit, you should omit a TERM, and N lg N is, as a whole, a single term. By your suggestion, N*N*N = (N^2)*N, and since N^2 has the bigger growth rate we would omit N and conclude the order is N^2, which is completely WRONG: the order is N^3, not N^2. N lg N works in the same manner. You only omit a term when it is added or subtracted.
For example, N lg N + N = O(N lg N), because N lg N has faster growth than N.
The problem I'm trying to solve is this: configuration A consists of a fast algorithm that takes 5N log N operations to sort a list, on a computer that runs 10^6 operations per second, and configuration B consists of a slow algorithm that takes N^2 operations to sort a list, on a computer that runs 10^9 operations per second. For smaller arrays configuration A is faster, but for larger arrays configuration B is better. For what size of array does this transition occur?
This CANNOT be true; it is the absolute OPPOSITE. For small values of N, the faster computer running the N^2 algorithm is better; for very large N, the slower computer running the N lg N algorithm is better.
Where is the crossover point? The second computer is 1000 times faster than the first, so they are equal in speed when N^2 = 1000 N lg N, which solves numerically to N ≈ 13,750. For N < 13,750, the N^2 configuration goes faster (since its computer is 1000 times faster), but for N > 13,750 the slower computer is much faster. Now imagine N = 1,000,000: the faster computer needs 50 times longer than the slower one, because N^2 = 50,000 N lg N while the machine is only 1000 times faster.
Note: the calculations omit constant factors, as big-O does, and the logarithm used is base 2. In algorithm complexity analysis we usually write lg N rather than log N, where lg N is log N to the base 2 and log N is log N to the base 10.
However, referring to CLRS (a good book, I recommend reading it), big-O is defined as follows: f(n) = O(g(n)) if there exist positive constants c and n0 such that 0 <= f(n) <= c * g(n) for all n >= n0. (The book illustrates this with a graph of f(n) staying below c * g(n) to the right of n0.)
It is all about n > n0. All the rules of big-O notation are valid FOR LARGE VALUES OF N; for small N they are not necessarily correct. I mean, for N = 5 it is not necessarily true that big-O gives a close approximation of the running time.
I hope this answers the question.
Reference: [CLRS] Introduction to Algorithms, 3rd edition, Chapter 3, Section 1.
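To sanity-check the N = 1,000,000 figure (a minimal sketch; the factor 5 is dropped to match the answer's constant-free calculation):

import math

ops_fast, ops_slow = 10**9, 10**6  # operations per second

def time_n2(N):    # N^2 algorithm on the fast computer
    return N**2 / ops_fast

def time_nlgn(N):  # N lg N algorithm on the slow computer
    return N * math.log2(N) / ops_slow

N = 1_000_000
print(time_n2(N) / time_nlgn(N))  # about 50: the "fast" machine loses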

Run time and Big Theta Analysis

So, I am going over big-Θ analysis and run-time analysis.
I have the code snippet:
sum = 0
for i in range(0, n):
    for j in range(0, i**2):
        if j % 2 == 0:
            for k in range(0, j):
                sum += 1
I am claiming that this has a run time of n^3 * log(n^2). The reason is that the first loop gives us n, and inside it the second loop gives us n^2, so we have n^3; where I am unsure is the log(n^2) part. I know that we are looking for even values, which gives us about half of them, so maybe it should be n^2/2, but I am a bit unsure.
The second part is finding a g(n) such that f(n) is Θ(g(n)). So if I have a run time of n^3, I know that g(n) would be n^3 as well, since it is in both O(g(n)) and Ω(g(n)). I just want to make sure I understand that correctly.
Your suspicion is correct: the log(n^2) part is actually about n^2/2, which makes the total Θ(n^5) rather than n^3 * log(n^2). You typically see log(n) in cases like binary search, where the size of the input is divided on each iteration; nothing here divides the input.
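A quick empirical check of the Θ(n^5) claim (a minimal sketch; count_ops is a name introduced here): count how often the innermost statement runs and watch how the count scales when n doubles.

def count_ops(n):
    total = 0
    for i in range(0, n):
        for j in range(0, i**2):
            if j % 2 == 0:
                for k in range(0, j):
                    total += 1
    return total

# If the running time is Theta(n^5), doubling n should multiply the
# count by roughly 2^5 = 32.
for n in (10, 20, 40):
    print(n, count_ops(n))
# The ratio between successive counts approaches 32, not 2^3 = 8.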

Handling (1/2)^n when determining big-O runtime?

I have to find the big-O notation of the following expression:
2n + n(log n)^10 + (1/2)^n
If I drop coefficients and lower-order terms, I get n(log n)^10, but if I treat the 1/2 in (1/2)^n as a coefficient to ignore, I completely lose the last term, and that doesn't seem right.
How should I handle the (1/2)^n term?
For large n, (1/2)^n approaches 0 and becomes negligible. Also, 2n eventually becomes negligible compared to n(log n)^10, since the latter grows faster.
Comparing n(log n)^10 to 2n is equivalent to comparing (log n)^10 to 2 (since both contain a factor of n). Clearly, (log n)^10 surpasses 2 for large enough n; in fact, all it takes is n = 3. As n grows further, the gap between the two terms widens, and the significance of the 2n term becomes smaller and smaller.
Therefore, the big-O expression we're left with is
O(n(log n)^10)
Think about what happens to (1/2)^n as n gets large. This term gets smaller and smaller and eventually becomes completely negligible. (In fact, if you pick n = 30, it's smaller than 1/1,000,000,000.) One useful observation is that (1/2)^n is never greater than 1/2 for n >= 1, so you could note that
2n + n(log n)^10 + (1/2)^n <= 2n + n(log n)^10 + 1/2
From there, you can see that this is O(n(log n)^10), since the n(log n)^10 term grows faster than the 2n term.
Normally, though, you have to be careful with exponentials: anything of the form a^n for a > 1 grows faster than any polynomial, so normally you would drop the polynomial and keep the exponential. Here, because the base is less than 1, you do the opposite.
Hope this helps!
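A small numeric sketch of why the (1/2)^n term vanishes while n(log n)^10 dominates (the natural logarithm is assumed here):

import math

for n in (10, 100, 1000, 10**6):
    linear = 2 * n
    poly_log = n * math.log(n) ** 10
    decaying = 0.5 ** n  # underflows to 0.0 long before n = 10**6
    print(n, linear, poly_log, decaying)
# The (1/2)^n column collapses toward 0, and n(log n)^10 races ahead
# of 2n, so the sum is O(n (log n)^10).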
