3nlogn - 2n is Big Omega(nlogn) - complexity-theory

I am not able to justify that 3n log n - 2n is Big Omega(n log n).
I have tried putting in different values of c and n to trade them off, but in every case with n >= 2 (n >= 2 because if n < 2 then log n becomes 0, and then c would have to be 0, which is not allowed for Big Omega since c > 0), when I plug into
f(n) >= c*g(n) with f(n) = 3n log n - 2n, as stated by my book, I end up with f(n) <= c*g(n).
I just need a guideline; I know there must be a simple answer and I am mixing things up. Any help is appreciated.

Big Omega is a lower bound, and in your case we have (since n log(n) >= n for n >= 2, so -2n >= -2n log(n))
3n log(n) - 2n >= 3n log(n) - 2n log(n) = n log(n)
and we immediately get that it's in Omega(n log(n)).
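A quick numeric sanity check of that bound (a Python sketch, not from the original posts; it just tests c = 1 over a range of n, taking log base 2 so that log2(n) >= 1 for n >= 2):

    import math

    # f(n) = 3*n*log2(n) - 2*n and g(n) = n*log2(n); check f(n) >= 1*g(n) for n >= 2.
    def f(n):
        return 3 * n * math.log2(n) - 2 * n

    def g(n):
        return n * math.log2(n)

    assert all(f(n) >= g(n) for n in range(2, 10000))
    print("f(n) >= n*log2(n) holds for 2 <= n < 10000")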

Related

In steps, how do I analyse the running time for a certain algorithm as big theta

This algorithm is giving me trouble; I cannot find any sources online about dealing with a while loop that is also affected by the outer for loop. Is there a complicated process, or can you tell just by looking at the loops that it is simply (outer loop = n, inner loop = %%%%)? Any help is appreciated, thank you.
Have you ever heard of the logarithm operator? If a, b, n are real numbers such that a^n = b, then log_a(b) = n. The inner loop tells the computer: multiply the number j by 2 some number of times (we don't know exactly how many, so let's call it x) so that afterwards j equals or exceeds n.
Mathematically, this can be written as 2n > j * 2^x ≥ n.
Solve for x: 2n/j > 2^x ≥ n/j ⇔ log2(2n/j) > x ≥ log2(n/j) ⇔ log2(n/j) + 1 > x ≥ log2(n/j)
As j increases from 1 to n, x decreases. From this point, I'll solve the problem in Big-O notation; your work is to convert it to Big-Theta notation.
Since 1 is constant, it can be omitted. So x ≈ log2(n/j), which is always less than log2(n). So we can say the running time of the inner loop is bounded above by O(log2(n)), which means the whole algorithm is bounded above by O(n * log2(n)).
Edit: For a better approximation and some corrections, please read Paul Hankin's useful comments below this answer. Thanks to him.
PS: Stirling's approximation.
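The question's code isn't shown here, but from the description above the loops seem to have roughly this shape. The following Python sketch is a hypothetical reconstruction (the variable names and exact bounds are assumptions); the number of doublings it performs is bounded above by n * log2(n), consistent with the O(n log2 n) bound:

    def count_doublings(n):
        # Hypothetical reconstruction: for each j from 1 to n, repeatedly
        # double a copy of j until it reaches or exceeds n.
        total = 0
        for j in range(1, n + 1):
            t = j
            while t < n:      # runs about log2(n/j) times for this j
                t *= 2
                total += 1
        return total          # bounded above by n * log2(n)

Summing log2(n/j) over j = 1..n and applying Stirling's approximation to n! shows the total number of doublings is actually Θ(n), which appears to be what the edit and the PS above are pointing at; the O(n log2 n) bound is valid but not tight.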

Concept confusion, advice on solving these 2 pieces of code

In O() notation, write the complexity of the following code:

    funct(x):
        if (x <= 0)
            return some value
        else
            ...

    For i = 1 to x
        call funct(i)

In O() notation, write the complexity of the following code:

    For x = 1 to N
        ...
I'm really lost at solving these 2 big O notation complexity problems, please help!
They both appear to me to be O(N).
The first one subtracts 1 when it calls itself; this means that, given N, it runs N times.
The second one divides N by 2, but Big-O describes what happens as N grows arbitrarily large, and at that scale dividing by 2 makes no asymptotic difference. So while it is nominally O(N/2), that reduces to O(N).
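The snippets above arrived garbled, but the first case as the answer describes it (a function that calls itself with its argument minus 1) can be sketched like this in Python; the name funct and the returned value are assumptions, only the recursion pattern comes from the answer:

    def funct(x):
        # Subtracts 1 on each recursive call, so a call with N makes N calls
        # in total before hitting the base case: O(N).
        if x <= 0:
            return 0          # "some value"
        return funct(x - 1)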

Finding Big-O, Omega and theta

I've looked through the links, and I'm too braindead to understand the mechanical process of figuring them out. I understand the ideas of O, theta and omega, and I understand the "Rules". So let me work on this example with you guys to clear this up in my head :)
f(n) = 100n + log n
g(n) = n + (log n)^2
I need to find: whether f = O(g), or f = Ω(g), or both (in which case f = Θ(g))
So I know that 100n and n are of the same order, and log(n) grows more slowly than both. I just need to figure out if (log(n))^2 is slower or faster. But I can't really remember anything about logs. If log(n) is bigger, does that mean the number itself gets bigger or smaller?
Let me add that my real struggle is figuring out both Omega and Theta. By definition f = O(g) if there is a constant c that makes c*g(n) at least as big as f(n), and the reverse for Omega. But how do I actually test this?
You can usually figure it out from these rules:
1. Broadly, k < log(n)^k < n^k < k^n. You can replace k at each step with any positive number you want and it remains true for large enough n.
2. If x is big, then 1/x is very close to 0.
3. For positive x and y, x < y if and only if log(x) < log(y). (Sometimes taking logs can help with complicated and messy products.)
4. log(k^n) = n * log(k).
5. For O, Theta, and Omega, you can ignore everything except the biggest term that doesn't cancel out.
Rules 1 and 5 suffice for your specific questions. But learn all of the rules.
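A quick numeric illustration of rule 1 (a Python sketch; k = 2 and n = 1,000,000 are just example values, not from the post):

    import math

    n = 1_000_000
    k = 2
    # Rule 1 for large n:  k  <  log(n)^k  <  n^k  <  k^n
    print(k)                    # 2
    print(math.log(n) ** k)     # about 190.9
    print(n ** k)               # 1e12
    # 2**n is far too large to print, so compare natural logarithms instead:
    print(k * math.log(n))      # log(n**k), about 27.6
    print(n * math.log(k))      # log(k**n), about 693147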
You don't need to remember rules, but rather learn general principles.
Here, all you need to know is that log(n) is increasing and grows without limit, and the definition of big-O, namely f = O(g) if there's a c such that for all sufficiently large n, f(n) <= c * g(n). You might learn the fact about log by remembering that log(n) grows like the number of digits of n.
Can log^2(n) be O(log(n))? That would mean (using the definition of big-O) that log^2(n) <= c*log(n) for all sufficiently large n, so log^2(n)/log(n) <= c for sufficiently large n (*). But log^2(n)/log(n) = log(n), which grows without limit, so it can't be bounded by c. So log^2(n) is not O(log(n)).
Can log(n) be O(log^2(n))? Well, at some point log(n) > 1 (since it's increasing without limit), and from that point on, log(n) < log^2(n). That proves that log(n) = O(log^2(n)), with the constant c equal to 1.
(*) If you're being extra careful, you need to exclude the possibility that log(n) is zero infinitely often.
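For the concrete f and g in the question, a short numeric check (a Python sketch, not part of either answer) shows the ratio f(n)/g(n) levelling off near the constant 100, which is the behaviour you expect when neither function outgrows the other, i.e. when f = Θ(g):

    import math

    def f(n): return 100 * n + math.log(n)
    def g(n): return n + math.log(n) ** 2

    for n in (10, 10**3, 10**6, 10**9):
        print(n, f(n) / g(n))   # climbs towards 100 and stays bounded,
                                # suggesting f = O(g) and f = Omega(g)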

When analyzing the time complexity of an algorithm why do we drop the constant of the term with the largest degree

Suppose I have the following : T(n) = 5n^2 +2n
The asymptotic tight bound of this is Θ(n^2). I want to understand the reason behind dropping the 5; I understand why we ignore the lower-order terms.
Consult the definition of big-O.
Keeping things simple[*], let's define that a function g is O(f) if there exist constants C and M such that for n > M, 0 <= g(n) < Cf(n).
The presence of a positive constant multiplier in f doesn't affect this; just choose C appropriately. Your example T is O(n^2) by choosing a value of C greater than 5, and a value of M big enough that the +2n is irrelevant. For example, for n > 2 it's a fact that 5n^2 + 2n < 6n^2 (because n^2 > 2n), so with C = 6 and M = 2 we see that T(n) is O(n^2).
So it's true that T(n) is O(n^2), and also true that it's O(5n^2), and O(5n^2 + 2n). The most interesting of those facts is that it's O(n^2), since it's the simplest expression and the other two are logically equivalent. If we want to compare the complexities of different functions, then we want to use simple expressions.
For big-Theta just note that we can play the same trick when f and g are the other way around. The relation "g is Theta(f)" is an equivalence relation, so what are we going to choose as the representative member of the equivalence class of T? The simplest one.
[*] Keeping things less simple, we cope with negative numbers by using a limsup rather than a plain limit. My definition above is actually sufficient but not necessary.
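A one-line check (a Python sketch) of the C = 6, M = 2 witness chosen above:

    # 5n^2 + 2n < 6n^2 for every n > 2 (sampled up to 100,000).
    assert all(5 * n**2 + 2 * n < 6 * n**2 for n in range(3, 100_000))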
Everything comes back to the concept of "Order of Magnitude". Given something like
5n^2 +2n
You think that the 5 is significant; however, when you break it down and think about the numbers, the constant really doesn't matter (graph it and you will see why). For example, say n is 50.
5 * 50^2 + 2 * 50 --> 5 * 2500 + 2 * 50 --> 12,600
As you mentioned, the 2n term is insignificant compared to n^2. The same idea applies to the constant: you might think that 2,500 vs 12,500 is a big difference, but consider what happens if the algorithm were n^3 instead; you would then be looking at 12,600 vs 625,100.
So the factor that makes the most significant difference to the cost of the algorithm is the n^2 itself, not its constant.
The constant is dropped because it does not affect which complexity class the function belongs to.
If you have two functions f(n) = c1 * n^2 and g(n) = c2 * n^3, where c1 and c2 are constants, then no matter how large c1 is or how small c2 is, g(n) will ALWAYS overtake and outgrow f(n) beyond some value of n.
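A small sketch of that claim in Python (the particular constants are just for illustration):

    # f(n) = c1 * n^2 with a deliberately huge constant,
    # g(n) = c2 * n^3 with a deliberately tiny one.
    c1, c2 = 1000.0, 0.001

    def f(n): return c1 * n ** 2
    def g(n): return c2 * n ** 3

    # g overtakes f once n exceeds c1 / c2 = 1,000,000 and stays ahead after that.
    crossover = next(n for n in range(1, 2_000_000) if g(n) > f(n))
    print(crossover)   # 1000001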

Algorithm analysis , Big O Notation Homework

Hi, can someone explain to me how to solve this homework?
(n + log n) * 3^n = O(4^n / n)
I think it's the same as solving this inequality: (n + log n) * 3^n < c * (4^n / n).
Thanks in advance.
You need to find a c (as you mentioned in your problem), and you need to show that the inequality holds for all n greater than some k.
By showing that you can find the c and k in question, then by definition you've proved the big-O bound.
Conversely, if you can't find such a c and k, this is because the function on the left is not really upper-bounded by the function on the right. That shouldn't be the case here, though (and you'll know you're getting a more intuitive understanding of asymptotic growth/bounding when you can articulate exactly why).
By definition, f(n) = O(g(n)) is true if there exists a constant M such that |f(n)| < M|g(n)| for all sufficiently large n. In computer science the quantities involved are nonnegative, so this amounts to finding an M such that f(n) / g(n) < M.
This, in turn, can be done by proving that f(n) / g(n) has a finite limit as n increases towards infinity (by definition of a limit). Which, in the case of your (n^2 + n log n) * (3/4)^n, is pretty obvious by virtue of how exponential functions work.
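A quick numeric look at that ratio (a Python sketch): it rises briefly for small n and then decays towards 0, so it is certainly bounded by some M, which is all the big-O claim needs:

    import math

    # f(n)/g(n) = (n + log n) * 3^n / (4^n / n) = (n^2 + n*log(n)) * (3/4)^n
    def ratio(n):
        return (n ** 2 + n * math.log(n)) * 0.75 ** n

    for n in (1, 10, 50, 100, 200):
        print(n, ratio(n))   # peaks around n = 7 or 8, then shrinks towards 0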
